| Column | Type |
|---|---|
| pipeline_tag | stringclasses (48 values) |
| library_name | stringclasses (198 values) |
| text | stringlengths (1–900k) |
| metadata | stringlengths (2–438k) |
| id | stringlengths (5–122) |
| last_modified | null |
| tags | listlengths (1–1.84k) |
| sha | null |
| created_at | stringlengths (25–25) |
| arxiv | listlengths (0–201) |
| languages | listlengths (0–1.83k) |
| tags_str | stringlengths (17–9.34k) |
| text_str | stringlengths (0–389k) |
| text_lists | listlengths (0–722) |
| processed_texts | listlengths (1–723) |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-ukrainian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/uk.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-ukrainian")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/uk/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/uk/clips/"
def clean_sentence(sent):
    sent = sent.lower()
    # normalize apostrophes
    sent = sent.replace("’", "'")
    # replace non-alpha characters with space
    sent = "".join(ch if ch.isalpha() or ch == "'" else " " for ch in sent)
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()
    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 32.29 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
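For reference, a minimal sketch of loading those splits with the `datasets` library (illustrative only; the original training script is not included in this card):
```python
from datasets import load_dataset

# Combine the Common Voice Ukrainian train and validation splits,
# matching the data selection described above.
train_dataset = load_dataset("common_voice", "uk", split="train+validation")
print(train_dataset)
```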
|
{"language": "uk", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Ukrainian XLSR Wav2Vec2 Large 53 by Anton Lozhkov", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "common_voice", "args": "uk"}, "metrics": [{"type": "wer", "value": 32.29, "name": "Test WER"}]}]}]}
|
anton-l/wav2vec2-large-xlsr-53-ukrainian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"uk",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #uk #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Ukrainian using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
Test Result: 32.29 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Ukrainian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Ukrainian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Ukrainian test data of Common Voice.\n\n\n\nTest Result: 32.29 %",
"## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #uk #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Ukrainian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Ukrainian using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Ukrainian test data of Common Voice.\n\n\n\nTest Result: 32.29 %",
"## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
null | null |
This is a standalone Turkish Wav2Vec2 tokenizer config intended for use with `run_speech_recognition_ctc_streaming.py`
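A minimal loading sketch, assuming the standard `Wav2Vec2CTCTokenizer` API (this repo ships only tokenizer files, so the snippet below is illustrative):
```python
from transformers import Wav2Vec2CTCTokenizer

# Load the standalone Turkish CTC tokenizer from the Hub.
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("anton-l/wav2vec2-tokenizer-turkish")
print(sorted(tokenizer.get_vocab())[:10])
```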
|
{"license": "cc0-1.0"}
|
anton-l/wav2vec2-tokenizer-turkish
| null |
[
"license:cc0-1.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#license-cc0-1.0 #region-us
|
This is a standalone Turkish Wav2Vec2 tokenizer config intended for use with 'run_speech_recognition_ctc_streaming.py'
|
[] |
[
"TAGS\n#license-cc0-1.0 #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5806
- Wer: 0.3998
- Cer: 0.1053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
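For illustration, a rough `TrainingArguments` sketch matching the list above (the actual training script is not part of this card; field names follow the transformers Trainer API):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
# With 4 GPUs and gradient accumulation of 2, the effective train
# batch size is 8 * 4 * 2 = 64.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-common_voice-tr-ft",
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed precision
)
```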
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5369 | 17.0 | 500 | 0.6021 | 0.6366 | 0.1727 |
| 0.3542 | 34.0 | 1000 | 0.5265 | 0.4906 | 0.1278 |
| 0.1866 | 51.0 | 1500 | 0.5805 | 0.4768 | 0.1261 |
| 0.1674 | 68.01 | 2000 | 0.5336 | 0.4518 | 0.1186 |
| 0.19 | 86.0 | 2500 | 0.5676 | 0.4427 | 0.1151 |
| 0.0815 | 103.0 | 3000 | 0.5510 | 0.4268 | 0.1125 |
| 0.0545 | 120.0 | 3500 | 0.5608 | 0.4175 | 0.1099 |
| 0.0299 | 137.01 | 4000 | 0.5875 | 0.4222 | 0.1124 |
| 0.0267 | 155.0 | 4500 | 0.5882 | 0.4026 | 0.1063 |
| 0.025 | 172.0 | 5000 | 0.5806 | 0.3998 | 0.1053 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-common_voice-tr-ft", "results": []}]}
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft-100sh
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-common\_voice-tr-ft
==================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the COMMON\_VOICE - TR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5806
* Wer: 0.3998
* Cer: 0.1053
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft-stream
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3519
- Wer: 0.2927
- Cer: 0.0694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
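Given the `-stream` suffix, this checkpoint was presumably trained with the streaming variant of the fine-tuning script; a minimal sketch of streaming Turkish Common Voice with `datasets` (illustrative, not the actual training code):
```python
from datasets import load_dataset

# Stream examples lazily instead of downloading the full dataset.
train_stream = load_dataset("common_voice", "tr", split="train", streaming=True)
sample = next(iter(train_stream))
print(sample["sentence"])
```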
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.6768 | 9.01 | 500 | 0.4220 | 0.5143 | 0.1235 |
| 0.3801 | 19.01 | 1000 | 0.3303 | 0.4403 | 0.1055 |
| 0.3616 | 29.0 | 1500 | 0.3540 | 0.3716 | 0.0878 |
| 0.2334 | 39.0 | 2000 | 0.3666 | 0.3671 | 0.0842 |
| 0.3141 | 49.0 | 2500 | 0.3407 | 0.3373 | 0.0819 |
| 0.1926 | 58.01 | 3000 | 0.3886 | 0.3520 | 0.0867 |
| 0.1372 | 68.01 | 3500 | 0.3415 | 0.3189 | 0.0743 |
| 0.091 | 78.0 | 4000 | 0.3750 | 0.3164 | 0.0757 |
| 0.0893 | 88.0 | 4500 | 0.3559 | 0.2968 | 0.0712 |
| 0.095 | 98.0 | 5000 | 0.3519 | 0.2927 | 0.0694 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-common_voice-tr-ft-stream", "results": []}]}
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-common\_voice-tr-ft-stream
=========================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the COMMON\_VOICE - TR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3519
* Wer: 0.2927
* Cer: 0.0694
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft-500sh
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Wer: 0.4009
- Cer: 0.1032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5288 | 17.0 | 500 | 0.5099 | 0.5426 | 0.1432 |
| 0.2967 | 34.0 | 1000 | 0.5421 | 0.4746 | 0.1256 |
| 0.2447 | 51.0 | 1500 | 0.5347 | 0.4831 | 0.1267 |
| 0.122 | 68.01 | 2000 | 0.5854 | 0.4479 | 0.1161 |
| 0.1035 | 86.0 | 2500 | 0.5597 | 0.4457 | 0.1166 |
| 0.081 | 103.0 | 3000 | 0.5748 | 0.4250 | 0.1144 |
| 0.0849 | 120.0 | 3500 | 0.5598 | 0.4337 | 0.1145 |
| 0.0542 | 137.01 | 4000 | 0.5687 | 0.4223 | 0.1097 |
| 0.0318 | 155.0 | 4500 | 0.5904 | 0.4057 | 0.1052 |
| 0.0106 | 172.0 | 5000 | 0.5794 | 0.4009 | 0.1032 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "model-index": [{"name": "wav2vec2-xls-r-common_voice-tr-ft-500sh", "results": []}]}
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-common\_voice-tr-ft-500sh
========================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the COMMON\_VOICE - TR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5794
* Wer: 0.4009
* Cer: 0.1032
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #tr #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# Italian BERT Base Uncased on SQuAD-it
## Model description
This model is the uncased base version of the Italian BERT (which you may find at `dbmdz/bert-base-italian-uncased`), fine-tuned on the question answering task.
#### How to use
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='antoniocappiello/bert-base-italian-uncased-squad-it')
# nlp(context="D'Annunzio nacque nel 1863", question="Quando nacque D'Annunzio?")
# {'score': 0.9990354180335999, 'start': 22, 'end': 25, 'answer': '1863'}
```
## Training data
It has been trained on the question answering task using [SQuAD-it](http://sag.art.uniroma2.it/demo-software/squadit/), derived from the original SQuAD dataset and obtained through the semi-automatic translation of the SQuAD dataset in Italian.
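SQuAD-it is also available on the Hugging Face Hub, so the data can be loaded directly with `datasets` (a side note; the training run below used local JSON files):
```python
from datasets import load_dataset

# SQuAD-it ships train and test splits on the Hub.
squad_it = load_dataset("squad_it")
print(squad_it["train"][0]["question"])
```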
## Training procedure
```bash
python ./examples/run_squad.py \
  --model_type bert \
  --model_name_or_path dbmdz/bert-base-italian-uncased \
  --do_train \
  --do_eval \
  --train_file ./squad_it_uncased/train-v1.1.json \
  --predict_file ./squad_it_uncased/dev-v1.1.json \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./models/bert-base-italian-uncased-squad-it/ \
  --per_gpu_eval_batch_size=3 \
  --per_gpu_train_batch_size=3 \
  --do_lower_case
```
## Eval Results
| Metric | Value |
| ------ | --------- |
| **EM** | **63.8** |
| **F1** | **75.30** |
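For reference, EM and F1 can be computed with the `squad` metric; a minimal sketch (not the evaluation script behind the numbers above):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "0", "prediction_text": "1863"}]
references = [{"id": "0", "answers": {"text": ["1863"], "answer_start": [22]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# => {'exact_match': 100.0, 'f1': 100.0}
```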
## Comparison
| Model | EM | F1 score |
| -------------------------------------------------------------------------------------------------------------------------------- | --------- | --------- |
| [DrQA-it trained on SQuAD-it](https://github.com/crux82/squad-it/blob/master/README.md#evaluating-a-neural-model-over-squad-it) | 56.1 | 65.9 |
| This one | **63.8** | **75.30** |
|
{"language": "it", "widget": [{"text": "Quando nacque D'Annunzio?", "context": "D'Annunzio nacque nel 1863"}]}
|
antoniocappiello/bert-base-italian-uncased-squad-it
| null |
[
"transformers",
"pytorch",
"question-answering",
"it",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #question-answering #it #endpoints_compatible #has_space #region-us
|
Italian BERT Base Uncased on SQuAD-it
=====================================
Model description
-----------------
This model is the uncased base version of the Italian BERT (which you may find at 'dbmdz/bert-base-italian-uncased'), fine-tuned on the question answering task.
#### How to use
Training data
-------------
It has been trained on the question answering task using SQuAD-it, derived from the original SQuAD dataset and obtained through the semi-automatic translation of the SQuAD dataset in Italian.
Training procedure
------------------
Eval Results
------------
Comparison
----------
Model: DrQA-it trained on SQuAD-it, EM: 56.1, F1 score: 65.9
Model: This one, EM: 63.8, F1 score: 75.30
|
[
"#### How to use\n\n\nTraining data\n-------------\n\n\nIt has been trained on the question answering task using SQuAD-it, derived from the original SQuAD dataset and obtained through the semi-automatic translation of the SQuAD dataset in Italian.\n\n\nTraining procedure\n------------------\n\n\nEval Results\n------------\n\n\n\nComparison\n----------\n\n\nModel: DrQA-it trained on SQuAD-it, EM: 56.1, F1 score: 65.9\nModel: This one, EM: 63.8, F1 score: 75.30"
] |
[
"TAGS\n#transformers #pytorch #question-answering #it #endpoints_compatible #has_space #region-us \n",
"#### How to use\n\n\nTraining data\n-------------\n\n\nIt has been trained on the question answering task using SQuAD-it, derived from the original SQuAD dataset and obtained through the semi-automatic translation of the SQuAD dataset in Italian.\n\n\nTraining procedure\n------------------\n\n\nEval Results\n------------\n\n\n\nComparison\n----------\n\n\nModel: DrQA-it trained on SQuAD-it, EM: 56.1, F1 score: 65.9\nModel: This one, EM: 63.8, F1 score: 75.30"
] |
question-answering
|
transformers
|
# Question answering model for Estonian
This is a question answering model based on the XLM-Roberta base model. It was fine-tuned sequentially on:
1. English SQuAD v1.1
2. SQuAD v1.1 translated into Estonian
3. Small native Estonian dataset (800 samples)
The model has retained good multilingual properties and can be used for extractive QA tasks in all languages included in XLM-Roberta. The performance is best in the fine-tuning languages of Estonian and English.
| Tested on | F1 | EM |
| ----------- | --- | --- |
| EstQA test set | 82.4 | 75.3 |
| SQuAD v1.1 dev set | 86.9 | 77.9 |
The Estonian dataset used for fine-tuning and validating results is available at https://huggingface.co/datasets/anukaver/EstQA/ (version 1.0).
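The card does not include usage code; a minimal sketch with the transformers pipeline API (question and context here are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="anukaver/xlm-roberta-est-qa")
result = qa(
    question="Mis on Eesti pealinn?",     # "What is the capital of Estonia?"
    context="Eesti pealinn on Tallinn.",  # "The capital of Estonia is Tallinn."
)
print(result["answer"])
```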
|
{"tags": ["question-answering"], "datasets": ["squad", "anukaver/EstQA"]}
|
anukaver/xlm-roberta-est-qa
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"dataset:squad",
"dataset:anukaver/EstQA",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #question-answering #dataset-squad #dataset-anukaver/EstQA #endpoints_compatible #region-us
|
Question answering model for Estonian
=====================================
This is a question answering model based on the XLM-Roberta base model. It was fine-tuned sequentially on:
1. English SQuAD v1.1
2. SQuAD v1.1 translated into Estonian
3. Small native Estonian dataset (800 samples)
The model has retained good multilingual properties and can be used for extractive QA tasks in all languages included in XLM-Roberta. The performance is best in the fine-tuning languages of Estonian and English.
Tested on: EstQA test set, F1: 82.4, EM: 75.3
Tested on: SQuAD v1.1 dev set, F1: 86.9, EM: 77.9
The Estonian dataset used for fine-tuning and validating results is available in URL (version 1.0)
|
[] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #question-answering #dataset-squad #dataset-anukaver/EstQA #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9068
- Wer: 0.6679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.7027 | 21.05 | 400 | 3.4157 | 1.0 |
| 1.1638 | 42.1 | 800 | 1.3498 | 0.7461 |
| 0.2266 | 63.15 | 1200 | 1.6147 | 0.7273 |
| 0.1473 | 84.21 | 1600 | 1.6649 | 0.7108 |
| 0.1043 | 105.26 | 2000 | 1.7691 | 0.7090 |
| 0.0779 | 126.31 | 2400 | 1.8300 | 0.7009 |
| 0.0613 | 147.36 | 2800 | 1.8681 | 0.6916 |
| 0.0471 | 168.41 | 3200 | 1.8567 | 0.6875 |
| 0.0343 | 189.46 | 3600 | 1.9054 | 0.6840 |
| 0.0265 | 210.51 | 4000 | 1.9020 | 0.6786 |
| 0.0219 | 231.56 | 4400 | 1.9068 | 0.6679 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-as --dataset mozilla-foundation/common_voice_7_0 --config as --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-as"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "as", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "জাহাজত তো তিশকুৰলৈ যাব কিন্তু জহাজিটো আহিপনে"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 67 | 56.995 |
|
{"language": ["as"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-as", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "as"}, "metrics": [{"type": "wer", "value": 56.995, "name": "Test WER"}, {"type": "cer", "value": 20.39, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-as
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"as",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #as #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-as
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9068
* Wer: 0.6679
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.12
* num\_epochs: 240
### Training results
### Framework versions
* Transformers 4.16.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 7 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 240",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #as #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 240",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2473
- Wer: 0.3002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1589 | 3.48 | 400 | 3.0830 | 1.0 |
| 2.8921 | 6.96 | 800 | 2.6605 | 0.9982 |
| 1.3049 | 10.43 | 1200 | 0.5069 | 0.5707 |
| 1.1349 | 13.91 | 1600 | 0.4159 | 0.5041 |
| 1.0686 | 17.39 | 2000 | 0.3815 | 0.4746 |
| 0.999 | 20.87 | 2400 | 0.3541 | 0.4343 |
| 0.945 | 24.35 | 2800 | 0.3266 | 0.4132 |
| 0.9058 | 27.83 | 3200 | 0.2969 | 0.3771 |
| 0.8672 | 31.3 | 3600 | 0.2802 | 0.3553 |
| 0.8313 | 34.78 | 4000 | 0.2662 | 0.3380 |
| 0.8068 | 38.26 | 4400 | 0.2528 | 0.3181 |
| 0.7796 | 41.74 | 4800 | 0.2537 | 0.3073 |
| 0.7621 | 45.22 | 5200 | 0.2503 | 0.3036 |
| 0.7611 | 48.7 | 5600 | 0.2477 | 0.2991 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset mozilla-foundation/common_voice_8_0 --config bg --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-bg --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-bg"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "bg", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "и надутият му ката блоонкурем взе да се събира"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 30.07 | 21.195 |
|
{"language": ["bg"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Bulgarian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "bg"}, "metrics": [{"type": "wer", "value": 21.195, "name": "Test WER"}, {"type": "cer", "value": 4.786, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 32.667, "name": "Test WER"}, {"type": "cer", "value": 12.452, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "bg"}, "metrics": [{"type": "wer", "value": 31.03, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-bg
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"bg",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bg"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #bg #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Bulgarian
======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - BG dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2473
* Wer: 0.3002
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #bg #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Hausa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6094
- Wer: 0.5234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
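The cosine-with-restarts schedule above maps onto a scheduler transformers exposes directly; a minimal sketch (the optimizer and step counts are illustrative placeholders):
```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

params = [torch.nn.Parameter(torch.zeros(1))]  # placeholder parameters
optimizer = torch.optim.AdamW(params, lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1000,
    num_training_steps=6000,  # illustrative total step count
)
```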
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9599 | 6.56 | 400 | 2.8650 | 1.0 |
| 2.7357 | 13.11 | 800 | 2.7377 | 0.9951 |
| 1.3012 | 19.67 | 1200 | 0.6686 | 0.7111 |
| 1.0454 | 26.23 | 1600 | 0.5686 | 0.6137 |
| 0.9069 | 32.79 | 2000 | 0.5576 | 0.5815 |
| 0.82 | 39.34 | 2400 | 0.5502 | 0.5591 |
| 0.7413 | 45.9 | 2800 | 0.5970 | 0.5586 |
| 0.6872 | 52.46 | 3200 | 0.5817 | 0.5428 |
| 0.634 | 59.02 | 3600 | 0.5636 | 0.5314 |
| 0.6022 | 65.57 | 4000 | 0.5780 | 0.5229 |
| 0.5705 | 72.13 | 4400 | 0.6036 | 0.5323 |
| 0.5408 | 78.69 | 4800 | 0.6119 | 0.5336 |
| 0.5225 | 85.25 | 5200 | 0.6105 | 0.5270 |
| 0.5265 | 91.8 | 5600 | 0.6034 | 0.5231 |
| 0.5154 | 98.36 | 6000 | 0.6094 | 0.5234 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ha", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "kakin hade ya ke da kyautar"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 47.821 | 36.295 |
|
{"language": ["ha"], "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "XLS-R-300M - Hausa", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 36.295, "name": "Test WER"}, {"type": "cer", "value": 11.073, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-ha-cv8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ha"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ha #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Hausa
==================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6094
* Wer: 0.5234
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 13
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 100
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ha #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 100",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4156
- Wer: 0.7181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7703 | 2.72 | 400 | 2.2274 | 0.9259 |
| 0.6515 | 5.44 | 800 | 1.5812 | 0.7581 |
| 0.339 | 8.16 | 1200 | 2.0590 | 0.7825 |
| 0.2262 | 10.88 | 1600 | 2.0324 | 0.7603 |
| 0.1665 | 13.6 | 2000 | 2.1396 | 0.7481 |
| 0.1311 | 16.33 | 2400 | 2.2090 | 0.7379 |
| 0.1079 | 19.05 | 2800 | 2.3907 | 0.7612 |
| 0.0927 | 21.77 | 3200 | 2.5294 | 0.7478 |
| 0.0748 | 24.49 | 3600 | 2.5024 | 0.7452 |
| 0.0644 | 27.21 | 4000 | 2.4715 | 0.7307 |
| 0.0569 | 29.93 | 4400 | 2.4156 | 0.7181 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
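This card stops at the framework versions; a minimal inference sketch, assuming 16 kHz mono input (it mirrors the usage pattern of the sibling XLS-R cards in this collection):
```python
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anuragshas/wav2vec2-large-xls-r-300m-hi"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

# `speech` must be a 1-D float array sampled at 16 kHz.
speech = torch.zeros(16_000).numpy()  # placeholder: one second of silence
input_values = processor(speech, sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```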
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi", "results": []}]}
|
anuragshas/wav2vec2-large-xls-r-300m-hi
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-hi
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4156
* Wer: 0.7181
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5479
- Wer: 0.5740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.7378 | 18.18 | 400 | 3.5047 | 1.0 |
| 3.1707 | 36.36 | 800 | 2.6166 | 0.9912 |
| 1.4942 | 54.55 | 1200 | 0.5778 | 0.6927 |
| 1.2058 | 72.73 | 1600 | 0.5168 | 0.6362 |
| 1.0558 | 90.91 | 2000 | 0.5105 | 0.6069 |
| 0.9488 | 109.09 | 2400 | 0.5151 | 0.6089 |
| 0.8588 | 127.27 | 2800 | 0.5157 | 0.5989 |
| 0.7991 | 145.45 | 3200 | 0.5179 | 0.5740 |
| 0.7545 | 163.64 | 3600 | 0.5348 | 0.5740 |
| 0.7144 | 181.82 | 4000 | 0.5518 | 0.5724 |
| 0.7041 | 200.0 | 4400 | 0.5479 | 0.5740 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-mr --dataset mozilla-foundation/common_voice_8_0 --config mr --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-mr"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mr", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "या पानास लेखाचे स्वरूप यायला हावे"
```
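Continuing from the snippet above, the "Without LM" numbers in the table below correspond to plain greedy (argmax) CTC decoding. A minimal sketch, assuming the loaded processor is a `Wav2Vec2ProcessorWithLM` whose `.tokenizer` attribute exposes the underlying CTC tokenizer:

```python
# Hypothetical: greedy decoding without the LM, for comparison with the LM output
pred_ids = torch.argmax(logits, dim=-1)
greedy_transcription = processor.tokenizer.batch_decode(pred_ids)
```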
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 49.177 | 32.811 |
|
{"language": ["mr"], "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-mr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mr"}, "metrics": [{"type": "wer", "value": 32.811, "name": "Test WER"}, {"type": "cer", "value": 7.692, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-mr
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"mr",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mr"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #mr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-mr
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5479
* Wer: 0.5740
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
### Training results
### Framework versions
* Transformers 4.16.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #mr #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6618
- Wer: 0.5166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.0493 | 23.53 | 400 | 2.9728 | 1.0 |
| 0.5306 | 47.06 | 800 | 1.2895 | 0.6138 |
| 0.1253 | 70.59 | 1200 | 1.6854 | 0.5703 |
| 0.0763 | 94.12 | 1600 | 1.9433 | 0.5870 |
| 0.0552 | 117.65 | 2000 | 1.4393 | 0.5575 |
| 0.0382 | 141.18 | 2400 | 1.4665 | 0.5537 |
| 0.0286 | 164.71 | 2800 | 1.5441 | 0.5320 |
| 0.0212 | 188.24 | 3200 | 1.6502 | 0.5115 |
| 0.0168 | 211.76 | 3600 | 1.6411 | 0.5332 |
| 0.0129 | 235.29 | 4000 | 1.6618 | 0.5166 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-or --dataset mozilla-foundation/common_voice_7_0 --config or --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-or"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "or", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ପରରାଏ ବାଲା ଗସ୍ତି ଫାଣ୍ଡି ଗୋପାଳ ପରଠାରୁ ଦେଢ଼କଶ ଦୂର"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 51.92 | 47.186 |
|
{"language": ["or"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-or", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "or"}, "metrics": [{"type": "wer", "value": 47.186, "name": "Test WER"}, {"type": "cer", "value": 11.82, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-or
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"or",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"or"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #or #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-or
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6618
* Wer: 0.5166
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.12
* num\_epochs: 240
### Training results
### Framework versions
* Transformers 4.16.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 7 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 240",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #or #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 240",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Punjabi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2548
- Wer: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 120
- mixed_precision_training: Native AMP
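"Native AMP" refers to PyTorch's built-in automatic mixed precision; with the `transformers` Trainer this typically corresponds to passing `fp16=True`. A minimal, hypothetical sketch (placeholder output directory, not the exact training script):

```python
from transformers import TrainingArguments

# Hypothetical: "Native AMP" is usually enabled via fp16=True;
# warmup is given as a ratio of total training steps here
training_args = TrainingArguments(
    output_dir="./xls-r-300m-pa-in",
    fp16=True,
    warmup_ratio=0.12,
    num_train_epochs=120,
)
```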
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.4804 | 16.65 | 400 | 1.8461 | 1.0 |
| 0.474 | 33.33 | 800 | 1.1018 | 0.6624 |
| 0.1389 | 49.98 | 1200 | 1.1918 | 0.6103 |
| 0.0919 | 66.65 | 1600 | 1.1889 | 0.6058 |
| 0.0657 | 83.33 | 2000 | 1.2266 | 0.5931 |
| 0.0479 | 99.98 | 2400 | 1.2512 | 0.5902 |
| 0.0355 | 116.65 | 2800 | 1.2548 | 0.5677 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-pa-in --dataset mozilla-foundation/common_voice_7_0 --config pa-IN --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-pa-in"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "pa-IN", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ਉਨ੍ਹਾਂ ਨੇ ਸਾਰੇ ਤੇਅਰਵੇ ਵੱਖਰੀ ਕਿਸਮ ਦੇ ਕੀਤੇ ਹਨ"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 51.968 | 45.611 |
|
{"language": ["pa"], "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer"], "model-index": [{"name": "XLS-R-300M - Punjabi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": 45.611, "name": "Test WER"}, {"type": "cer", "value": 15.584, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-pa-in
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"pa",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pa"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #pa #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Punjabi
====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2548
* Wer: 0.5677
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.12
* num\_epochs: 120
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 7 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 120\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #pa #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 120\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ur-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1443
- Wer: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.6269 | 15.98 | 400 | 3.3246 | 1.0 |
| 3.0546 | 31.98 | 800 | 2.8148 | 0.9963 |
| 1.4589 | 47.98 | 1200 | 1.0237 | 0.6584 |
| 1.0911 | 63.98 | 1600 | 0.9524 | 0.5966 |
| 0.8879 | 79.98 | 2000 | 0.9827 | 0.5822 |
| 0.7467 | 95.98 | 2400 | 0.9923 | 0.5840 |
| 0.6427 | 111.98 | 2800 | 0.9988 | 0.5714 |
| 0.5685 | 127.98 | 3200 | 1.0872 | 0.5807 |
| 0.5068 | 143.98 | 3600 | 1.1194 | 0.5822 |
| 0.463 | 159.98 | 4000 | 1.1138 | 0.5692 |
| 0.4212 | 175.98 | 4400 | 1.1232 | 0.5714 |
| 0.4056 | 191.98 | 4800 | 1.1443 | 0.5677 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ur-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ur --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-ur-cv8"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ur", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "اب نے ٹ پیس ان لیتے ہیں"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 52.146 | 42.376 |
|
{"language": ["ur"], "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ur-cv8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 42.376, "name": "Test WER"}, {"type": "cer", "value": 18.18, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-large-xls-r-300m-ur-cv8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ur"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-ur-cv8
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1443
* Wer: 0.5677
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
### Training results
### Framework versions
* Transformers 4.16.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ur
This model is a fine-tuned version of [anuragshas/wav2vec2-large-xls-r-300m-ur](https://huggingface.co/anuragshas/wav2vec2-large-xls-r-300m-ur) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0508
- Wer: 0.7328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0719 | 66.67 | 400 | 1.8510 | 0.7432 |
| 0.0284 | 133.33 | 800 | 2.0088 | 0.7415 |
| 0.014 | 200.0 | 1200 | 2.0508 | 0.7328 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ur", "results": []}]}
|
anuragshas/wav2vec2-large-xls-r-300m-ur
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-ur
============================
This model is a fine-tuned version of anuragshas/wav2vec2-large-xls-r-300m-ur on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0508
* Wer: 0.7328
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.12
* num\_epochs: 240
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 240",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.12\n* num\\_epochs: 240",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Dhivehi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "dv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dhivehi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "dv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-dv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\،\.\؟\–\'\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 55.68 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
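A minimal sketch of how that combined split can be loaded with the `datasets` library (the exact training script is not part of this card):

```python
from datasets import load_dataset, concatenate_datasets

# Combine the Common Voice Dhivehi train and validation splits;
# split="train+validation" is an equivalent shorthand
train_split = load_dataset("common_voice", "dv", split="train")
valid_split = load_dataset("common_voice", "dv", split="validation")
train_dataset = concatenate_datasets([train_split, valid_split])
```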
|
{"language": "dv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Dhivehi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice dv", "type": "common_voice", "args": "dv"}, "metrics": [{"type": "wer", "value": 55.68, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-dv
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dv",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"dv"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Dhivehi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Dhivehi using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Dhivehi test data of Common Voice.
Test Result: 55.68 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Dhivehi\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Dhivehi using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Dhivehi test data of Common Voice.\n\nTest Result: 55.68 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dv #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Dhivehi\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Dhivehi using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Dhivehi test data of Common Voice.\n\nTest Result: 55.68 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Sorbian, Upper
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sorbian, Upper using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sorbian, Upper test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-hsb")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 65.05 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "hsb", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Sorbian, Upper", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hsb", "type": "common_voice", "args": "hsb"}, "metrics": [{"type": "wer", "value": 65.05, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-hsb
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hsb",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hsb"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Sorbian, Upper
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Sorbian, Upper using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Sorbian, Upper test data of Common Voice.
Test Result: 65.05 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Sorbian, Upper\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Sorbian, Upper using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Sorbian, Upper test data of Common Voice.\n\nTest Result: 65.05 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Sorbian, Upper\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Sorbian, Upper using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Sorbian, Upper test data of Common Voice.\n\nTest Result: 65.05 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Interlingua
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Interlingua using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ia", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Interlingua test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ia", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-ia")
model.to("cuda")
chars_to_ignore_regex = '[\.\,\!\?\-\"\:\;\'\“\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 22.08 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "ia", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Interlingua", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ia", "type": "common_voice", "args": "ia"}, "metrics": [{"type": "wer", "value": 22.08, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-ia
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ia",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ia"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ia #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Interlingua
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Interlingua using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Interlingua test data of Common Voice.
Test Result: 22.08 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Interlingua\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Interlingua using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Interlingua test data of Common Voice.\n\nTest Result: 22.08 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ia #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Interlingua\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Interlingua using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Interlingua test data of Common Voice.\n\nTest Result: 22.08 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-odia")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.10 %
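Continuing from the evaluation snippet above, character error rate can be computed from the same predictions (a hedged sketch; it assumes a `datasets` version that ships the `cer` metric):

```python
cer = load_metric("cer")
print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```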
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "or", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Odia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice or", "type": "common_voice", "args": "or"}, "metrics": [{"type": "wer", "value": 57.1, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-odia
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"or",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"or"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #or #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
Test Result: 57.10 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Odia\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Odia test data of Common Voice.\n\nTest Result: 57.10 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #or #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Odia\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Odia test data of Common Voice.\n\nTest Result: 57.10 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Romansh Sursilv
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romansh Sursilv using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
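# Common Voice clips are recorded at 48 kHz, so downsample to the 16 kHz the model expects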
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romansh Sursilv test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-sursilv")
model.to("cuda")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.78 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "rm-sursilv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Romansh Sursilv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice rm-sursilv", "type": "common_voice", "args": "rm-sursilv"}, "metrics": [{"type": "wer", "value": 25.78, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-rm-sursilv
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"rm-sursilv"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Romansh Sursilv
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Romansh Sursilv using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Romansh Sursilv test data of Common Voice.
Test Result: 25.78 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Romansh Sursilv\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Romansh Sursilv using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Romansh Sursilv test data of Common Voice.\n\nTest Result: 25.78 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Romansh Sursilv\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Romansh Sursilv using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Romansh Sursilv test data of Common Voice.\n\nTest Result: 25.78 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Romansh Vallader
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romansh Vallader using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "rm-vallader", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-vallader")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-vallader")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romansh Vallader test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "rm-vallader", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-vallader")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-rm-vallader")
model.to("cuda")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
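    # Normalize typographic apostrophes and quotes before stripping punctuation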
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.89 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "rm-vallader", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Romansh Vallader", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice rm-vallader", "type": "common_voice", "args": "rm-vallader"}, "metrics": [{"type": "wer", "value": 32.89, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-rm-vallader
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"rm-vallader"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Romansh Vallader
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Romansh Vallader using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Romansh Vallader test data of Common Voice.
Test Result: 32.89 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Romansh Vallader\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Romansh Vallader using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Romansh Vallader test data of Common Voice.\n\nTest Result: 32.89 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Romansh Vallader\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Romansh Vallader using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Romansh Vallader test data of Common Voice.\n\nTest Result: 32.89 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Sakha
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Sakha using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sah", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Sakha test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sah", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-sah")
model.to("cuda")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\”\„\–\…\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 38.04 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "sah", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Sakha", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sah", "type": "common_voice", "args": "sah"}, "metrics": [{"type": "wer", "value": 38.04, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-sah
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sah",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sah"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Sakha
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Sakha using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Sakha test data of Common Voice.
Test Result: 38.04 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Sakha\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Sakha using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Sakha test data of Common Voice.\n\nTest Result: 38.04 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Sakha\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Sakha using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Sakha test data of Common Voice.\n\nTest Result: 38.04 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Telugu
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Telugu using the [OpenSLR SLR66](http://openslr.org/66/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import pandas as pd
# Evaluation notebook contains the procedure to download the data
df = pd.read_csv("/content/te/test.tsv", sep="\t")
df["path"] = "/content/te/clips/" + df["path"]
test_dataset = Dataset.from_pandas(df)
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
```python
import torch
import torchaudio
from datasets import Dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from sklearn.model_selection import train_test_split
import pandas as pd
# Evaluation notebook contains the procedure to download the data
df = pd.read_csv("/content/te/test.tsv", sep="\t")
df["path"] = "/content/te/clips/" + df["path"]
test_dataset = Dataset.from_pandas(df)
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-telugu")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\_\;\:\"\“\%\‘\”\।\’\'\&]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def normalizer(text):
    # Custom normalizer for the Telugu transcripts
    # turn escaped "\n" sequences into real newlines
    text = text.replace("\\n","\n")
    # collapse repeated whitespace
    text = ' '.join(text.split())
    # drop Latin characters
    text = re.sub(r'''([a-z]+)''','',text,flags=re.IGNORECASE)
    # spell out "%" as శాతం ("percent")
    text = re.sub(r'''%'''," శాతం ", text)
    # treat slashes, hyphens and underscores as word separators
    text = re.sub(r'''(/|-|_)'''," ", text)
text = re.sub("ై","ై", text)
text = text.strip()
return text
def speech_file_to_array_fn(batch):
batch["sentence"] = normalizer(batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()+ " "
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 44.98%
## Training
70% of the OpenSLR Telugu dataset was used for training.
Train Split of annotations is [here](https://www.dropbox.com/s/xqc0wtour7f9h4c/train.tsv)
Test Split of annotations is [here](https://www.dropbox.com/s/qw1uy63oj4qdiu4/test.tsv)
Training Data Preparation notebook can be found [here](https://colab.research.google.com/drive/1_VR1QtY9qoiabyXBdJcOI29-xIKGdIzU?usp=sharing)
Training notebook can be found [here](https://colab.research.google.com/drive/14N-j4m0Ng_oktPEBN5wiUhDDbyrKYt8I?usp=sharing)
Evaluation notebook is [here](https://colab.research.google.com/drive/1SLEvbTWBwecIRTNqpQ0fFTqmr1-7MnSI?usp=sharing)
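The evaluation snippet above imports `train_test_split` without using it, which suggests the split was produced with scikit-learn. A minimal sketch (assumed; the authoritative procedure is in the linked notebooks, and the `line_index.tsv` file name and `random_state` are hypothetical):
```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical reconstruction: 70/30 split of the OpenSLR SLR66 annotations
df = pd.read_csv("/content/te/line_index.tsv", sep="\t", names=["path", "sentence"])
train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)
train_df.to_csv("/content/te/train.tsv", sep="\t", index=False)
test_df.to_csv("/content/te/test.tsv", sep="\t", index=False)
```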
|
{"language": "te", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["openslr"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Telugu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR te", "type": "openslr", "args": "te"}, "metrics": [{"type": "wer", "value": 44.98, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-telugu
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"te",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"te"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #te #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-53-Telugu
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Telugu using the OpenSLR SLR66 dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
Test Result: 44.98%
## Training
70% of the OpenSLR Telugu dataset was used for training.
Train Split of annotations is here
Test Split of annotations is here
Training Data Preparation notebook can be found here
Training notebook can be found here
Evaluation notebook is here
|
[
"# Wav2Vec2-Large-XLSR-53-Telugu\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Telugu using the OpenSLR SLR66 dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\n\nTest Result: 44.98%",
"## Training\n70% of the OpenSLR Telugu dataset was used for training.\n\nTrain Split of annotations is here\n\nTest Split of annotations is here\n\nTraining Data Preparation notebook can be found here\n\nTraining notebook can be foundhere\n\nEvaluation notebook is here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #te #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Telugu\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Telugu using the OpenSLR SLR66 dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\n\nTest Result: 44.98%",
"## Training\n70% of the OpenSLR Telugu dataset was used for training.\n\nTrain Split of annotations is here\n\nTest Split of annotations is here\n\nTraining Data Preparation notebook can be found here\n\nTraining notebook can be foundhere\n\nEvaluation notebook is here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-53-vietnamese")
model.to("cuda")
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 66.78 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "vi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Vietnamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice vi", "type": "common_voice", "args": "vi"}, "metrics": [{"type": "wer", "value": 66.78, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-53-vietnamese
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"vi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #vi #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Vietnamese using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
Test Result: 66.78 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Vietnamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Vietnamese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice.\n\nTest Result: 66.78 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #vi #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Vietnamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Vietnamese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice.\n\nTest Result: 66.78 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\”\\়\\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
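    # Normalize typographic apostrophes before removing the ignored characters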
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.63 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "as", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Assamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice as", "type": "common_voice", "args": "as"}, "metrics": [{"type": "wer", "value": 69.63, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-large-xlsr-as
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"as",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #as #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Assamese using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
Test Result: 69.63 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Assamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Assamese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Assamese test data of Common Voice.\n\nTest Result: 69.63 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #as #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Assamese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Assamese using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Assamese test data of Common Voice.\n\nTest Result: 69.63 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hi-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6780
- Wer: 0.3670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
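For illustration, a minimal sketch (assumed, not the author's actual training script) of how these settings map onto `transformers.TrainingArguments`; the `output_dir` is hypothetical:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-hi-cv8",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 8 x 4 = 32 effective batch size
    lr_scheduler_type="linear",
    warmup_steps=1500,
    num_train_epochs=50.0,
    fp16=True,  # Native AMP mixed precision
)
```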
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.514 | 2.07 | 400 | 1.4589 | 0.8531 |
| 1.4289 | 4.15 | 800 | 0.8940 | 0.6475 |
| 1.276 | 6.22 | 1200 | 0.7743 | 0.6089 |
| 1.2213 | 8.29 | 1600 | 0.6919 | 0.4973 |
| 1.1522 | 10.36 | 2000 | 0.6635 | 0.4588 |
| 1.0914 | 12.44 | 2400 | 0.6839 | 0.4586 |
| 1.0499 | 14.51 | 2800 | 0.7151 | 0.4467 |
| 1.0238 | 16.58 | 3200 | 0.6824 | 0.4436 |
| 0.9963 | 18.65 | 3600 | 0.6872 | 0.4437 |
| 0.9728 | 20.73 | 4000 | 0.7047 | 0.4244 |
| 0.9373 | 22.8 | 4400 | 0.6569 | 0.4189 |
| 0.9028 | 24.87 | 4800 | 0.6623 | 0.4094 |
| 0.8759 | 26.94 | 5200 | 0.6723 | 0.4152 |
| 0.8824 | 29.02 | 5600 | 0.6467 | 0.4017 |
| 0.8371 | 31.09 | 6000 | 0.6911 | 0.4080 |
| 0.8205 | 33.16 | 6400 | 0.7145 | 0.4063 |
| 0.7837 | 35.23 | 6800 | 0.7037 | 0.3930 |
| 0.7708 | 37.31 | 7200 | 0.6925 | 0.3840 |
| 0.7359 | 39.38 | 7600 | 0.7034 | 0.3829 |
| 0.7153 | 41.45 | 8000 | 0.7030 | 0.3794 |
| 0.7127 | 43.52 | 8400 | 0.6823 | 0.3761 |
| 0.6884 | 45.6 | 8800 | 0.6854 | 0.3711 |
| 0.6835 | 47.67 | 9200 | 0.6723 | 0.3665 |
| 0.6703 | 49.74 | 9600 | 0.6773 | 0.3668 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
anuragshas/wav2vec2-xls-r-1b-hi-cv8
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hi #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6780
* Wer: 0.3670
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1500
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hi #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-1B - Hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6921
- Wer: 0.3547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0674 | 2.07 | 400 | 1.3411 | 0.8835 |
| 1.324 | 4.15 | 800 | 0.9311 | 0.7142 |
| 1.2023 | 6.22 | 1200 | 0.8060 | 0.6170 |
| 1.1573 | 8.29 | 1600 | 0.7415 | 0.4972 |
| 1.1117 | 10.36 | 2000 | 0.7248 | 0.4588 |
| 1.0672 | 12.44 | 2400 | 0.6729 | 0.4350 |
| 1.0336 | 14.51 | 2800 | 0.7117 | 0.4346 |
| 1.0025 | 16.58 | 3200 | 0.7019 | 0.4272 |
| 0.9578 | 18.65 | 3600 | 0.6792 | 0.4118 |
| 0.9272 | 20.73 | 4000 | 0.6863 | 0.4156 |
| 0.9321 | 22.8 | 4400 | 0.6535 | 0.3972 |
| 0.8802 | 24.87 | 4800 | 0.6766 | 0.3906 |
| 0.844 | 26.94 | 5200 | 0.6782 | 0.3949 |
| 0.8387 | 29.02 | 5600 | 0.6916 | 0.3921 |
| 0.8042 | 31.09 | 6000 | 0.6806 | 0.3797 |
| 0.793 | 33.16 | 6400 | 0.7120 | 0.3831 |
| 0.7567 | 35.23 | 6800 | 0.6862 | 0.3808 |
| 0.7463 | 37.31 | 7200 | 0.6893 | 0.3709 |
| 0.7053 | 39.38 | 7600 | 0.7096 | 0.3701 |
| 0.6906 | 41.45 | 8000 | 0.6921 | 0.3676 |
| 0.6891 | 43.52 | 8400 | 0.7167 | 0.3663 |
| 0.658 | 45.6 | 8800 | 0.6833 | 0.3580 |
| 0.6576 | 47.67 | 9200 | 0.6914 | 0.3569 |
| 0.6358 | 49.74 | 9600 | 0.6922 | 0.3551 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-1b-hi-with-lm --dataset mozilla-foundation/common_voice_8_0 --config hi --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-1b-hi-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "hi", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "तुम्हारे पास तीन महीने बचे हैं"
```
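Note: for this `-with-lm` checkpoint, `AutoProcessor` resolves to a `Wav2Vec2ProcessorWithLM`, so `batch_decode` runs beam-search decoding with the bundled n-gram language model (via `pyctcdecode`) rather than plain greedy decoding; this is what closes the WER gap shown below.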
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 26.209 | 15.899 |
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "XLS-R-1B - Hindi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 15.899, "name": "Test WER"}, {"type": "cer", "value": 5.83, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-xls-r-1b-hi-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
XLS-R-1B - Hindi
================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6921
* Wer: 0.3547
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1500
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hi-cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5878
- Wer: 0.3419
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.9859 | 2.72 | 400 | 1.1663 | 0.7948 |
| 1.2969 | 5.44 | 800 | 0.7725 | 0.6562 |
| 1.1954 | 8.16 | 1200 | 0.5940 | 0.4904 |
| 1.164 | 10.88 | 1600 | 0.5338 | 0.4316 |
| 1.1464 | 13.6 | 2000 | 0.5432 | 0.4226 |
| 1.1553 | 16.33 | 2400 | 0.5471 | 0.4260 |
| 1.0985 | 19.05 | 2800 | 0.5290 | 0.4076 |
| 1.0421 | 21.77 | 3200 | 0.5672 | 0.4181 |
| 0.9831 | 24.49 | 3600 | 0.5741 | 0.4141 |
| 0.9827 | 27.21 | 4000 | 0.5754 | 0.4179 |
| 0.9669 | 29.93 | 4400 | 0.5310 | 0.3889 |
| 0.9496 | 32.65 | 4800 | 0.5649 | 0.4062 |
| 0.9112 | 35.37 | 5200 | 0.5738 | 0.3926 |
| 0.8838 | 38.1 | 5600 | 0.5232 | 0.3768 |
| 0.8666 | 40.81 | 6000 | 0.5510 | 0.3852 |
| 0.8366 | 43.54 | 6400 | 0.5436 | 0.3837 |
| 0.7957 | 46.26 | 6800 | 0.5337 | 0.3775 |
| 0.7834 | 48.98 | 7200 | 0.5611 | 0.3844 |
| 0.7685 | 51.7 | 7600 | 0.5710 | 0.4008 |
| 0.7431 | 54.42 | 8000 | 0.5636 | 0.3726 |
| 0.7353 | 57.14 | 8400 | 0.5937 | 0.3836 |
| 0.7001 | 59.86 | 8800 | 0.5815 | 0.3858 |
| 0.6799 | 62.58 | 9200 | 0.5862 | 0.3696 |
| 0.6459 | 65.31 | 9600 | 0.6181 | 0.3762 |
| 0.6121 | 68.03 | 10000 | 0.5637 | 0.3590 |
| 0.5942 | 70.75 | 10400 | 0.6374 | 0.3882 |
| 0.5769 | 73.47 | 10800 | 0.6015 | 0.3640 |
| 0.5689 | 76.19 | 11200 | 0.5669 | 0.3508 |
| 0.5461 | 78.91 | 11600 | 0.5967 | 0.3621 |
| 0.5286 | 81.63 | 12000 | 0.5840 | 0.3605 |
| 0.5057 | 84.35 | 12400 | 0.5848 | 0.3489 |
| 0.482 | 87.07 | 12800 | 0.5860 | 0.3488 |
| 0.4655 | 89.79 | 13200 | 0.5780 | 0.3453 |
| 0.4523 | 92.52 | 13600 | 0.6150 | 0.3532 |
| 0.4422 | 95.24 | 14000 | 0.5930 | 0.3452 |
| 0.4436 | 97.96 | 14400 | 0.5867 | 0.3428 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-1b-hi --dataset mozilla-foundation/common_voice_7_0 --config hi --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-1b-hi"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "hi", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "तुम्हारे पास तीन महीने बचे हैं"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 28.942 | 18.504 |
|
{"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-xls-r-1b-hi-cv7", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 18.504, "name": "Test WER"}, {"type": "cer", "value": 6.655, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-xls-r-1b-hi
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hi"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-1b-hi-cv7
========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - HI dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5878
* Wer: 0.3419
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_7\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 7 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_7\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 7 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Latvian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - LV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Wer: 0.1705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.489 | 2.56 | 400 | 3.3590 | 1.0 |
| 2.9903 | 5.13 | 800 | 2.9704 | 1.0001 |
| 1.6712 | 7.69 | 1200 | 0.6179 | 0.6566 |
| 1.2635 | 10.26 | 1600 | 0.3176 | 0.4531 |
| 1.0819 | 12.82 | 2000 | 0.2517 | 0.3508 |
| 1.0136 | 15.38 | 2400 | 0.2257 | 0.3124 |
| 0.9625 | 17.95 | 2800 | 0.1975 | 0.2311 |
| 0.901 | 20.51 | 3200 | 0.1986 | 0.2097 |
| 0.8842 | 23.08 | 3600 | 0.1904 | 0.2039 |
| 0.8542 | 25.64 | 4000 | 0.1847 | 0.1981 |
| 0.8244 | 28.21 | 4400 | 0.1805 | 0.1847 |
| 0.7689 | 30.77 | 4800 | 0.1736 | 0.1832 |
| 0.7825 | 33.33 | 5200 | 0.1698 | 0.1821 |
| 0.7817 | 35.9 | 5600 | 0.1758 | 0.1803 |
| 0.7488 | 38.46 | 6000 | 0.1663 | 0.1760 |
| 0.7171 | 41.03 | 6400 | 0.1636 | 0.1721 |
| 0.7222 | 43.59 | 6800 | 0.1663 | 0.1729 |
| 0.7156 | 46.15 | 7200 | 0.1633 | 0.1715 |
| 0.7121 | 48.72 | 7600 | 0.1666 | 0.1718 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config lv --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config lv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
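
The `--chunk_length_s 5.0 --stride_length_s 1.0` flags split long recordings into overlapping windows before decoding. A rough equivalent with the `transformers` ASR pipeline (a sketch; `audio.wav` is a placeholder path and a reasonably recent `transformers` version is assumed):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm")
# Decode 5 s windows with 1 s of overlapping context on each side,
# then stitch the per-chunk transcriptions back together.
print(asr("audio.wav", chunk_length_s=5.0, stride_length_s=1.0))
```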
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "lv", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "domāju ka viņam viss labi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 16.997 | 9.633 |
|
{"language": ["lv"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Latvian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "lv"}, "metrics": [{"type": "wer", "value": 9.633, "name": "Test WER"}, {"type": "cer", "value": 2.614, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "lv"}, "metrics": [{"type": "wer", "value": 36.11, "name": "Test WER"}, {"type": "cer", "value": 14.244, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "lv"}, "metrics": [{"type": "wer", "value": 44.12, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"lv",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"lv"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #lv #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Latvian
====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - LV dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1660
* Wer: 0.1705
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #lv #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mr-cv8-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6693
- Wer: 0.5921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 500.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.9504 | 18.18 | 400 | 4.6730 | 1.0 |
| 3.3766 | 36.36 | 800 | 3.3464 | 1.0 |
| 3.1128 | 54.55 | 1200 | 3.0177 | 0.9980 |
| 1.7966 | 72.73 | 1600 | 0.8733 | 0.8039 |
| 1.4085 | 90.91 | 2000 | 0.5555 | 0.6458 |
| 1.1731 | 109.09 | 2400 | 0.4930 | 0.6438 |
| 1.0271 | 127.27 | 2800 | 0.4780 | 0.6093 |
| 0.9045 | 145.45 | 3200 | 0.4647 | 0.6578 |
| 0.807 | 163.64 | 3600 | 0.4505 | 0.5925 |
| 0.741 | 181.82 | 4000 | 0.4746 | 0.6025 |
| 0.6706 | 200.0 | 4400 | 0.5004 | 0.5844 |
| 0.6186 | 218.18 | 4800 | 0.4984 | 0.5997 |
| 0.5508 | 236.36 | 5200 | 0.5298 | 0.5636 |
| 0.5123 | 254.55 | 5600 | 0.5410 | 0.5110 |
| 0.4623 | 272.73 | 6000 | 0.5591 | 0.5383 |
| 0.4281 | 290.91 | 6400 | 0.5775 | 0.5600 |
| 0.4045 | 309.09 | 6800 | 0.5924 | 0.5580 |
| 0.3651 | 327.27 | 7200 | 0.5671 | 0.5684 |
| 0.343 | 345.45 | 7600 | 0.6083 | 0.5945 |
| 0.3085 | 363.64 | 8000 | 0.6243 | 0.5728 |
| 0.2941 | 381.82 | 8400 | 0.6245 | 0.5580 |
| 0.2735 | 400.0 | 8800 | 0.6458 | 0.5804 |
| 0.262 | 418.18 | 9200 | 0.6566 | 0.5824 |
| 0.2578 | 436.36 | 9600 | 0.6558 | 0.5965 |
| 0.2388 | 454.55 | 10000 | 0.6598 | 0.5993 |
| 0.2328 | 472.73 | 10400 | 0.6700 | 0.6041 |
| 0.2286 | 490.91 | 10800 | 0.6684 | 0.5957 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
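
This card ships without a usage snippet. A minimal inference sketch in the style of the sibling cards, assuming the checkpoint is published as `anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm` and, as its name suggests, bundles an n-gram LM decoder:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm"
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "mr", split="test", streaming=True, use_auth_token=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
# The LM-backed processor decodes from the raw logits rather than argmax ids
print(processor.batch_decode(logits.numpy()).text)
```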
|
{"language": ["mr"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
anuragshas/wav2vec2-xls-r-300m-mr-cv8-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"mr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mr"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6693
* Wer: 0.5921
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 500.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 500.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #mr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 500.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1895
- Wer: 0.1984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4219 | 3.6 | 400 | 3.3127 | 1.0 |
| 3.0399 | 7.21 | 800 | 3.0330 | 1.0 |
| 1.5756 | 10.81 | 1200 | 0.6108 | 0.5724 |
| 1.0995 | 14.41 | 1600 | 0.3091 | 0.3154 |
| 0.9639 | 18.02 | 2000 | 0.2596 | 0.2841 |
| 0.9032 | 21.62 | 2400 | 0.2270 | 0.2514 |
| 0.8145 | 25.23 | 2800 | 0.2172 | 0.2483 |
| 0.7845 | 28.83 | 3200 | 0.2084 | 0.2333 |
| 0.7694 | 32.43 | 3600 | 0.1974 | 0.2234 |
| 0.7333 | 36.04 | 4000 | 0.2020 | 0.2185 |
| 0.693 | 39.64 | 4400 | 0.1947 | 0.2148 |
| 0.6802 | 43.24 | 4800 | 0.1960 | 0.2102 |
| 0.667 | 46.85 | 5200 | 0.1904 | 0.2072 |
| 0.6486 | 50.45 | 5600 | 0.1881 | 0.2009 |
| 0.6339 | 54.05 | 6000 | 0.1877 | 0.1989 |
| 0.6254 | 57.66 | 6400 | 0.1893 | 0.2003 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config mt --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mt", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "għadu jilagħbu ċirku tant bilfondi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 19.853 | 15.967 |
|
{"language": ["mt"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer"], "model-index": [{"name": "XLS-R-300M - Maltese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mt"}, "metrics": [{"type": "wer", "value": 15.967, "name": "Test WER"}, {"type": "cer", "value": 3.657, "name": "Test CER"}]}]}]}
|
anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"mt",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mt"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #mt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Maltese
====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MT dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1895
* Wer: 0.1984
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 60.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #mt #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-pa-IN-cv8-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6864
- Wer: 0.6707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.3322 | 14.81 | 400 | 3.7450 | 1.0 |
| 3.2662 | 29.63 | 800 | 3.2571 | 0.9996 |
| 1.6408 | 44.44 | 1200 | 0.9098 | 0.8162 |
| 1.2289 | 59.26 | 1600 | 0.6757 | 0.7099 |
| 1.0551 | 74.07 | 2000 | 0.6417 | 0.7044 |
| 0.966 | 88.89 | 2400 | 0.6365 | 0.6789 |
| 0.8713 | 103.7 | 2800 | 0.6617 | 0.6954 |
| 0.8055 | 118.52 | 3200 | 0.6371 | 0.6762 |
| 0.7489 | 133.33 | 3600 | 0.6798 | 0.6911 |
| 0.7073 | 148.15 | 4000 | 0.6567 | 0.6731 |
| 0.6609 | 162.96 | 4400 | 0.6742 | 0.6840 |
| 0.6435 | 177.78 | 4800 | 0.6862 | 0.6633 |
| 0.6282 | 192.59 | 5200 | 0.6865 | 0.6731 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
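
The same usage pattern as the sibling cards applies here. A sketch, assuming the checkpoint is published as `anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm` and ships an LM decoder as its name suggests:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm"
sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "pa-IN", split="test", streaming=True, use_auth_token=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
print(processor.batch_decode(logits.numpy()).text)
```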
|
{"language": ["pa-IN"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
anuragshas/wav2vec2-xls-r-300m-pa-IN-cv8-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pa-IN"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6864
* Wer: 0.6707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Slovak
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3067
- Wer: 0.2678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.175 | 2.41 | 400 | 4.6909 | 1.0 |
| 3.3785 | 4.82 | 800 | 3.3080 | 1.0 |
| 2.6964 | 7.23 | 1200 | 2.0651 | 1.1055 |
| 1.3008 | 9.64 | 1600 | 0.5845 | 0.6207 |
| 1.1185 | 12.05 | 2000 | 0.4195 | 0.4193 |
| 1.0252 | 14.46 | 2400 | 0.3824 | 0.3570 |
| 0.935 | 16.87 | 2800 | 0.3693 | 0.3462 |
| 0.8818 | 19.28 | 3200 | 0.3587 | 0.3318 |
| 0.8534 | 21.69 | 3600 | 0.3420 | 0.3180 |
| 0.8137 | 24.1 | 4000 | 0.3426 | 0.3130 |
| 0.7968 | 26.51 | 4400 | 0.3349 | 0.3102 |
| 0.7558 | 28.92 | 4800 | 0.3216 | 0.3019 |
| 0.7313 | 31.33 | 5200 | 0.3451 | 0.3060 |
| 0.7358 | 33.73 | 5600 | 0.3272 | 0.2967 |
| 0.718 | 36.14 | 6000 | 0.3315 | 0.2882 |
| 0.6991 | 38.55 | 6400 | 0.3299 | 0.2830 |
| 0.6529 | 40.96 | 6800 | 0.3140 | 0.2836 |
| 0.6225 | 43.37 | 7200 | 0.3128 | 0.2751 |
| 0.633 | 45.78 | 7600 | 0.3211 | 0.2774 |
| 0.5876 | 48.19 | 8000 | 0.3162 | 0.2764 |
| 0.588 | 50.6 | 8400 | 0.3082 | 0.2722 |
| 0.5915 | 53.01 | 8800 | 0.3120 | 0.2681 |
| 0.5798 | 55.42 | 9200 | 0.3133 | 0.2709 |
| 0.5736 | 57.83 | 9600 | 0.3086 | 0.2676 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config sk --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config sk --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => ""
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 26.707 | 18.609 |
|
{"language": ["sk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Slovak", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sk"}, "metrics": [{"type": "wer", "value": 18.609, "name": "Test WER"}, {"type": "cer", "value": 5.488, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 40.548, "name": "Test WER"}, {"type": "cer", "value": 17.733, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sk"}, "metrics": [{"type": "wer", "value": 44.1, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sk",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sk"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #sk #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Slovak
===================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SK dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3067
* Wer: 0.2678
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1500
* num\_epochs: 60.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #sk #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1500\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Slovenian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2578
- Wer: 0.2273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1829 | 4.88 | 400 | 3.1228 | 1.0 |
| 2.8675 | 9.76 | 800 | 2.8616 | 0.9993 |
| 1.583 | 14.63 | 1200 | 0.6392 | 0.6239 |
| 1.1959 | 19.51 | 1600 | 0.3602 | 0.3651 |
| 1.0276 | 24.39 | 2000 | 0.3021 | 0.2981 |
| 0.9671 | 29.27 | 2400 | 0.2872 | 0.2739 |
| 0.873 | 34.15 | 2800 | 0.2593 | 0.2459 |
| 0.8513 | 39.02 | 3200 | 0.2617 | 0.2473 |
| 0.8132 | 43.9 | 3600 | 0.2548 | 0.2426 |
| 0.7935 | 48.78 | 4000 | 0.2637 | 0.2353 |
| 0.7565 | 53.66 | 4400 | 0.2629 | 0.2322 |
| 0.7359 | 58.54 | 4800 | 0.2579 | 0.2253 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config sl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "sl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "zmago je divje od letel s helikopterjem visoko vzrak"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 19.938 | 12.736 |
|
{"language": ["sl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "XLS-R-300M - Slovenian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sl"}, "metrics": [{"type": "wer", "value": 12.736, "name": "Test WER"}, {"type": "cer", "value": 3.605, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 45.587, "name": "Test WER"}, {"type": "cer", "value": 20.886, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "sl"}, "metrics": [{"type": "wer", "value": 45.42, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-xls-r-300m-sl-cv8-with-lm
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sl",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sl"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
XLS-R-300M - Slovenian
======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - SL dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2578
* Wer: 0.2273
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 60.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
### Inference With LM
### Eval results on Common Voice 8 "test" (WER):
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #sl #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 60.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'\n2. To evaluate on 'speech-recognition-community-v2/dev\\_data'",
"### Inference With LM",
"### Eval results on Common Voice 8 \"test\" (WER):"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-pa-in")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\।\’\'\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 58.05 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "pa-IN", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Punjabi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pa-IN", "type": "common_voice", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": 58.05, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-xlsr-53-pa-in
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pa-IN"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Punjabi using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
Test Result: 58.05 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Punjabi\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Punjabi using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Punjabi test data of Common Voice.\n\nTest Result: 58.05 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Punjabi\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Punjabi using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Punjabi test data of Common Voice.\n\nTest Result: 58.05 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-rm-vallader-with-lm
This model is a fine-tuned version of [anuragshas/wav2vec2-large-xlsr-53-rm-vallader](https://huggingface.co/anuragshas/wav2vec2-large-xlsr-53-rm-vallader) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4552
- Wer: 0.3206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.112
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2379 | 3.12 | 100 | 0.4041 | 0.3396 |
| 0.103 | 6.25 | 200 | 0.4400 | 0.3337 |
| 0.0664 | 9.38 | 300 | 0.4239 | 0.3315 |
| 0.0578 | 12.5 | 400 | 0.4303 | 0.3267 |
| 0.0446 | 15.62 | 500 | 0.4575 | 0.3274 |
| 0.041 | 18.75 | 600 | 0.4451 | 0.3223 |
| 0.0402 | 21.88 | 700 | 0.4507 | 0.3206 |
| 0.0374 | 25.0 | 800 | 0.4649 | 0.3208 |
| 0.0371 | 28.12 | 900 | 0.4552 | 0.3206 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
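
This card also lacks a usage snippet. A minimal sketch, assuming the legacy `common_voice` dataset exposes a `rm-vallader` config and that the repo (per its `-with-lm` suffix) bundles a pyctcdecode-backed processor:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anuragshas/wav2vec2-xlsr-53-rm-vallader-with-lm"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

test_dataset = load_dataset("common_voice", "rm-vallader", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)  # Common Voice clips are 48 kHz

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
# The LM-backed processor decodes from the raw logits rather than argmax ids
print(processor.batch_decode(logits.numpy()).text)
```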
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xlsr-53-rm-vallader-with-lm", "results": []}]}
|
anuragshas/wav2vec2-xlsr-53-rm-vallader-with-lm
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xlsr-53-rm-vallader-with-lm
====================================
This model is a fine-tuned version of anuragshas/wav2vec2-large-xlsr-53-rm-vallader on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4552
* Wer: 0.3206
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.112
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.112\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.112\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\।\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Inference: run the model on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 71.87 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
{"language": "ta", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "Anurag Singh XLSR Wav2Vec2 Large 53 Tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 71.87, "name": "Test WER"}]}]}]}
|
anuragshas/wav2vec2-xlsr-53-tamil
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ta"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
Test Result: 71.87 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
|
[
"# Wav2Vec2-Large-XLSR-53-Tamil\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Tamil test data of Common Voice.\n\nTest Result: 71.87 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ta #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Tamil\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Tamil using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Tamil test data of Common Voice.\n\nTest Result: 71.87 %",
"## Training\nThe Common Voice 'train' and 'validation' datasets were used for training."
] |
text-generation
|
transformers
|
# Chandler DialoGPT Model
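The card itself is minimal, so here is a hedged usage sketch (not part of the original card) that follows the standard single-turn DialoGPT recipe from the transformers docs; the prompt text is a made-up example.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anweasha/DialoGPT-small-Chandler")
model = AutoModelForCausalLM.from_pretrained("anweasha/DialoGPT-small-Chandler")

# Encode one user turn, terminated by the EOS token the model expects.
input_ids = tokenizer.encode("Could this BE any easier?" + tokenizer.eos_token, return_tensors="pt")

# Generate the bot's reply and decode only the newly produced tokens.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```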
|
{"tags": ["conversational"]}
|
anweasha/DialoGPT-small-Chandler
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Chandler DialoGPT Model
|
[
"# Chandler DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Chandler DialoGPT Model"
] |
text-generation
|
transformers
|
# Jake Peralta DialoGPT Model
|
{"tags": ["conversational"]}
|
anweasha/DialoGPT-small-Jake
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake Peralta DialoGPT Model
|
[
"# Jake Peralta DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake Peralta DialoGPT Model"
] |
translation
|
transformers
|
## [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) model
### pretrained on [anzorq/kbd-ru-1.67M-temp](https://huggingface.co/datasets/anzorq/kbd-ru-1.67M-temp)
### fine-tuned on **17753** Russian-Kabardian word/sentence pairs
kbd text uses a custom Latin script for optimization reasons.
Translation input should start with '**ru->kbd:** '.
**Tokenizer**: T5 sentencepiece, char, cased.
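Below is a hedged usage sketch (not part of the original card), assuming the standard transformers seq2seq API; the generation settings are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("anzorq/t5-v1_1-small-ru_kbd-cased")
model = AutoModelForSeq2SeqLM.from_pretrained("anzorq/t5-v1_1-small-ru_kbd-cased")

# The required task prefix, followed by the Russian source sentence.
inputs = tokenizer("ru->kbd: Я иду домой.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)

# The output uses the model's custom Latin script for Kabardian.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```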
|
{"language": ["ru", "kbd"], "tags": ["translation"], "datasets": ["anzorq/kbd-ru-1.67M-temp", "17753 Russian-Kabardian pairs of text"], "widget": [{"text": "ru->kbd: \u042f \u0438\u0434\u0443 \u0434\u043e\u043c\u043e\u0439.", "example_title": "\u042f \u0438\u0434\u0443 \u0434\u043e\u043c\u043e\u0439."}, {"text": "ru->kbd: \u0414\u0435\u0442\u0438 \u0438\u0433\u0440\u0430\u044e\u0442 \u0432\u043e \u0434\u0432\u043e\u0440\u0435.", "example_title": "\u0414\u0435\u0442\u0438 \u0438\u0433\u0440\u0430\u044e\u0442 \u0432\u043e \u0434\u0432\u043e\u0440\u0435."}, {"text": "ru->kbd: \u0421\u043a\u043e\u043b\u044c\u043a\u043e \u0442\u0435\u0431\u0435 \u043b\u0435\u0442?", "example_title": "\u0421\u043a\u043e\u043b\u044c\u043a\u043e \u0442\u0435\u0431\u0435 \u043b\u0435\u0442?"}]}
|
anzorq/t5-v1_1-small-ru_kbd-cased
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"ru",
"kbd",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru",
"kbd"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #translation #ru #kbd #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## google/t5-v1_1-small model
### pretrained on anzorq/kbd-ru-1.67M-temp
### fine-tuned on 17753 Russian-Kabardian word/sentence pairs
kbd text uses custom latin script for optimization reasons.
Translation input should start with 'ru->kbd: '.
Tokenizer: T5 sentencepiece, char, cased.
|
[
"## google/t5-v1_1-small model",
"### pretrained on anzorq/kbd-ru-1.67M-temp",
"### fine-tuned on 17753 Russian-Kabardian word/sentence pairs\n\nkbd text uses custom latin script for optimization reasons.\n\nTranslation input should start with 'ru->kbd: '.\n\nTokenizer: T5 sentencepiece, char, cased."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #translation #ru #kbd #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## google/t5-v1_1-small model",
"### pretrained on anzorq/kbd-ru-1.67M-temp",
"### fine-tuned on 17753 Russian-Kabardian word/sentence pairs\n\nkbd text uses custom latin script for optimization reasons.\n\nTranslation input should start with 'ru->kbd: '.\n\nTokenizer: T5 sentencepiece, char, cased."
] |
fill-mask
|
transformers
|
# BERT L-10 H-512 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with [10 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-10_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-10_H-512_A-8 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 512 \
    --per_device_train_batch_size 10 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616
```
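For a quick qualitative check, here is a hedged usage sketch (not part of the original card) that queries the resulting checkpoint through the standard `fill-mask` pipeline; the example sentence is made up.
```python
from transformers import pipeline

# Load the fine-tuned MLM checkpoint with the standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616")

# BERT uncased models use the [MASK] token.
for pred in fill_mask("The virus is transmitted through respiratory [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```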
|
{}
|
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:1908.08962",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #arxiv-1908.08962 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT L-10 H-512 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with 10 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
'''bash
python run_language_modeling.py
--model_type bert
--model_name_or_path google/bert_uncased_L-10_H-512_A-8
--do_train
--train_data_file {cord19-200616-dataset}
--mlm
--mlm_probability 0.2
--line_by_line
--block_size 512
--per_device_train_batch_size 10
--learning_rate 3e-5
--num_train_epochs 2
--output_dir bert_uncased_L-10_H-512_A-8_cord19-200616
|
[
"# BERT L-10 H-512 fine-tuned on MLM (CORD-19 2020/06/16)\n\nBERT model with 10 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).",
"## Training the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-10_H-512_A-8\n --do_train\n --train_data_file {cord19-200616-dataset}\n --mlm\n --mlm_probability 0.2\n --line_by_line\n --block_size 512\n --per_device_train_batch_size 10\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #arxiv-1908.08962 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT L-10 H-512 fine-tuned on MLM (CORD-19 2020/06/16)\n\nBERT model with 10 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).",
"## Training the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-10_H-512_A-8\n --do_train\n --train_data_file {cord19-200616-dataset}\n --mlm\n --mlm_probability 0.2\n --line_by_line\n --block_size 512\n --per_device_train_batch_size 10\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616"
] |
question-answering
|
transformers
|
# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0
BERT model with [10 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-10_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), [fine-tuned for MLM](https://huggingface.co/aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616) on CORD-19 dataset (as released on 2020/06/16) and fine-tuned for QA on SQuAD v2.0.
## Training the model
```bash
python run_squad.py \
    --model_type bert \
    --model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616 \
    --train_file 'train-v2.0.json' \
    --predict_file 'dev-v2.0.json' \
    --do_train \
    --do_eval \
    --do_lower_case \
    --version_2_with_negative \
    --max_seq_length 384 \
    --per_gpu_train_batch_size 10 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2
```
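As a quick sanity check of the resulting checkpoint, here is a hedged usage sketch (not part of the original card) using the standard `question-answering` pipeline; the question and context are made-up examples.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard extractive QA pipeline.
qa = pipeline("question-answering", model="aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2")

result = qa(
    question="How is the virus transmitted?",
    context="The virus spreads mainly through respiratory droplets produced "
            "when an infected person coughs or sneezes.",
)
print(result["answer"], result["score"])  # extracted span and its confidence
```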
|
{"datasets": ["squad_v2"]}
|
aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"dataset:squad_v2",
"arxiv:1908.08962",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #question-answering #dataset-squad_v2 #arxiv-1908.08962 #endpoints_compatible #region-us
|
# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0
BERT model with 10 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16) and fine-tuned for QA on SQuAD v2.0.
## Training the model
'''bash
python run_squad.py
--model_type bert
--model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616
--train_file 'train-v2.0.json'
--predict_file 'dev-v2.0.json'
--do_train
--do_eval
--do_lower_case
--version_2_with_negative
--max_seq_length 384
--per_gpu_train_batch_size 10
--learning_rate 3e-5
--num_train_epochs 2
--output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2
|
[
"# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0\n\nBERT model with 10 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16) and fine-tuned for QA on SQuAD v2.0.",
"## Training the model\n\n'''bash\npython run_squad.py\n --model_type bert\n --model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616\n --train_file 'train-v2.0.json'\n --predict_file 'dev-v2.0.json'\n --do_train\n --do_eval\n --do_lower_case\n --version_2_with_negative\n --max_seq_length 384\n --per_gpu_train_batch_size 10\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #question-answering #dataset-squad_v2 #arxiv-1908.08962 #endpoints_compatible #region-us \n",
"# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0\n\nBERT model with 10 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16) and fine-tuned for QA on SQuAD v2.0.",
"## Training the model\n\n'''bash\npython run_squad.py\n --model_type bert\n --model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616\n --train_file 'train-v2.0.json'\n --predict_file 'dev-v2.0.json'\n --do_train\n --do_eval\n --do_lower_case\n --version_2_with_negative\n --max_seq_length 384\n --per_gpu_train_batch_size 10\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2"
] |
fill-mask
|
transformers
|
# BERT L-2 H-512 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with [2 Transformer layers and hidden embedding of size 512](https://huggingface.co/google/bert_uncased_L-2_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-2_H-512_A-8 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 512 \
    --per_device_train_batch_size 20 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-2_H-512_A-8_cord19-200616
```
|
{}
|
aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:1908.08962",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #arxiv-1908.08962 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT L-2 H-512 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with 2 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
'''bash
python run_language_modeling.py
--model_type bert
--model_name_or_path google/bert_uncased_L-2_H-512_A-8
--do_train
--train_data_file {cord19-200616-dataset}
--mlm
--mlm_probability 0.2
--line_by_line
--block_size 512
--per_device_train_batch_size 20
--learning_rate 3e-5
--num_train_epochs 2
--output_dir bert_uncased_L-2_H-512_A-8_cord19-200616
|
[
"# BERT L-2 H-512 fine-tuned on MLM (CORD-19 2020/06/16)\n\nBERT model with 2 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).",
"## Training the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-2_H-512_A-8\n --do_train\n --train_data_file {cord19-200616-dataset}\n --mlm\n --mlm_probability 0.2\n --line_by_line\n --block_size 512\n --per_device_train_batch_size 20\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-2_H-512_A-8_cord19-200616"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #arxiv-1908.08962 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT L-2 H-512 fine-tuned on MLM (CORD-19 2020/06/16)\n\nBERT model with 2 Transformer layers and hidden embedding of size 512, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).",
"## Training the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-2_H-512_A-8\n --do_train\n --train_data_file {cord19-200616-dataset}\n --mlm\n --mlm_probability 0.2\n --line_by_line\n --block_size 512\n --per_device_train_batch_size 20\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-2_H-512_A-8_cord19-200616"
] |
fill-mask
|
transformers
|
# BERT L-4 H-256 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with [4 Transformer layers and hidden embedding of size 256](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-4_H-256_A-4 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 256 \
    --per_device_train_batch_size 20 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-4_H-256_A-4_cord19-200616
```
|
{}
|
aodiniz/bert_uncased_L-4_H-256_A-4_cord19-200616
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"arxiv:1908.08962",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #arxiv-1908.08962 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT L-4 H-256 fine-tuned on MLM (CORD-19 2020/06/16)
BERT model with 4 Transformer layers and hidden embedding of size 256, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).
## Training the model
'''bash
python run_language_modeling.py
--model_type bert
--model_name_or_path google/bert_uncased_L-4_H-256_A-4
--do_train
--train_data_file {cord19-200616-dataset}
--mlm
--mlm_probability 0.2
--line_by_line
--block_size 256
--per_device_train_batch_size 20
--learning_rate 3e-5
--num_train_epochs 2
--output_dir bert_uncased_L-4_H-256_A-4_cord19-200616
|
[
"# BERT L-4 H-256 fine-tuned on MLM (CORD-19 2020/06/16)\n\nBERT model with 4 Transformer layers and hidden embedding of size 256, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).",
"## Training the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-4_H-256_A-4\n --do_train\n --train_data_file {cord19-200616-dataset}\n --mlm\n --mlm_probability 0.2\n --line_by_line\n --block_size 256\n --per_device_train_batch_size 20\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-4_H-256_A-4_cord19-200616"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #arxiv-1908.08962 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT L-4 H-256 fine-tuned on MLM (CORD-19 2020/06/16)\n\nBERT model with 4 Transformer layers and hidden embedding of size 256, referenced in Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, fine-tuned for MLM on CORD-19 dataset (as released on 2020/06/16).",
"## Training the model\n\n'''bash\npython run_language_modeling.py\n --model_type bert\n --model_name_or_path google/bert_uncased_L-4_H-256_A-4\n --do_train\n --train_data_file {cord19-200616-dataset}\n --mlm\n --mlm_probability 0.2\n --line_by_line\n --block_size 256\n --per_device_train_batch_size 20\n --learning_rate 3e-5\n --num_train_epochs 2\n --output_dir bert_uncased_L-4_H-256_A-4_cord19-200616"
] |
null | null |
# Building a HuggingFace Transformer NLP Model
## Running this Repo
|
{}
|
aogara/slai_transformer
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Building a HuggingFace Transformer NLP Model
## Running this Repo
|
[
"# Building a HuggingFace Transformer NLP Model",
"## Running this Repo"
] |
[
"TAGS\n#region-us \n",
"# Building a HuggingFace Transformer NLP Model",
"## Running this Repo"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-new-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "my-new-model", "results": []}]}
|
aozorahime/my-new-model
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-xsum #license-apache-2.0 #endpoints_compatible #region-us
|
# my-new-model
This model is a fine-tuned version of bert-base-uncased on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
[
"# my-new-model\n\nThis model is a fine-tuned version of bert-base-uncased on the xsum dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.3\n- Pytorch 1.9.1\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-xsum #license-apache-2.0 #endpoints_compatible #region-us \n",
"# my-new-model\n\nThis model is a fine-tuned version of bert-base-uncased on the xsum dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.3\n- Pytorch 1.9.1\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Aladdin Bot
|
{"tags": ["conversational"]}
|
aplnestrella/Aladdin-Bot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Aladdin Bot
|
[
"# Aladdin Bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Aladdin Bot"
] |
text-to-image
|
transformers
|
## DALL·E mini - Generate images from text
<img style="text-align:center; display:block;" src="https://raw.githubusercontent.com/borisdayma/dalle-mini/main/img/logo.png" width="200">
* [Technical Report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA)
* [Demo](https://huggingface.co/spaces/flax-community/dalle-mini)
### Model Description
This is an attempt to replicate OpenAI's [DALL·E](https://openai.com/blog/dall-e/), a model capable of generating arbitrary images from a text prompt that describes the desired result.

This model's architecture is a simplification of the original, and leverages previous open source efforts and available pre-trained models. Results have lower quality than OpenAI's, but the model can be trained and used on less demanding hardware. Our training was performed on a single TPU v3-8 for a few days.
### Components of the Architecture
The system relies on the Flax/JAX infrastructure, which is ideal for TPU training. TPUs are not required; both Flax and JAX run very efficiently on GPU backends.
The main components of the architecture include:
* An encoder, based on [BART](https://arxiv.org/abs/1910.13461). The encoder transforms a sequence of input text tokens to a sequence of image tokens. The input tokens are extracted from the text prompt by using the model's tokenizer. The image tokens are a fixed-length sequence, and they represent indices in a VQGAN-based pre-trained codebook.
* A decoder, which converts the image tokens to image pixels. As mentioned above, the decoder is based on a [VQGAN model](https://compvis.github.io/taming-transformers/).
The model definition we use for the encoder can be downloaded from our [Github repo](https://github.com/borisdayma/dalle-mini). The encoder is represented by the class `CustomFlaxBartForConditionalGeneration`.
To use the decoder, you need to follow the instructions in our accompanying VQGAN model in the hub, [flax-community/vqgan_f16_16384](https://huggingface.co/flax-community/vqgan_f16_16384).
### How to Use
The easiest way to get familiar with the code and the models is to follow the inference notebook we provide in our [github repo](https://github.com/borisdayma/dalle-mini/blob/main/dev/inference/inference_pipeline.ipynb). For your convenience, you can open it in Google Colaboratory: [](https://colab.research.google.com/github/borisdayma/dalle-mini/blob/main/dev/inference/inference_pipeline.ipynb)
If you just want to test the trained model and see what it comes up with, please visit [our demo](https://huggingface.co/spaces/flax-community/dalle-mini), available in 🤗 Spaces.
### Additional Details
Our [report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini--Vmlldzo4NjIxODA) contains more details about how the model was trained and shows many examples that demonstrate its capabilities.
|
{"language": ["en"], "pipeline_tag": "text-to-image", "inference": false}
|
apol/dalle-mini
| null |
[
"transformers",
"jax",
"bart",
"text2text-generation",
"text-to-image",
"en",
"arxiv:1910.13461",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.13461"
] |
[
"en"
] |
TAGS
#transformers #jax #bart #text2text-generation #text-to-image #en #arxiv-1910.13461 #autotrain_compatible #region-us
|
## DALL·E mini - Generate images from text
<img style="text-align:center; display:block;" src="URL width="200">
* Technical Report
* Demo
### Model Description
This is an attempt to replicate OpenAI's DALL·E, a model capable of generating arbitrary images from a text prompt that describes the desired result.
!DALL·E mini demo screenshot
This model's architecture is a simplification of the original, and leverages previous open source efforts and available pre-trained models. Results have lower quality than OpenAI's, but the model can be trained and used on less demanding hardware. Our training was performed on a single TPU v3-8 for a few days.
### Components of the Architecture
The system relies on the Flax/JAX infrastructure, which is ideal for TPU training. TPUs are not required; both Flax and JAX run very efficiently on GPU backends.
The main components of the architecture include:
* An encoder, based on BART. The encoder transforms a sequence of input text tokens to a sequence of image tokens. The input tokens are extracted from the text prompt by using the model's tokenizer. The image tokens are a fixed-length sequence, and they represent indices in a VQGAN-based pre-trained codebook.
* A decoder, which converts the image tokens to image pixels. As mentioned above, the decoder is based on a VQGAN model.
The model definition we use for the encoder can be downloaded from our Github repo. The encoder is represented by the class 'CustomFlaxBartForConditionalGeneration'.
To use the decoder, you need to follow the instructions in our accompanying VQGAN model in the hub, flax-community/vqgan_f16_16384.
### How to Use
The easiest way to get familiar with the code and the models is to follow the inference notebook we provide in our github repo. For your convenience, you can open it in Google Colaboratory: .
|
This is a t5-small model trained from scratch on the WikiKG90Mv2 dataset. It was trained on the tail entity prediction task, i.e. given the subject entity and relation, predict the object entity. Input should be provided in the form of "<entity text>| <relation text>".
We used the raw text titles and descriptions to get entity and relation textual representations; these raw texts were obtained from the ogb dataset itself. Entity representation was set to the title, and description was used to disambiguate if 2 entities had the same title. If still no disambiguation was possible, we used the wikidata ID (eg. Q123456).
We trained the model on WikiKG90Mv2 for approx 1.5 epochs on 4x1080Ti GPUs. The training time for 1 epoch was approx 5.5 days.
To evaluate the model, we sample 300 times from the decoder for each input (s,r) pair. We then remove predictions which do not map back to a valid entity, and then rank the predictions by their log probabilities. Filtering was performed subsequently. We achieve 0.22 validation MRR (the full leaderboard is here https://ogb.stanford.edu/docs/lsc/leaderboards/#wikikg90mv2)
You can try the following code in an ipython notebook to evaluate the pre-trained model. The full procedure of mapping entities to ids, filtering etc. is not included here for the sake of simplicity, but can be provided on request if needed. Please contact Apoorv (apoorvumang@gmail.com) for clarifications/details.
---------
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("apoorvumang/kgt5-wikikg90mv2")
model = AutoModelForSeq2SeqLM.from_pretrained("apoorvumang/kgt5-wikikg90mv2")
```
```
import torch
def getScores(ids, scores, pad_token_id):
"""get sequence scores from model.generate output"""
scores = torch.stack(scores, dim=1)
log_probs = torch.log_softmax(scores, dim=2)
# remove start token
ids = ids[:,1:]
# gather needed probs
x = ids.unsqueeze(-1).expand(log_probs.shape)
needed_logits = torch.gather(log_probs, 2, x)
final_logits = needed_logits[:, :, 0]
padded_mask = (ids == pad_token_id)
final_logits[padded_mask] = 0
final_scores = final_logits.sum(dim=-1)
return final_scores.cpu().detach().numpy()
def topkSample(input, model, tokenizer,
num_samples=5,
num_beams=1,
max_output_length=30):
tokenized = tokenizer(input, return_tensors="pt")
out = model.generate(**tokenized,
do_sample=True,
num_return_sequences = num_samples,
num_beams = num_beams,
eos_token_id = tokenizer.eos_token_id,
pad_token_id = tokenizer.pad_token_id,
output_scores = True,
return_dict_in_generate=True,
max_length=max_output_length,)
out_tokens = out.sequences
out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
out_scores = getScores(out_tokens, out.scores, tokenizer.pad_token_id)
pair_list = [(x[0], x[1]) for x in zip(out_str, out_scores)]
sorted_pair_list = sorted(pair_list, key=lambda x:x[1], reverse=True)
return sorted_pair_list
def greedyPredict(input, model, tokenizer):
input_ids = tokenizer([input], return_tensors="pt").input_ids
out_tokens = model.generate(input_ids)
out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
return out_str[0]
```
```
# an example from validation set that the model predicts correctly
# you can try your own examples here. what's your noble title?
input = "Sophie Valdemarsdottir| noble title"
out = topkSample(input, model, tokenizer, num_samples=5)
out
```
You can further load the list of entity aliases, filter the predictions down to those that are valid entities, and then create a reverse mapping from alias -> integer id to get the final predictions in the required format.
However, loading these aliases in memory as a dictionary requires a lot of RAM + you need to download the aliases file (made available here https://storage.googleapis.com/kgt5-wikikg90mv2/ent_alias_list.pickle) (relation file: https://storage.googleapis.com/kgt5-wikikg90mv2/rel_alias_list.pickle)
The submitted validation/test results were obtained by sampling 300 times for each input, then applying the above procedure, followed by filtering to known entities. The final MRR can vary slightly due to this sampling (we found that although beam search gives deterministic output, the results are inferior to sampling a large number of times).
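As an illustration of that filtering step, here is a hedged sketch (not from the original card); it assumes the aliases pickle holds a list of alias strings whose positions correspond to integer entity ids, which is an assumption about the file format, and it reuses `topkSample`, `model` and `tokenizer` from the snippets above.
```
import pickle

# Assumed format: a list of alias strings indexed by integer entity id.
with open("ent_alias_list.pickle", "rb") as f:
    entity_aliases = pickle.load(f)
alias2id = {alias: i for i, alias in enumerate(entity_aliases)}

# Keep only predictions that map back to a known entity, best score first.
predictions = topkSample("Sophie Valdemarsdottir| noble title", model, tokenizer, num_samples=300)
filtered_ids = [alias2id[text] for text, _score in predictions if text in alias2id]
print(filtered_ids[:10])  # ranked by log probability
```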
```
# download valid.txt. you can also try same url with test.txt. however test does not contain the correct tails
!wget https://storage.googleapis.com/kgt5-wikikg90mv2/valid.txt
```
```
fname = 'valid.txt'
valid_lines = []
f = open(fname)
for line in f:
valid_lines.append(line.rstrip())
f.close()
print(valid_lines[0])
```
```
from tqdm.auto import tqdm
# try unfiltered hits@k. this is approximation since model can sample same seq multiple times
# you should run this on gpu if you want to evaluate on all points with 300 samples each
k = 1
count_at_k = 0
max_predictions = k
max_points = 1000
for line in tqdm(valid_lines[:max_points]):
input, target = line.split('\t')
model_output = topkSample(input, model, tokenizer, num_samples=max_predictions)
prediction_strings = [x[0] for x in model_output]
if target in prediction_strings:
count_at_k += 1
print('Hits at {0} unfiltered: {1}'.format(k, count_at_k/max_points))
```
|
{"license": "mit", "widget": [{"text": "Apoorv Umang Saxena| family name", "example_title": "Family name prediction"}, {"text": "Apoorv Saxena| country", "example_title": "Country prediction"}, {"text": "World War 2| followed by", "example_title": "followed by"}]}
|
apoorvumang/kgt5-wikikg90mv2
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is a t5-small model trained from scratch on WikiKG90Mv2 dataset. Please see URL for more details on the method.
This model was trained on the tail entity prediction task ie. given subject entity and relation, predict the object entity. Input should be provided in the form of "\<entity text\>| \<relation text\>".
We used the raw text title and descriptions to get entity and relation textual representations. These raw texts were obtained from ogb dataset itself (dataset/wikikg90m-v2/mapping/URL and URL). Entity representation was set to the title, and description was used to disambiguate if 2 entities had the same title. If still no disambiguation was possible, we used the wikidata ID (eg. Q123456).
We trained the model on WikiKG90Mv2 for approx 1.5 epochs on 4x1080Ti GPUs. The training time for 1 epoch was approx 5.5 days.
To evaluate the model, we sample 300 times from the decoder for each input (s,r) pair. We then remove predictions which do not map back to a valid entity, and then rank the predictions by their log probabilities. Filtering was performed subsequently. We achieve 0.22 validation MRR (the full leaderboard is here URL
You can try the following code in an ipython notebook to evaluate the pre-trained model. The full procedure of mapping entity to ids, filtering etc. is not included here for sake of simplicity but can be provided on request if needed. Please contact Apoorv (apoorvumang@URL) for clarifications/details.
---------
You can further load the list of entity aliases, then filter only those predictions which are valid entities then create a reverse mapping from alias -> integer id to get final predictions in required format.
However, loading these aliases in memory as a dictionary requires a lot of RAM + you need to download the aliases file (made available here URL (relation file: URL
The submitted validation/test results for were obtained by sampling 300 times for each input, then applying above procedure, followed by filtering known entities. The final MRR can vary slightly due to this sampling nature (we found that although beam search gives deterministic output, the results are inferior to sampling large number of times).
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
1
|
{}
|
app-test-user/test-tensorboard
| null |
[
"tensorboard",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#tensorboard #region-us
|
1
|
[] |
[
"TAGS\n#tensorboard #region-us \n"
] |
text-generation
|
transformers
|
# DialoGPT-medium-simpsons
This is a version of [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) fine-tuned on The Simpsons scripts.
|
{"tags": ["conversational"]}
|
arampacha/DialoGPT-medium-simpsons
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# DialoGPT-medium-simpsons
This is a version of DialoGPT-medium fine-tuned on The Simpsons scripts.
|
[
"# DialoGPT-medium-simpsons\n\nThis is a version of DialoGPT-medium fine-tuned on The Simpsons scripts."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# DialoGPT-medium-simpsons\n\nThis is a version of DialoGPT-medium fine-tuned on The Simpsons scripts."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cs", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-czech")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
# Note: this model is trained ignoring accents on letters, as below
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().strip()
batch["sentence"] = re.sub(re.compile('[äá]'), 'a', batch['sentence'])
batch["sentence"] = re.sub(re.compile('[öó]'), 'o', batch['sentence'])
batch["sentence"] = re.sub(re.compile('[èé]'), 'e', batch['sentence'])
batch["sentence"] = re.sub(re.compile("[ïí]"), 'i', batch['sentence'])
batch["sentence"] = re.sub(re.compile("[üů]"), 'u', batch['sentence'])
    batch['sentence'] = re.sub(' +', ' ', batch['sentence'])  # collapse repeated spaces
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Inference: run the model on the test set and decode the predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.56
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon.
|
{"language": "cs", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "metrics": "wer", "dataset": "common_voice", "model-index": [{"name": "Czech XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cs", "type": "common_voice", "args": "cs"}, "metrics": [{"type": "wer", "value": 24.56, "name": "Test WER"}]}]}]}
|
arampacha/wav2vec2-large-xlsr-czech
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"cs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"cs"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #cs #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Czech
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Czech test data of Common Voice.
Test Result: 24.56
## Training
The Common Voice 'train', 'validation'.
The script used for training will be available here soon.
|
[
"# Wav2Vec2-Large-XLSR-53-Chech\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice.\n\n\n\n\nTest Result: 24.56",
"## Training\n\nThe Common Voice 'train', 'validation'.\n\nThe script used for training will be available here soon."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #cs #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Chech\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Czech using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice.\n\n\n\n\nTest Result: 24.56",
"## Training\n\nThe Common Voice 'train', 'validation'.\n\nThe script used for training will be available here soon."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Ukrainian using the [Common Voice](https://huggingface.co/datasets/common_voice) and a sample of the [M-AILABS Ukrainian Corpus](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "uk", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "uk", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model = Wav2Vec2ForCTC.from_pretrained("arampacha/wav2vec2-large-xlsr-ukrainian")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", '«', '»', '—', '…', '(', ')', '*', '”', '“']
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize characters
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(re.compile("['`]"), '’', batch['sentence'])
batch["sentence"] = re.sub(re.compile(chars_to_ignore_regex), '', batch["sentence"]).lower().strip()
batch["sentence"] = re.sub(re.compile('i'), 'і', batch['sentence'])
batch["sentence"] = re.sub(re.compile('o'), 'о', batch['sentence'])
batch["sentence"] = re.sub(re.compile('a'), 'а', batch['sentence'])
batch["sentence"] = re.sub(re.compile('ы'), 'и', batch['sentence'])
batch["sentence"] = re.sub(re.compile("–"), '', batch['sentence'])
    batch['sentence'] = re.sub(' +', ' ', batch['sentence'])  # collapse repeated spaces
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.89
## Training
The Common Voice `train` and `validation` datasets and the M-AILABS Ukrainian corpus were used for training.
The script used for training will be available [here](https://github.com/arampacha/hf-sprint-xlsr) soon.
|
{"language": "uk", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "metrics": "wer", "dataset": "common_voice", "model-index": [{"name": "Ukrainian XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "common_voice", "args": "uk"}, "metrics": [{"type": "wer", "value": 29.89, "name": "Test WER"}]}]}]}
|
arampacha/wav2vec2-large-xlsr-ukrainian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"uk",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #uk #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Ukrainian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Ukrainian using the Common Voice and sample of M-AILABS Ukrainian Corpus datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Ukrainian test data of Common Voice.
Test Result: 29.89
## Training
The Common Voice 'train', 'validation' and the M-AILABS Ukrainian corpus.
The script used for training will be available here soon.
|
[
"# Wav2Vec2-Large-XLSR-53-Ukrainian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Ukrainian using the Common Voice and sample of M-AILABS Ukrainian Corpus datasets.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Ukrainian test data of Common Voice.\n\n\n\nTest Result: 29.89",
"## Training\n\nThe Common Voice 'train', 'validation' and the M-AILABS Ukrainian corpus.\n\nThe script used for training will be available here soon."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #uk #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Ukrainian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Ukrainian using the Common Voice and sample of M-AILABS Ukrainian Corpus datasets.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Ukrainian test data of Common Voice.\n\n\n\nTest Result: 29.89",
"## Training\n\nThe Common Voice 'train', 'validation' and the M-AILABS Ukrainian corpus.\n\nThe script used for training will be available here soon."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: **0.4521**
- Wer: **0.5141**
- Cer: **0.1100**
- Wer+LM: **0.2756**
- Cer+LM: **0.0866**

The `+LM` rows are scored after beam-search decoding with an external language model (see the sketch below).
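A minimal sketch of such LM-boosted decoding with `pyctcdecode`, assuming `model`, `processor`, and `inputs` come from a standard Wav2Vec2 setup and that a KenLM file exists at the hypothetical path `lm.arpa`:
```python
import torch
from pyctcdecode import build_ctcdecoder

# vocabulary tokens in the same order as the model's output logits
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(),
                                  key=lambda kv: kv[1])]
decoder = build_ctcdecoder(vocab, kenlm_model_path="lm.arpa")  # hypothetical LM file

with torch.no_grad():
    log_probs = model(inputs.input_values).logits[0].cpu().numpy()
print(decoder.decode(log_probs))  # beam search rescored by the n-gram LM
```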
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: tristage (see the sketch after this list)
- lr_scheduler_ratios: [0.1, 0.4, 0.5]
- training_steps: 1400
- mixed_precision_training: Native AMP
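`tristage` is not one of the stock `transformers` schedulers. A minimal sketch of the fairseq-style tri-stage schedule implied by the `[0.1, 0.4, 0.5]` ratios; the initial and final LR scales are assumptions, not values from this card:
```python
import math
from torch.optim.lr_scheduler import LambdaLR

def tristage_schedule(optimizer, total_steps=1400, ratios=(0.1, 0.4, 0.5),
                      init_scale=0.01, final_scale=0.05):
    """Linear warmup -> constant hold -> exponential decay (tri-stage)."""
    warmup = int(ratios[0] * total_steps)
    hold = int(ratios[1] * total_steps)
    decay = max(1, total_steps - warmup - hold)

    def lr_lambda(step):
        if step < warmup:                      # stage 1: warm up to the peak LR
            return init_scale + (1 - init_scale) * step / max(1, warmup)
        if step < warmup + hold:               # stage 2: hold the peak LR
            return 1.0
        t = min(1.0, (step - warmup - hold) / decay)
        return math.exp(math.log(final_scale) * t)  # stage 3: exponential decay

    return LambdaLR(optimizer, lr_lambda)
```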
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 6.1298 | 19.87 | 100 | 3.1204 | 1.0 | 1.0 |
| 2.7269 | 39.87 | 200 | 0.6200 | 0.7592 | 0.1755 |
| 1.4643 | 59.87 | 300 | 0.4796 | 0.5921 | 0.1277 |
| 1.1242 | 79.87 | 400 | 0.4637 | 0.5359 | 0.1145 |
| 0.9592 | 99.87 | 500 | 0.4521 | 0.5141 | 0.1100 |
| 0.8704 | 119.87 | 600 | 0.4736 | 0.4914 | 0.1045 |
| 0.7908 | 139.87 | 700 | 0.5394 | 0.5250 | 0.1124 |
| 0.7049 | 159.87 | 800 | 0.4822 | 0.4754 | 0.0985 |
| 0.6299 | 179.87 | 900 | 0.4890 | 0.4809 | 0.1028 |
| 0.5832 | 199.87 | 1000 | 0.5233 | 0.4813 | 0.1028 |
| 0.5145 | 219.87 | 1100 | 0.5350 | 0.4781 | 0.0994 |
| 0.4604 | 239.87 | 1200 | 0.5223 | 0.4715 | 0.0984 |
| 0.4226 | 259.87 | 1300 | 0.5167 | 0.4625 | 0.0953 |
| 0.3946 | 279.87 | 1400 | 0.5248 | 0.4614 | 0.0950 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["hy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hy", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hy-AM", "type": "mozilla-foundation/common_voice_8_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 0.2755659640905542, "name": "WER LM"}, {"type": "cer", "value": 0.08659585230146687, "name": "CER LM"}]}]}]}
|
arampacha/wav2vec2-xls-r-1b-hy-cv
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hy",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hy"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hy #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4521
* Wer: 0.5141
* Cer: 0.1100
* Wer+LM: 0.2756
* Cer+LM: 0.0866
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 8e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
* lr\_scheduler\_type: tristage
* lr\_scheduler\_ratios: [0.1, 0.4, 0.5]
* training\_steps: 1400
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: tristage\n* lr\\_scheduler\\_ratios: [0.1, 0.4, 0.5]\n* training\\_steps: 1400\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hy #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: tristage\n* lr\\_scheduler\\_ratios: [0.1, 0.4, 0.5]\n* training\\_steps: 1400\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_4/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1693
- Wer: 0.2373
- Cer: 0.0429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.255 | 7.24 | 500 | 0.2978 | 0.4294 | 0.0758 |
| 1.0058 | 14.49 | 1000 | 0.1883 | 0.2838 | 0.0483 |
| 0.9371 | 21.73 | 1500 | 0.1813 | 0.2627 | 0.0457 |
| 0.8999 | 28.98 | 2000 | 0.1693 | 0.2373 | 0.0429 |
| 0.8814 | 36.23 | 2500 | 0.1760 | 0.2420 | 0.0435 |
| 0.8364 | 43.47 | 3000 | 0.1765 | 0.2416 | 0.0419 |
| 0.8019 | 50.72 | 3500 | 0.1758 | 0.2311 | 0.0398 |
| 0.7665 | 57.96 | 4000 | 0.1745 | 0.2240 | 0.0399 |
| 0.7376 | 65.22 | 4500 | 0.1717 | 0.2190 | 0.0385 |
| 0.716 | 72.46 | 5000 | 0.1700 | 0.2147 | 0.0382 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
{"language": ["hy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hy", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hy-AM", "type": "mozilla-foundation/common_voice_8_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 10.811865729898516, "name": "WER LM"}, {"type": "cer", "value": 2.2205361659079412, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hy"}, "metrics": [{"type": "wer", "value": 18.219363037089988, "name": "Test WER"}, {"type": "cer", "value": 7.075988867335752, "name": "Test CER"}]}]}]}
|
arampacha/wav2vec2-xls-r-1b-hy
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"hy",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hy"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hy #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the /WORKSPACE/DATA/HY/NOIZY\_STUDENT\_4/ - NA dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1693
* Wer: 0.2373
* Cer: 0.0429
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 842
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 5000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 842\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hy #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 842\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 5000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ka
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/KA/NOIZY_STUDENT_2/ - KA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1022
- Wer: 0.1527
- Cer: 0.0221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine (see the sketch after this list)
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
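For reference, a minimal sketch of how these scheduler settings map onto the stock `transformers` helper, assuming `optimizer` is the Adam instance configured above:
```python
from transformers import get_cosine_schedule_with_warmup

total_steps = 4000
warmup_steps = int(0.1 * total_steps)  # lr_scheduler_warmup_ratio: 0.1 -> 400 steps
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```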
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.2839 | 6.45 | 400 | 0.2229 | 0.3609 | 0.0557 |
| 0.9775 | 12.9 | 800 | 0.1271 | 0.2202 | 0.0317 |
| 0.9045 | 19.35 | 1200 | 0.1268 | 0.2030 | 0.0294 |
| 0.8652 | 25.8 | 1600 | 0.1211 | 0.1940 | 0.0287 |
| 0.8505 | 32.26 | 2000 | 0.1192 | 0.1912 | 0.0276 |
| 0.8168 | 38.7 | 2400 | 0.1086 | 0.1763 | 0.0260 |
| 0.7737 | 45.16 | 2800 | 0.1098 | 0.1753 | 0.0256 |
| 0.744 | 51.61 | 3200 | 0.1054 | 0.1646 | 0.0239 |
| 0.7114 | 58.06 | 3600 | 0.1034 | 0.1573 | 0.0228 |
| 0.6773 | 64.51 | 4000 | 0.1022 | 0.1527 | 0.0221 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
{"language": ["ka"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-1b-ka", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ka", "type": "mozilla-foundation/common_voice_8_0", "args": "ka"}, "metrics": [{"type": "wer", "value": 7.39778066580026, "name": "WER LM"}, {"type": "cer", "value": 1.1882089427096434, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ka"}, "metrics": [{"type": "wer", "value": 22.61, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ka"}, "metrics": [{"type": "wer", "value": 21.58, "name": "Test WER"}]}]}]}
|
arampacha/wav2vec2-xls-r-1b-ka
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"ka",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ka"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ka #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-1b-ka
====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the /WORKSPACE/DATA/KA/NOIZY\_STUDENT\_2/ - KA dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1022
* Wer: 0.1527
* Cer: 0.0221
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ka #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1747
- Wer: 0.2107
- Cer: 0.0408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.3719 | 4.35 | 500 | 0.3389 | 0.4236 | 0.0833 |
| 1.1361 | 8.7 | 1000 | 0.2309 | 0.3162 | 0.0630 |
| 1.0517 | 13.04 | 1500 | 0.2166 | 0.3056 | 0.0597 |
| 1.0118 | 17.39 | 2000 | 0.2141 | 0.2784 | 0.0557 |
| 0.9922 | 21.74 | 2500 | 0.2231 | 0.2941 | 0.0594 |
| 0.9929 | 26.09 | 3000 | 0.2171 | 0.2892 | 0.0587 |
| 0.9485 | 30.43 | 3500 | 0.2236 | 0.2956 | 0.0599 |
| 0.9573 | 34.78 | 4000 | 0.2314 | 0.3043 | 0.0616 |
| 0.9195 | 39.13 | 4500 | 0.2169 | 0.2812 | 0.0580 |
| 0.8915 | 43.48 | 5000 | 0.2109 | 0.2780 | 0.0560 |
| 0.8449 | 47.83 | 5500 | 0.2050 | 0.2534 | 0.0514 |
| 0.8028 | 52.17 | 6000 | 0.2032 | 0.2456 | 0.0492 |
| 0.7881 | 56.52 | 6500 | 0.1890 | 0.2380 | 0.0469 |
| 0.7423 | 60.87 | 7000 | 0.1816 | 0.2245 | 0.0442 |
| 0.7248 | 65.22 | 7500 | 0.1789 | 0.2165 | 0.0422 |
| 0.6993 | 69.57 | 8000 | 0.1747 | 0.2107 | 0.0408 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "mozilla-foundation/common_voice_8_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 12.246920571994902, "name": "WER LM"}, {"type": "cer", "value": 2.513653497966816, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 46.56, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 35.98, "name": "Test WER"}]}]}]}
|
arampacha/wav2vec2-xls-r-1b-uk-cv
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"uk",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #uk #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - UK dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1747
* Wer: 0.2107
* Cer: 0.0408
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 8e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 8000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #uk #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 8e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the /WORKSPACE/DATA/UK/COMPOSED_DATASET/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1092
- Wer: 0.1752
- Cer: 0.0323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 12000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.7005 | 1.61 | 500 | 0.4082 | 0.5584 | 0.1164 |
| 1.1555 | 3.22 | 1000 | 0.2020 | 0.2953 | 0.0557 |
| 1.0927 | 4.82 | 1500 | 0.1708 | 0.2584 | 0.0480 |
| 1.0707 | 6.43 | 2000 | 0.1563 | 0.2405 | 0.0450 |
| 1.0728 | 8.04 | 2500 | 0.1620 | 0.2442 | 0.0463 |
| 1.0268 | 9.65 | 3000 | 0.1588 | 0.2378 | 0.0458 |
| 1.0328 | 11.25 | 3500 | 0.1466 | 0.2352 | 0.0442 |
| 1.0249 | 12.86 | 4000 | 0.1552 | 0.2341 | 0.0449 |
| 1.016 | 14.47 | 4500 | 0.1602 | 0.2435 | 0.0473 |
| 1.0164 | 16.08 | 5000 | 0.1491 | 0.2337 | 0.0444 |
| 0.9935 | 17.68 | 5500 | 0.1539 | 0.2373 | 0.0458 |
| 0.9626 | 19.29 | 6000 | 0.1458 | 0.2305 | 0.0434 |
| 0.9505 | 20.9 | 6500 | 0.1368 | 0.2157 | 0.0407 |
| 0.9389 | 22.51 | 7000 | 0.1437 | 0.2231 | 0.0426 |
| 0.9129 | 24.12 | 7500 | 0.1313 | 0.2076 | 0.0394 |
| 0.9118 | 25.72 | 8000 | 0.1292 | 0.2040 | 0.0384 |
| 0.8848 | 27.33 | 8500 | 0.1299 | 0.2028 | 0.0384 |
| 0.8667 | 28.94 | 9000 | 0.1228 | 0.1945 | 0.0367 |
| 0.8641 | 30.55 | 9500 | 0.1223 | 0.1939 | 0.0364 |
| 0.8516 | 32.15 | 10000 | 0.1184 | 0.1876 | 0.0349 |
| 0.8379 | 33.76 | 10500 | 0.1137 | 0.1821 | 0.0338 |
| 0.8235 | 35.37 | 11000 | 0.1127 | 0.1779 | 0.0331 |
| 0.8112 | 36.98 | 11500 | 0.1103 | 0.1766 | 0.0327 |
| 0.8069 | 38.59 | 12000 | 0.1092 | 0.1752 | 0.0323 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
{"language": ["uk"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-1b-hy", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice uk", "type": "mozilla-foundation/common_voice_8_0", "args": "uk"}, "metrics": [{"type": "wer", "value": 10.406342913776015, "name": "WER LM"}, {"type": "cer", "value": 2.0387492208601703, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 40.57, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "uk"}, "metrics": [{"type": "wer", "value": 28.95, "name": "Test WER"}]}]}]}
|
arampacha/wav2vec2-xls-r-1b-uk
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"uk",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"uk"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #uk #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the /WORKSPACE/DATA/UK/COMPOSED\_DATASET/ - NA dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1092
* Wer: 0.1752
* Cer: 0.0323
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 12000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 12000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #uk #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 12000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5891
- Wer: 0.6569
**Note**: If you aim for the best performance, use [this model](https://huggingface.co/arampacha/wav2vec2-xls-r-300m-hy). It is trained with the noisy-student procedure and achieves considerably better results.
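The card does not spell the procedure out; below is a minimal sketch of one pseudo-labeling round under common noisy-student assumptions. The confidence heuristic and threshold are illustrative, not the author's recipe:
```python
import torch

def pseudo_label(model, processor, waveforms, threshold=0.9):
    """One illustrative noisy-student round: keep confident teacher transcripts."""
    keep = []
    for wav in waveforms:
        inputs = processor(wav, sampling_rate=16_000, return_tensors="pt")
        with torch.no_grad():
            logits = model(inputs.input_values).logits[0]
        probs = logits.softmax(dim=-1)
        confidence = probs.max(dim=-1).values.mean().item()  # mean per-frame max prob
        if confidence >= threshold:
            ids = torch.argmax(logits, dim=-1)
            keep.append((wav, processor.decode(ids)))  # (audio, pseudo-transcript)
    return keep  # fed back into training with augmentation ("noise")
```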
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.167 | 16.67 | 100 | 3.5599 | 1.0 |
| 3.2645 | 33.33 | 200 | 3.1771 | 1.0 |
| 3.1509 | 50.0 | 300 | 3.1321 | 1.0 |
| 3.0757 | 66.67 | 400 | 2.8594 | 1.0 |
| 2.5274 | 83.33 | 500 | 1.5286 | 0.9797 |
| 1.6826 | 100.0 | 600 | 0.8058 | 0.7974 |
| 1.2868 | 116.67 | 700 | 0.6713 | 0.7279 |
| 1.1262 | 133.33 | 800 | 0.6308 | 0.7034 |
| 1.0408 | 150.0 | 900 | 0.6056 | 0.6745 |
| 0.9617 | 166.67 | 1000 | 0.5891 | 0.6569 |
| 0.9196 | 183.33 | 1100 | 0.5913 | 0.6432 |
| 0.8853 | 200.0 | 1200 | 0.5924 | 0.6347 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": ["hy-AM"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hy"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
arampacha/wav2vec2-xls-r-300m-hy-cv
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hy",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hy-AM"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hy #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5891
* Wer: 0.6569
Note: If you aim for the best performance, use this model. It is trained with the noisy-student procedure and achieves considerably better results.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 1200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 1200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #hy #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 1200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the /WORKSPACE/DATA/HY/NOIZY_STUDENT_3/ - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2293
- Wer: 0.3333
- Cer: 0.0602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.1471 | 7.02 | 400 | 3.1599 | 1.0 | 1.0 |
| 1.8691 | 14.04 | 800 | 0.7674 | 0.7361 | 0.1686 |
| 1.3227 | 21.05 | 1200 | 0.3849 | 0.5336 | 0.1007 |
| 1.163 | 28.07 | 1600 | 0.3015 | 0.4559 | 0.0823 |
| 1.0768 | 35.09 | 2000 | 0.2721 | 0.4032 | 0.0728 |
| 1.0224 | 42.11 | 2400 | 0.2586 | 0.3825 | 0.0691 |
| 0.9817 | 49.12 | 2800 | 0.2458 | 0.3653 | 0.0653 |
| 0.941 | 56.14 | 3200 | 0.2306 | 0.3388 | 0.0605 |
| 0.9235 | 63.16 | 3600 | 0.2315 | 0.3380 | 0.0615 |
| 0.9141 | 70.18 | 4000 | 0.2293 | 0.3333 | 0.0602 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
{"language": ["hy"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hy", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-hy", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hy-AM", "type": "mozilla-foundation/common_voice_8_0", "args": "hy-AM"}, "metrics": [{"type": "wer", "value": 13.192818110850899, "name": "WER LM"}, {"type": "cer", "value": 2.787051087506323, "name": "CER LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "hy"}, "metrics": [{"type": "wer", "value": 22.246048764990867, "name": "Test WER"}, {"type": "cer", "value": 7.59406739840239, "name": "Test CER"}]}]}]}
|
arampacha/wav2vec2-xls-r-300m-hy
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hy",
"hf-asr-leaderboard",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hy"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hy #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the /WORKSPACE/DATA/HY/NOIZY\_STUDENT\_3/ - NA dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2293
* Wer: 0.3333
* Cer: 0.0602
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 842
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2
* Datasets 1.18.4.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 842\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hy #hf-asr-leaderboard #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 842\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
---
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
- text: "How many square kilometers of rainforest is covered in the basin?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
|
{}
|
aravind-812/roberta-train-json
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #roberta #question-answering #endpoints_compatible #region-us
|
---
datasets:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
- text: "How many square kilometers of rainforest is covered in the basin?"
context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
|
[] |
[
"TAGS\n#transformers #pytorch #jax #roberta #question-answering #endpoints_compatible #region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
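The card lacks a usage example; a minimal summarization sketch, assuming the checkpoint is loadable under its repository name:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_id = "arawat/pegasus-custom-xsum"  # repository name, assumed loadable
tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

text = "The quick brown fox jumped over the lazy dog. " * 10  # toy document
batch = tokenizer(text, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```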
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": []}]}
|
arawat/pegasus-custom-xsum
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of google/pegasus-large on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
[
"# results\n\nThis model is a fine-tuned version of google/pegasus-large on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of google/pegasus-large on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# HourAI bot based on DialoGPT
|
{"tags": ["conversational"]}
|
archmagos/HourAI
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# HourAI bot based on DialoGPT
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Mini-Me
|
{"tags": ["conversational"]}
|
ardatasc/miniMe-version1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mini-Me
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20-input_64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4335
- Bleu: 8.6652
- Gen Len: 18.2596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6351 | 1.0 | 7629 | 1.4335 | 8.6652 | 18.2596 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
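The card lacks a usage example; a minimal translation sketch, assuming the checkpoint is loadable under its repository name:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64"  # assumed loadable
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# t5-small was pretrained with task prefixes, so keep the WMT-style prefix
inputs = tokenizer("translate English to Romanian: The weather is nice today.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```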
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-dataset_20-input_64", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 8.6652, "name": "Bleu"}]}]}]}
|
aretw0/t5-small-finetuned-en-to-ro-dataset_20-input_64
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-en-to-ro-dataset\_20-input\_64
=================================================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4335
* Bleu: 8.6652
* Gen Len: 18.2596
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-dataset_20
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
- Bleu: 7.3293
- Gen Len: 18.2556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6029 | 1.0 | 7629 | 1.4052 | 7.3293 | 18.2556 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-dataset_20", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.3293, "name": "Bleu"}]}]}]}
|
aretw0/t5-small-finetuned-en-to-ro-dataset_20
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-en-to-ro-dataset\_20
=======================================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4052
* Bleu: 7.3293
* Gen Len: 18.2556
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-ro-epoch.04375
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4137
- Bleu: 7.3292
- Gen Len: 18.2541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.04375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 0.6211 | 0.04 | 1669 | 1.4137 | 7.3292 | 18.2541 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-en-to-ro-epoch.04375", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 7.3292, "name": "Bleu"}]}]}]}
|
aretw0/t5-small-finetuned-en-to-ro-epoch.04375
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-en-to-ro-epoch.04375
=======================================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4137
* Bleu: 7.3292
* Gen Len: 18.2541
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 0.04375
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.04375\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.04375\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
feature-extraction
|
transformers
|
hello
|
{}
|
argv947059/example-based-ner-bert
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
|
hello
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# citizenlab/distilbert-base-multilingual-cased-toxicity
This is a multilingual DistilBERT sequence classifier trained on the [JIGSAW Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) dataset.
## How to use it
```python
from transformers import pipeline
model_path = "citizenlab/distilbert-base-multilingual-cased-toxicity"
toxicity_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)
toxicity_classifier("this is a lovely message")
> [{'label': 'not_toxic', 'score': 0.9954179525375366}]
toxicity_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'toxic', 'score': 0.9948776960372925}]
```
## Evaluation
### Accuracy
```
Accuracy Score = 0.9425
F1 Score (Micro) = 0.9450549450549449
F1 Score (Macro) = 0.8491432341169309
```
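A minimal sketch of how numbers in this format can be computed with scikit-learn; the label lists below are hypothetical stand-ins for the real evaluation split:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and predictions over a held-out split.
y_true = ["toxic", "not_toxic", "not_toxic", "toxic"]
y_pred = ["toxic", "not_toxic", "toxic", "toxic"]

print("Accuracy Score =", accuracy_score(y_true, y_pred))
print("F1 Score (Micro) =", f1_score(y_true, y_pred, average="micro"))
print("F1 Score (Macro) =", f1_score(y_true, y_pred, average="macro"))
```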
|
{"language": ["en", "nl", "fr", "pt", "it", "es", "de", "da", "pl", "af"], "datasets": ["jigsaw_toxicity_pred"], "metrics": ["F1 Accuracy"], "pipeline_type": "text-classification", "widget": [{"text": "this is a lovely message", "example_title": "Example 1", "multi_class": false}, {"text": "you are an idiot and you and your family should go back to your country", "example_title": "Example 2", "multi_class": false}]}
|
citizenlab/distilbert-base-multilingual-cased-toxicity
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #en #nl #fr #pt #it #es #de #da #pl #af #dataset-jigsaw_toxicity_pred #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# citizenlab/distilbert-base-multilingual-cased-toxicity
This is a multilingual DistilBERT sequence classifier trained on the JIGSAW Toxic Comment Classification Challenge dataset.
## How to use it
## Evaluation
### Accuracy
|
[
"# citizenlab/distilbert-base-multilingual-cased-toxicity\n\nThis is multilingual Distil-Bert model sequence classifier trained based on JIGSAW Toxic Comment Classification Challenge dataset.",
"## How to use it",
"## Evaluation",
"### Accuracy"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #en #nl #fr #pt #it #es #de #da #pl #af #dataset-jigsaw_toxicity_pred #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# citizenlab/distilbert-base-multilingual-cased-toxicity\n\nThis is multilingual Distil-Bert model sequence classifier trained based on JIGSAW Toxic Comment Classification Challenge dataset.",
"## How to use it",
"## Evaluation",
"### Accuracy"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7751
- Accuracy: 0.9113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.315 | 1.0 | 318 | 3.3087 | 0.74 |
| 2.6371 | 2.0 | 636 | 1.8833 | 0.8381 |
| 1.5388 | 3.0 | 954 | 1.1547 | 0.8929 |
| 1.0076 | 4.0 | 1272 | 0.8590 | 0.9071 |
| 0.79 | 5.0 | 1590 | 0.7751 | 0.9113 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9112903225806451, "name": "Accuracy"}]}]}]}
|
arianpasquali/distilbert-base-uncased-finetuned-clinc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-clinc
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the clinc\_oos dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7751
* Accuracy: 0.9113
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 48
* eval\_batch\_size: 48
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.7.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.7.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.7.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
This is a multilingual XLM-RoBERTa sequence classifier, fine-tuned from the [Cardiff NLP Group](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) sentiment classification model.
## How to use it
```python
from transformers import pipeline
model_path = "citizenlab/twitter-xlm-roberta-base-sentiment-finetunned"
sentiment_classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)
sentiment_classifier("this is a lovely message")
> [{'label': 'Positive', 'score': 0.9918450713157654}]
sentiment_classifier("you are an idiot and you and your family should go back to your country")
> [{'label': 'Negative', 'score': 0.9849833846092224}]
```
## Evaluation
```
precision recall f1-score support
Negative 0.57 0.14 0.23 28
Neutral 0.78 0.94 0.86 132
Positive 0.89 0.80 0.85 51
accuracy 0.80 211
macro avg 0.75 0.63 0.64 211
weighted avg 0.78 0.80 0.77 211
```
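A minimal sketch of how a report in this format can be produced with scikit-learn; the label lists are hypothetical stand-ins for the 211-example evaluation split:

```python
from sklearn.metrics import classification_report

# Hypothetical gold labels and predictions.
y_true = ["Neutral", "Positive", "Negative", "Neutral", "Positive"]
y_pred = ["Neutral", "Positive", "Neutral", "Neutral", "Positive"]

# Prints per-class precision/recall/F1 plus accuracy and macro/weighted averages.
print(classification_report(y_true, y_pred))
```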
|
{"language": ["en", "nl", "fr", "pt", "it", "es", "de", "da", "pl", "af"], "datasets": ["jigsaw_toxicity_pred"], "metrics": ["F1 Accuracy"], "pipeline_type": "text-classification", "widget": [{"text": "this is a lovely message", "example_title": "Example 1", "multi_class": false}, {"text": "you are an idiot and you and your family should go back to your country", "example_title": "Example 2", "multi_class": false}]}
|
citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"nl",
"fr",
"pt",
"it",
"es",
"de",
"da",
"pl",
"af"
] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #en #nl #fr #pt #it #es #de #da #pl #af #dataset-jigsaw_toxicity_pred #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
This is a multilingual XLM-RoBERTa sequence classifier, fine-tuned from the Cardiff NLP Group sentiment classification model.
## How to use it
## Evaluation
|
[
"# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned\n\nThis is multilingual XLM-Roberta model sequence classifier fine tunned and based on Cardiff NLP Group sentiment classification model.",
"## How to use it",
"## Evaluation"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #en #nl #fr #pt #it #es #de #da #pl #af #dataset-jigsaw_toxicity_pred #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# citizenlab/twitter-xlm-roberta-base-sentiment-finetunned\n\nThis is multilingual XLM-Roberta model sequence classifier fine tunned and based on Cardiff NLP Group sentiment classification model.",
"## How to use it",
"## Evaluation"
] |
text-generation
|
transformers
|
# Rick DialoGPT Model
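A minimal usage sketch with the conversational pipeline shipped in the Transformers 4.x releases this card dates from; the repo id is taken from this card:

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="arifbhrn/DialogGPT-small-Rickk")

conversation = Conversation("Hi Rick, what are you working on?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```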
|
{"tags": ["conversational"]}
|
arifbhrn/DialogGPT-small-Rickk
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model
|
[
"# Rick DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Bengali using a subset of 40,000 utterances from the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). WER was tested on ~4,200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
Training script: train.py
Data prep notebook: https://colab.research.google.com/drive/1JMlZPU-DrezXjZ2t7sOVqn7CJjZhdK2q?usp=sharing
Inference notebook: https://colab.research.google.com/drive/1uKC2cK9JfUPDTUHbrNdOYqKtNozhxqgZ?usp=sharing
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-large-xlsr-bengali")
# model = model.to("cuda")
def speech_file_to_array_fn(path):
    # Load the audio and resample it to the 16 kHz rate the model expects.
    speech_array, sampling_rate = torchaudio.load(path)
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    return resampler(speech_array).squeeze().numpy()
speech_array = speech_file_to_array_fn("test_file.wav")
inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
preds = processor.batch_decode(predicted_ids)[0]
print(preds.replace("[PAD]",""))
```
**Test Result**: WER on ~4,200 utterances: 32.45%
|
{"language": "Bengali", "license": "cc-by-sa-4.0", "tags": ["bn", "audio", "automatic-speech-recognition", "speech"], "datasets": ["OpenSLR"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Bengali by Arijit", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR", "type": "OpenSLR", "args": "ben"}, "metrics": [{"type": "wer", "value": 32.45, "name": "Test WER"}]}]}]}
|
arijitx/wav2vec2-large-xlsr-bengali
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:OpenSLR",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"Bengali"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #bn #audio #speech #dataset-OpenSLR #license-cc-by-sa-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-Bengali
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Bengali using a subset of 40,000 utterances from the Bengali ASR training data set containing ~196K utterances. WER was tested on ~4,200 utterances held out from training.
When using this model, make sure that your speech input is sampled at 16kHz.
Training script: URL
Data prep notebook: URL
Inference notebook: URL
## Usage
The model can be used directly (without a language model) as follows:
Test Result: WER on ~4,200 utterances: 32.45%
|
[
"# Wav2Vec2-Large-XLSR-Bengali\nFine-tuned facebook/wav2vec2-large-xlsr-53 Bengali using a subset of 40,000 utterances from Bengali ASR training data set containing ~196K utterances. Tested WER using ~4200 held out from training.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\nTrain Script can be Found at : URL \n\n Data Prep Notebook : URL\n Inference Notebook : URL",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\nTest Result: WER on ~4200 utterance : 32.45 %"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #bn #audio #speech #dataset-OpenSLR #license-cc-by-sa-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-Bengali\nFine-tuned facebook/wav2vec2-large-xlsr-53 Bengali using a subset of 40,000 utterances from Bengali ASR training data set containing ~196K utterances. Tested WER using ~4200 held out from training.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\nTrain Script can be Found at : URL \n\n Data Prep Notebook : URL\n Inference Notebook : URL",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\nTest Result: WER on ~4200 utterance : 32.45 %"
] |
automatic-speech-recognition
|
transformers
|
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model:
- WER: 0.21726385291857586
- CER: 0.04725010353701041
With a 5-gram language model trained on 30M sentences randomly chosen from the [AI4Bharat IndicCorp](https://indicnlp.ai4bharat.org/corpora/) dataset:
- WER: 0.15322879016421437
- CER: 0.03413696666806267
Note: the evaluation set has 10,935 examples (5% of the total), none of which were part of training; training was done on the first 95% of the data and evaluation on the last 5%. Training was stopped after 180k steps. Output predictions are available under the files section.
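A hedged sketch of the LM-boosted decoding described above, using pyctcdecode with a KenLM ARPA file; `bn_5gram.arpa` is a placeholder path, and `speech_array` is assumed to be a 16 kHz mono waveform loaded elsewhere:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder

processor = Wav2Vec2Processor.from_pretrained("arijitx/wav2vec2-xls-r-300m-bengali")
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-xls-r-300m-bengali")

# Decoder vocabulary in CTC id order; map the word delimiter "|" to a space.
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]
vocab = [" " if tok == "|" else tok for tok in vocab]
decoder = build_ctcdecoder(vocab, kenlm_model_path="bn_5gram.arpa")  # placeholder path

inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits[0].numpy()
print(decoder.decode(logits))
```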
### Training hyperparameters
The following hyperparameters were used during training:
- dataset_name="openslr"
- model_name_or_path="facebook/wav2vec2-xls-r-300m"
- dataset_config_name="SLR53"
- output_dir="./wav2vec2-xls-r-300m-bengali"
- overwrite_output_dir
- num_train_epochs="50"
- per_device_train_batch_size="32"
- per_device_eval_batch_size="32"
- gradient_accumulation_steps="1"
- learning_rate="7.5e-5"
- warmup_steps="2000"
- length_column_name="input_length"
- evaluation_strategy="steps"
- text_column_name="sentence"
- chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — ’ … –
- save_steps="2000"
- eval_steps="3000"
- logging_steps="100"
- layerdrop="0.0"
- activation_dropout="0.1"
- save_total_limit="3"
- freeze_feature_encoder
- feat_proj_dropout="0.0"
- mask_time_prob="0.75"
- mask_time_length="10"
- mask_feature_prob="0.25"
- mask_feature_length="64"
- preprocessing_num_workers 32
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
Notes
- Training and eval code modified from: https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event.
- Bengali speech data was not available from the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 was used.
- A minimum audio duration of 0.5s was used to filter the training data, which excluded maybe 10-20 samples.
- OpenSLR53 transcripts are *not* part of the training data for the LM used in evaluation.
|
{"language": ["bn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "bn", "hf-asr-leaderboard", "openslr_SLR53", "robust-speech-event"], "datasets": ["openslr", "SLR53", "AI4Bharat/IndicCorp"], "metrics": ["wer", "cer"], "model-index": [{"name": "arijitx/wav2vec2-xls-r-300m-bengali", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Open SLR", "type": "openslr", "args": "SLR53"}, "metrics": [{"type": "wer", "value": 0.21726385291857586, "name": "Test WER"}, {"type": "cer", "value": 0.04725010353701041, "name": "Test CER"}, {"type": "wer", "value": 0.15322879016421437, "name": "Test WER with lm"}, {"type": "cer", "value": 0.03413696666806267, "name": "Test CER with lm"}]}]}]}
|
arijitx/wav2vec2-xls-r-300m-bengali
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"hf-asr-leaderboard",
"openslr_SLR53",
"robust-speech-event",
"dataset:openslr",
"dataset:SLR53",
"dataset:AI4Bharat/IndicCorp",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bn"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #bn #hf-asr-leaderboard #openslr_SLR53 #robust-speech-event #dataset-openslr #dataset-SLR53 #dataset-AI4Bharat/IndicCorp #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model :
- WER: 0.21726385291857586
- CER: 0.04725010353701041
With a 5-gram language model trained on 30M sentences randomly chosen from the AI4Bharat IndicCorp dataset:
- WER: 0.15322879016421437
- CER: 0.03413696666806267
Note: the evaluation set has 10,935 examples (5% of the total), none of which were part of training; training was done on the first 95% of the data and evaluation on the last 5%. Training was stopped after 180k steps. Output predictions are available under the files section.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset_name="openslr"
- model_name_or_path="facebook/wav2vec2-xls-r-300m"
- dataset_config_name="SLR53"
- output_dir="./wav2vec2-xls-r-300m-bengali"
- overwrite_output_dir
- num_train_epochs="50"
- per_device_train_batch_size="32"
- per_device_eval_batch_size="32"
- gradient_accumulation_steps="1"
- learning_rate="7.5e-5"
- warmup_steps="2000"
- length_column_name="input_length"
- evaluation_strategy="steps"
- text_column_name="sentence"
- chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — ’ … –
- save_steps="2000"
- eval_steps="3000"
- logging_steps="100"
- layerdrop="0.0"
- activation_dropout="0.1"
- save_total_limit="3"
- freeze_feature_encoder
- feat_proj_dropout="0.0"
- mask_time_prob="0.75"
- mask_time_length="10"
- mask_feature_prob="0.25"
- mask_feature_length="64"
- preprocessing_num_workers 32
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
Notes
- Training and eval code modified from: URL
- Bengali speech data was not available from the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 was used.
- A minimum audio duration of 0.5s was used to filter the training data, which excluded maybe 10-20 samples.
- OpenSLR53 transcripts are *not* part of the training data for the LM used in evaluation.
|
[
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n\n- dataset_name=\"openslr\" \t\n- model_name_or_path=\"facebook/wav2vec2-xls-r-300m\" \t\n- dataset_config_name=\"SLR53\" \t\n- output_dir=\"./wav2vec2-xls-r-300m-bengali\" \t\n- overwrite_output_dir \t\n- num_train_epochs=\"50\" \t\n- per_device_train_batch_size=\"32\" \t\n- per_device_eval_batch_size=\"32\" \t\n- gradient_accumulation_steps=\"1\" \t\n- learning_rate=\"7.5e-5\" \t\n- warmup_steps=\"2000\" \t\n- length_column_name=\"input_length\" \t\n- evaluation_strategy=\"steps\" \t\n- text_column_name=\"sentence\" \t\n- chars_to_ignore , ? . ! \\- \\; \\: \\\" “ % ‘ ” � — ’ … – \t\n- save_steps=\"2000\" \t\n- eval_steps=\"3000\" \t\n- logging_steps=\"100\" \t\n- layerdrop=\"0.0\" \t\n- activation_dropout=\"0.1\" \t\n- save_total_limit=\"3\" \t\n- freeze_feature_encoder \t\n- feat_proj_dropout=\"0.0\" \t\n- mask_time_prob=\"0.75\" \t\n- mask_time_length=\"10\" \t\n- mask_feature_prob=\"0.25\" \t\n- mask_feature_length=\"64\" \n- preprocessing_num_workers 32",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0\n\nNotes\n- Training and eval code modified from : URL \n- Bengali speech data was not available from common voice or librispeech multilingual datasets, so OpenSLR53 has been used.\n- Minimum audio duration of 0.5s has been used to filter the training data which excluded may be 10-20 samples.\n- OpenSLR53 transcripts are *not* part of LM training and LM used to evaluate."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #bn #hf-asr-leaderboard #openslr_SLR53 #robust-speech-event #dataset-openslr #dataset-SLR53 #dataset-AI4Bharat/IndicCorp #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n\n- dataset_name=\"openslr\" \t\n- model_name_or_path=\"facebook/wav2vec2-xls-r-300m\" \t\n- dataset_config_name=\"SLR53\" \t\n- output_dir=\"./wav2vec2-xls-r-300m-bengali\" \t\n- overwrite_output_dir \t\n- num_train_epochs=\"50\" \t\n- per_device_train_batch_size=\"32\" \t\n- per_device_eval_batch_size=\"32\" \t\n- gradient_accumulation_steps=\"1\" \t\n- learning_rate=\"7.5e-5\" \t\n- warmup_steps=\"2000\" \t\n- length_column_name=\"input_length\" \t\n- evaluation_strategy=\"steps\" \t\n- text_column_name=\"sentence\" \t\n- chars_to_ignore , ? . ! \\- \\; \\: \\\" “ % ‘ ” � — ’ … – \t\n- save_steps=\"2000\" \t\n- eval_steps=\"3000\" \t\n- logging_steps=\"100\" \t\n- layerdrop=\"0.0\" \t\n- activation_dropout=\"0.1\" \t\n- save_total_limit=\"3\" \t\n- freeze_feature_encoder \t\n- feat_proj_dropout=\"0.0\" \t\n- mask_time_prob=\"0.75\" \t\n- mask_time_length=\"10\" \t\n- mask_feature_prob=\"0.25\" \t\n- mask_feature_length=\"64\" \n- preprocessing_num_workers 32",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0\n\nNotes\n- Training and eval code modified from : URL \n- Bengali speech data was not available from common voice or librispeech multilingual datasets, so OpenSLR53 has been used.\n- Minimum audio duration of 0.5s has been used to filter the training data which excluded may be 10-20 samples.\n- OpenSLR53 transcripts are *not* part of LM training and LM used to evaluate."
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-xsum
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the wsj_markets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8497
- Rouge1: 15.3934
- Rouge2: 7.0378
- Rougel: 13.9522
- Rougelsum: 14.3541
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0964 | 1.0 | 1735 | 0.9365 | 18.703 | 12.7539 | 18.1293 | 18.5397 | 20.0 |
| 0.95 | 2.0 | 3470 | 0.8871 | 19.5223 | 13.0938 | 18.9148 | 18.8363 | 20.0 |
| 0.8687 | 3.0 | 5205 | 0.8587 | 15.0915 | 7.142 | 13.6693 | 14.5975 | 20.0 |
| 0.7989 | 4.0 | 6940 | 0.8569 | 18.243 | 11.4495 | 17.4326 | 17.489 | 20.0 |
| 0.7493 | 5.0 | 8675 | 0.8497 | 15.3934 | 7.0378 | 13.9522 | 14.3541 | 20.0 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.10.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["wsj_markets"], "metrics": ["rouge"], "model_index": [{"name": "bart-large-finetuned-xsum", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "dataset": {"name": "wsj_markets", "type": "wsj_markets", "args": "default"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 15.3934}}]}]}
|
aristotletan/bart-large-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:wsj_markets",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-wsj_markets #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
bart-large-finetuned-xsum
=========================
This model is a fine-tuned version of facebook/bart-large on the wsj\_markets dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8497
* Rouge1: 15.3934
* Rouge2: 7.0378
* Rougel: 13.9522
* Rougelsum: 14.3541
* Gen Len: 20.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.8.2
* Pytorch 1.9.0+cu102
* Datasets 1.10.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bart #text2text-generation #generated_from_trainer #dataset-wsj_markets #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the scim dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4632
- Accuracy: 0.9111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 2.0273 | 0.6667 |
| No log | 2.0 | 180 | 0.8802 | 0.8556 |
| No log | 3.0 | 270 | 0.5908 | 0.8889 |
| No log | 4.0 | 360 | 0.4632 | 0.9111 |
| No log | 5.0 | 450 | 0.4294 | 0.9111 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["scim"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-finetuned-sst2", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "scim", "type": "scim", "args": "eod"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9111111111111111}}]}]}
|
aristotletan/roberta-base-finetuned-sst2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:scim",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-scim #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-finetuned-sst2
===========================
This model is a fine-tuned version of roberta-base on the scim dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4632
* Accuracy: 0.9111
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-scim #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wsj_markets dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1447
- Rouge1: 10.4492
- Rouge2: 3.9563
- Rougel: 9.3368
- Rougelsum: 9.9828
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 2.2742 | 1.0 | 868 | 1.3135 | 9.4644 | 2.618 | 8.4048 | 8.9764 | 19.0 |
| 1.4607 | 2.0 | 1736 | 1.2134 | 9.6327 | 3.8535 | 9.0703 | 9.2466 | 19.0 |
| 1.3579 | 3.0 | 2604 | 1.1684 | 10.1616 | 3.5498 | 9.2294 | 9.4507 | 19.0 |
| 1.3314 | 4.0 | 3472 | 1.1514 | 10.0621 | 3.6907 | 9.1635 | 9.4955 | 19.0 |
| 1.3084 | 5.0 | 4340 | 1.1447 | 10.4492 | 3.9563 | 9.3368 | 9.9828 | 19.0 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.10.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wsj_markets"], "metrics": ["rouge"], "model_index": [{"name": "t5-small-finetuned-xsum", "results": [{"task": {"name": "Sequence-to-sequence Language Modeling", "type": "text2text-generation"}, "dataset": {"name": "wsj_markets", "type": "wsj_markets", "args": "default"}, "metric": {"name": "Rouge1", "type": "rouge", "value": 10.4492}}]}]}
|
aristotletan/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wsj_markets",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wsj_markets #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-xsum
=======================
This model is a fine-tuned version of t5-small on the wsj\_markets dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1447
* Rouge1: 10.4492
* Rouge2: 3.9563
* Rougel: 9.3368
* Rougelsum: 9.9828
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.8.2
* Pytorch 1.9.0+cu102
* Datasets 1.10.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wsj_markets #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15892673
## Validation Metrics
- Loss: 1.3661842346191406
- Rouge1: 50.8868
- Rouge2: 26.996
- RougeL: 42.9088
- RougeLsum: 46.6748
- Gen Len: 20.716
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/arjun3816/autonlp-pegas_large_samsum-15892673
```
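The same request can be made from Python; a minimal `requests` sketch mirroring the cURL call above:

```python
import requests

API_URL = "https://api-inference.huggingface.co/arjun3816/autonlp-pegas_large_samsum-15892673"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoNLP"})
print(response.json())
```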
|
{"language": "unk", "tags": "autonlp", "datasets": ["arjun3816/autonlp-data-pegas_large_samsum"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
arjun3816/autonlp-pegas_large_samsum-15892673
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:arjun3816/autonlp-data-pegas_large_samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-arjun3816/autonlp-data-pegas_large_samsum #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15892673
## Validation Metrics
- Loss: 1.3661842346191406
- Rouge1: 50.8868
- Rouge2: 26.996
- RougeL: 42.9088
- RougeLsum: 46.6748
- Gen Len: 20.716
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 15892673",
"## Validation Metrics\n\n- Loss: 1.3661842346191406\n- Rouge1: 50.8868\n- Rouge2: 26.996\n- RougeL: 42.9088\n- RougeLsum: 46.6748\n- Gen Len: 20.716",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-arjun3816/autonlp-data-pegas_large_samsum #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 15892673",
"## Validation Metrics\n\n- Loss: 1.3661842346191406\n- Rouge1: 50.8868\n- Rouge2: 26.996\n- RougeL: 42.9088\n- RougeLsum: 46.6748\n- Gen Len: 20.716",
"## Usage\n\nYou can use cURL to access this model:"
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15492651
## Validation Metrics
- Loss: 1.4060134887695312
- Rouge1: 50.9953
- Rouge2: 35.9204
- RougeL: 43.5673
- RougeLsum: 46.445
- Gen Len: 58.0193
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/arjun3816/autonlp-sam_summarization1-15492651
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["arjun3816/autonlp-data-sam_summarization1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
arjun3816/autonlp-sam_summarization1-15492651
| null |
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:arjun3816/autonlp-data-sam_summarization1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-arjun3816/autonlp-data-sam_summarization1 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 15492651
## Validation Metrics
- Loss: 1.4060134887695312
- Rouge1: 50.9953
- Rouge2: 35.9204
- RougeL: 43.5673
- RougeLsum: 46.445
- Gen Len: 58.0193
## Usage
You can use cURL to access this model:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 15492651",
"## Validation Metrics\n\n- Loss: 1.4060134887695312\n- Rouge1: 50.9953\n- Rouge2: 35.9204\n- RougeL: 43.5673\n- RougeLsum: 46.445\n- Gen Len: 58.0193",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autonlp #unk #dataset-arjun3816/autonlp-data-sam_summarization1 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 15492651",
"## Validation Metrics\n\n- Loss: 1.4060134887695312\n- Rouge1: 50.9953\n- Rouge2: 35.9204\n- RougeL: 43.5673\n- RougeLsum: 46.445\n- Gen Len: 58.0193",
"## Usage\n\nYou can use cURL to access this model:"
] |
null | null |
# Noise2Recon
> **Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising**\
> Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\
> https://arxiv.org/abs/2110.00075
This repository contains the artifacts for the Noise2Recon paper. To use our code
and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
|
{"language": "en", "license": "apache-2.0", "tags": ["mri", "reconstruction", "denoising"]}
|
arjundd/noise2recon-release
| null |
[
"mri",
"reconstruction",
"denoising",
"en",
"arxiv:2110.00075",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.00075"
] |
[
"en"
] |
TAGS
#mri #reconstruction #denoising #en #arxiv-2110.00075 #license-apache-2.0 #region-us
|
# Noise2Recon
> Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising\
> Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\
> URL
This repository contains the artifacts for the Noise2Recon paper. To use our code
and artifacts in your research, please use the Meddlr package.
|
[
"# Noise2Recon\r\n\r\n> Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising\\\r\n> Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\\\r\n> URL\r\n\r\nThis repository contains the artifacts for the Noise2Recon paper. To use our code\r\nand artifacts in your research, please use the Meddlr package."
] |
[
"TAGS\n#mri #reconstruction #denoising #en #arxiv-2110.00075 #license-apache-2.0 #region-us \n",
"# Noise2Recon\r\n\r\n> Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising\\\r\n> Arjun Desai, Batu Ozturkler, Christopher Sandino, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\\\r\n> URL\r\n\r\nThis repository contains the artifacts for the Noise2Recon paper. To use our code\r\nand artifacts in your research, please use the Meddlr package."
] |
null | null |
# VORTEX
<div align="center">
<img src="https://drive.google.com/uc?export=view&id=1q0jAm6Kg5ZhRg3h0w0ZbtIgcRF3_-Vgb" alt="Vortex Schematic" width="700px" />
</div>
> **VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction**\
> Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\
> https://arxiv.org/abs/2111.02549
This repository contains the artifacts for the VORTEX paper. To use our code
and artifacts in your research, please use the [Meddlr](https://github.com/ad12/meddlr) package.
|
{"language": "en", "license": "apache-2.0", "tags": ["mri", "reconstruction", "artifact correction"]}
|
arjundd/vortex-release
| null |
[
"mri",
"reconstruction",
"artifact correction",
"en",
"arxiv:2111.02549",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2111.02549"
] |
[
"en"
] |
TAGS
#mri #reconstruction #artifact correction #en #arxiv-2111.02549 #license-apache-2.0 #region-us
|
# VORTEX
<div align="center">
<img src="URL alt="Vortex Schematic" width="700px" />
</div>
> VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction\
> Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\
> URL
This repository contains the artifacts for the VORTEX paper. To use our code
and artifacts in your research, please use the Meddlr package.
|
[
"# VORTEX\r\n\r\n<div align=\"center\">\r\n <img src=\"URL alt=\"Vortex Schematic\" width=\"700px\" />\r\n</div>\r\n\r\n> VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction\\\r\n> Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\\\r\n> URL\r\n\r\nThis repository contains the artifacts for the VORTEX paper. To use our code\r\nand artifacts in your research, please use the Meddlr package."
] |
[
"TAGS\n#mri #reconstruction #artifact correction #en #arxiv-2111.02549 #license-apache-2.0 #region-us \n",
"# VORTEX\r\n\r\n<div align=\"center\">\r\n <img src=\"URL alt=\"Vortex Schematic\" width=\"700px\" />\r\n</div>\r\n\r\n> VORTEX: Physics-Driven Data Augmentations for Consistency Training for Robust Accelerated MRI Reconstruction\\\r\n> Arjun Desai, Beliz Gunel, Batu Ozturkler, Harris Beg, Shreyas Vasanawala, Brian Hargreaves, Christopher Ré, John Pauly, Akshay Chaudhari\\\r\n> URL\r\n\r\nThis repository contains the artifacts for the VORTEX paper. To use our code\r\nand artifacts in your research, please use the Meddlr package."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
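Pending a fuller card, a minimal inference sketch (assuming the standard `transformers` pipeline API; the input sentence is illustrative, and the label set depends on how the amazon_reviews_multi targets were mapped):
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2",
)
# Multilingual input is fine: the base model is distilbert-base-multilingual-cased.
print(classifier("This product exceeded my expectations."))
```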
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
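For reference, a sketch of how these values would map onto `transformers.TrainingArguments` (illustrative only; the SageMaker data-parallel launcher configuration is outside the scope of this card):
```python
from transformers import TrainingArguments

# Per-device batch size 16 across 8 devices gives the reported total of 128.
args = TrainingArguments(
    output_dir="distilbert-base-multilingual-cased-sentiment-2",
    learning_rate=0.00024,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=33,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,  # Native AMP
)
```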
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-multilingual-cased-sentiment-2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "en"}, "metrics": [{"type": "accuracy", "value": 0.7614, "name": "Accuracy"}, {"type": "f1", "value": 0.7614, "name": "F1"}]}]}]}
|
arjuntheprogrammer/distilbert-base-multilingual-cased-sentiment-2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-multilingual-cased-sentiment-2
This model is a fine-tuned version of distilbert-base-multilingual-cased on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5882
- Accuracy: 0.7614
- F1: 0.7614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00024
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- distributed_type: sagemaker_data_parallel
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
[
"# distilbert-base-multilingual-cased-sentiment-2\n\nThis model is a fine-tuned version of distilbert-base-multilingual-cased on the amazon_reviews_multi dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5882\n- Accuracy: 0.7614\n- F1: 0.7614",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00024\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 33\n- distributed_type: sagemaker_data_parallel\n- num_devices: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.3\n- Pytorch 1.9.1\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-multilingual-cased-sentiment-2\n\nThis model is a fine-tuned version of distilbert-base-multilingual-cased on the amazon_reviews_multi dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5882\n- Accuracy: 0.7614\n- F1: 0.7614",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.00024\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 33\n- distributed_type: sagemaker_data_parallel\n- num_devices: 8\n- total_train_batch_size: 128\n- total_eval_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.3\n- Pytorch 1.9.1\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
BERTweet-FA: A pre-trained language model for Persian (a.k.a. Farsi) Tweets
---
BERTweet-FA is a transformer-based model trained on 20,665,964 Persian tweets. The model has been trained on this data for only one epoch (322,906 steps), yet it can already recognize the meaning of most conversational sentences used in Farsi. Note that the architecture of this model follows the original BERT [[Devlin et al.](https://arxiv.org/abs/1810.04805)].
How to use the Model
---
```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained('arm-on/BERTweet-FA')
tokenizer = BertTokenizer.from_pretrained('arm-on/BERTweet-FA')
fill_sentence = pipeline('fill-mask', model=model, tokenizer=tokenizer)
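# The Persian placeholder below means: "Write your sentence here and replace the target word with [MASK]".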
fill_sentence('اینجا جمله مورد نظر خود را بنویسید و کلمه موردنظر را [MASK] کنید')
```
The Training Data
---
The first version of the model was trained on the "[Large Scale Colloquial Persian Dataset](https://iasbs.ac.ir/~ansari/lscp/)", which contains more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020.
Evaluation
---
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:-----:|
| 0.0036 | 1.0 | 322906 |
Contributors
---
- Arman Malekzadeh [[Github](https://github.com/arm-on)]
|
{"language": "fa", "license": "apache-2.0", "tags": ["BERTweet"], "widget": [{"text": "\u0627\u06cc\u0646 \u0628\u0648\u062f [MASK] \u0647\u0627\u06cc \u0645\u0627\u061f"}, {"text": "\u062f\u0627\u062f\u0627\u0686 \u062f\u0627\u0631\u06cc [MASK] \u0645\u06cc\u0632\u0646\u06cc"}, {"text": "\u0628\u0647 \u0639\u0644\u06cc [MASK] \u0645\u06cc\u06af\u0641\u062a\u0646 \u062c\u0627\u062f\u0648\u06af\u0631"}, {"text": "\u0622\u062e\u0647 \u0645\u062d\u0633\u0646 [MASK] \u0647\u0645 \u0634\u062f \u062e\u0648\u0627\u0646\u0646\u062f\u0647\u061f"}, {"text": "\u067e\u0633\u0631 \u0639\u062c\u0628 [MASK] \u0632\u062f"}], "model-index": [{"name": "BERTweet-FA", "results": []}]}
|
arm-on/BERTweet-FA
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"BERTweet",
"fa",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[
"fa"
] |
TAGS
#transformers #pytorch #bert #fill-mask #BERTweet #fa #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERTweet-FA: A pre-trained language model for Persian (a.k.a. Farsi) Tweets
--------------------------------------------------------------------------
BERTweet-FA is a transformer-based model trained on 20,665,964 Persian tweets. The model has been trained on this data for only one epoch (322,906 steps), yet it can already recognize the meaning of most conversational sentences used in Farsi. Note that the architecture of this model follows the original BERT [Devlin et al.].
How to use the Model
--------------------
The Training Data
-----------------
The first version of the model was trained on the "Large Scale Colloquial Persian Dataset", which contains more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020.
Evaluation
----------
Contributors
------------
* Arman Malekzadeh [Github]
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #BERTweet #fa #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-xxlarge-v2-squad2-covid-qa-deepset
This model is a fine-tuned version of [mfeb/albert-xxlarge-v2-squad2](https://huggingface.co/mfeb/albert-xxlarge-v2-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
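In the absence of further details, a minimal extractive-QA sketch (assuming the standard `transformers` question-answering pipeline; the question and context are illustrative placeholders, and the same pattern applies to the other covid_qa_deepset checkpoints in this collection):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="armageddon/albert-xxlarge-v2-squad2-covid-qa-deepset",
)
result = qa(
    question="What is the average incubation period?",
    context="Recent studies estimate the average incubation period to be about five days.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```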
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "albert-xxlarge-v2-squad2-covid-qa-deepset", "results": []}]}
|
armageddon/albert-xxlarge-v2-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
# albert-xxlarge-v2-squad2-covid-qa-deepset
This model is a fine-tuned version of mfeb/albert-xxlarge-v2-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
[
"# albert-xxlarge-v2-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of mfeb/albert-xxlarge-v2-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"# albert-xxlarge-v2-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of mfeb/albert-xxlarge-v2-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_bert_base_uncased_squad2
This model is a fine-tuned version of [twmkn9/bert-base-uncased-squad2](https://huggingface.co/twmkn9/bert-base-uncased-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_bert_base_uncased_squad2", "results": []}]}
|
armageddon/bert-base-uncased-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
# covid_qa_analysis_bert_base_uncased_squad2
This model is a fine-tuned version of twmkn9/bert-base-uncased-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# covid_qa_analysis_bert_base_uncased_squad2\n\nThis model is a fine-tuned version of twmkn9/bert-base-uncased-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"# covid_qa_analysis_bert_base_uncased_squad2\n\nThis model is a fine-tuned version of twmkn9/bert-base-uncased-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of [phiyodr/bert-large-finetuned-squad2](https://huggingface.co/phiyodr/bert-large-finetuned-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "bert-large-uncased-squad2-covid-qa-deepset", "results": []}]}
|
armageddon/bert-large-uncased-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
# bert-large-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of phiyodr/bert-large-finetuned-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
[
"# bert-large-uncased-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of phiyodr/bert-large-finetuned-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"# bert-large-uncased-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of phiyodr/bert-large-finetuned-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_albert_base_squad_v2
This model is a fine-tuned version of [abhilash1910/albert-squad-v2](https://huggingface.co/abhilash1910/albert-squad-v2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_albert_base_squad_v2", "results": []}]}
|
armageddon/albert-squad-v2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-apache-2.0 #endpoints_compatible #region-us
|
# covid_qa_analysis_albert_base_squad_v2
This model is a fine-tuned version of abhilash1910/albert-squad-v2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# covid_qa_analysis_albert_base_squad_v2\n\nThis model is a fine-tuned version of abhilash1910/albert-squad-v2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-apache-2.0 #endpoints_compatible #region-us \n",
"# covid_qa_analysis_albert_base_squad_v2\n\nThis model is a fine-tuned version of abhilash1910/albert-squad-v2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-base-squad2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_roberta-base-squad2", "results": []}]}
|
armageddon/roberta-base-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-cc-by-4.0 #endpoints_compatible #region-us
|
# covid_qa_analysis_roberta-base-squad2
This model is a fine-tuned version of deepset/roberta-base-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# covid_qa_analysis_roberta-base-squad2\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# covid_qa_analysis_roberta-base-squad2\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid_qa_analysis_roberta-large-squad2
This model is a fine-tuned version of [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "covid_qa_analysis_roberta-large-squad2", "results": []}]}
|
armageddon/roberta-large-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
# covid_qa_analysis_roberta-large-squad2
This model is a fine-tuned version of deepset/roberta-large-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
[
"# covid_qa_analysis_roberta-large-squad2\n\nThis model is a fine-tuned version of deepset/roberta-large-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"# covid_qa_analysis_roberta-large-squad2\n\nThis model is a fine-tuned version of deepset/roberta-large-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "distilbert-base-uncased-squad2-covid-qa-deepset", "results": []}]}
|
armageddon/distilbert-base-uncased-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us
|
# distilbert-base-uncased-squad2-covid-qa-deepset
This model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# distilbert-base-uncased-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-covid_qa_deepset #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of twmkn9/distilbert-base-uncased-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-squad2-covid-qa-deepset
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["covid_qa_deepset"], "model-index": [{"name": "electra-base-squad2-covid-qa-deepset", "results": []}]}
|
armageddon/electra-base-squad2-covid-qa-deepset
| null |
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:covid_qa_deepset",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #electra #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-cc-by-4.0 #endpoints_compatible #region-us
|
# electra-base-squad2-covid-qa-deepset
This model is a fine-tuned version of deepset/electra-base-squad2 on the covid_qa_deepset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# electra-base-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of deepset/electra-base-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #electra #question-answering #generated_from_trainer #dataset-covid_qa_deepset #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# electra-base-squad2-covid-qa-deepset\n\nThis model is a fine-tuned version of deepset/electra-base-squad2 on the covid_qa_deepset dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: tpu\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the WikiText-2 dataset (recorded as `None` by the auto-generated card).
It achieves the following results on the evaluation set:
- Loss: 6.8596
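For intuition, and assuming the reported loss is the mean token-level cross-entropy, this corresponds to a masked-LM perplexity of roughly exp(6.8596) ≈ 953:
```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
print(math.exp(6.8596))  # ≈ 953
```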
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0963 | 1.0 | 2346 | 7.0570 |
| 6.9063 | 2.0 | 4692 | 6.8721 |
| 6.8585 | 3.0 | 7038 | 6.8931 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]}
|
arman0320/bert-base-cased-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-cased-wikitext2
=========================
This model is a fine-tuned version of bert-base-cased on the WikiText-2 dataset (recorded as `None` by the auto-generated card).
It achieves the following results on the evaluation set:
* Loss: 6.8596
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
**A casual chatbot**
This is a DialoGPT-medium model fine-tuned to talk like Tony Stark. It is currently trained only on the script of Iron Man 3.
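A minimal single-turn chat sketch (assuming the standard DialoGPT generation recipe; the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arnav7633/DialoGPT-medium-tony_stark")
model = AutoModelForCausalLM.from_pretrained("arnav7633/DialoGPT-medium-tony_stark")

# Encode one user turn, terminated by the end-of-sequence token.
input_ids = tokenizer.encode("Who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```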
|
{"language": ["en"], "license": "MIT", "tags": ["conversational"]}
|
arnav7633/DialoGPT-medium-tony_stark
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
A casual chatbot
This is a DialoGPT-medium model fine-tuned to talk like Tony Stark. It is currently trained only on the script of Iron Man 3.
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification
|
transformers
|
# Model description
**bert-base-uncased-kin** is a fine-tuned BERT base (uncased) model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com).
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- Lack of analysis concerning the bias and fairness of these models may make them dangerous if deployed in production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
| Model Name | Precision | Recall | F1-score |
|-|-|-|-|
| **bert-base-uncased-kin** | 75.00 | 80.09 | 77.47 |
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
```
|
{"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]}
|
arnolfokam/bert-base-uncased-kin
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"kin"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #kin #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model description
=================
bert-base-uncased-kin is a fine-tuned BERT base (uncased) model. It has been trained to recognize four types of entities:
* dates & time (DATE)
* Location (LOC)
* Organizations (ORG)
* Person (PER)
Intended Use
============
* Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
* Not intended for practical purposes.
Training Data
=============
This model was fine-tuned on the Kinyarwanda corpus (kin) of the MasakhaNER dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 entity groups.
Training procedure
==================
This model was trained on a single NVIDIA P5000 from Paperspace.
#### Hyperparameters
* Learning Rate: 5e-5
* Batch Size: 32
* Maximum Sequence Length: 164
* Epochs: 30
Evaluation Data
===============
We evaluated this model on the test split of the Kinyarwanda corpus (kin) of the MasakhaNER dataset, with no thresholding.
Metrics
=======
* Precision
* Recall
* F1-score
Limitations
===========
* The size of the pre-trained language model prevents its usage in anything other than research.
* Lack of analysis concerning the bias and fairness of these models may make them dangerous if deployed in production systems.
* The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
Caveats and Recommendations
===========================
* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.
Results
=======
Usage
=====
|
[
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Kinyarwandan corpus (kin) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #kin #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Kinyarwandan corpus (kin) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
token-classification
|
transformers
|
# Model description
**bert-base-uncased-pcm** is a fine-tuned BERT base (uncased) model. It has been trained to recognize four types of entities:
- dates & time (DATE)
- Location (LOC)
- Organizations (ORG)
- Person (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Nigerian Pidgin corpus **(pcm)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10 (see the sketch below).
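For illustration, the thresholding could be reproduced along the following lines (a hedged sketch: the filtering script is not published with this card, and reading an entity group as a span opened by a `B-` tag is our assumption):

```python
from datasets import load_dataset

MAX_ENTITY_GROUPS = 10  # cap described above

# Load the MasakhaNER Nigerian Pidgin corpus (assumes the masakhaner dataset script is available)
dataset = load_dataset("masakhaner", "pcm")
label_names = dataset["train"].features["ner_tags"].feature.names

def count_entity_groups(example):
    # Each B-* tag opens a new entity span, so counting them counts entity groups
    tags = [label_names[t] for t in example["ner_tags"]]
    return sum(tag.startswith("B-") for tag in tags)

train_data = dataset["train"].filter(lambda ex: count_entity_groups(ex) <= MAX_ENTITY_GROUPS)
```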
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com).
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
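The hyperparameters above map directly onto a `transformers` training configuration. A minimal sketch using the standard `TrainingArguments` API (an illustration, not the authors' actual training script; the output directory name is our assumption):

```python
from transformers import TrainingArguments

# Reported hyperparameters expressed as TrainingArguments; the maximum
# sequence length of 164 is applied at tokenization time, not here.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-pcm",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    num_train_epochs=30,
)
```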
# Evaluation Data
We evaluated this model on the test split of the Nigerian Pidgin corpus **(pcm)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-pcm**| 88.61 | 84.17 | 86.33
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-pcm")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-pcm")

# Build a token-classification (NER) pipeline and tag an example sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."
ner_results = nlp(example)
print(ner_results)
```
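The pipeline above returns one prediction per sub-word token. If whole-entity spans are preferred, recent `transformers` versions support merging tokens via `aggregation_strategy` (an optional variant, not part of the original card):

```python
# Continuing from the snippet above: group sub-word tokens into whole
# entities (requires a recent transformers release)
nlp_grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp_grouped(example))
```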
|
{"language": ["pcm"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Mixed Martial Arts joinbodi, Ultimate Fighting Championship, UFC don decide say dem go enta back di octagon on Saturday, 9 May, for Jacksonville, Florida."}]}
|
arnolfokam/bert-base-uncased-pcm
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"pcm",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pcm"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #pcm #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model description
=================
bert-base-uncased-pcm is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
* Dates & times (DATE)
* Locations (LOC)
* Organizations (ORG)
* Persons (PER)
Intended Use
============
* Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
* Not intended for practical purposes.
Training Data
=============
This model was fine-tuned on the Nigerian Pidgin corpus (pcm) of the MasakhaNER dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
Training procedure
==================
This model was trained on a single NVIDIA P5000 from Paperspace.
#### Hyperparameters
* Learning Rate: 5e-5
* Batch Size: 32
* Maximum Sequence Length: 164
* Epochs: 30
Evaluation Data
===============
We evaluated this model on the test split of the Nigerian Pidgin corpus (pcm) present in the MasakhaNER dataset with no thresholding.
Metrics
=======
* Precision
* Recall
* F1-score
Limitations
===========
* The size of the pre-trained language model prevents its usage in anything other than research.
* The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
* The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
Caveats and Recommendations
===========================
* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.
Results
=======
Usage
=====
|
[
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Swahili corpus (pcm) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #pcm #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Swahili corpus (pcm) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
token-classification
|
transformers
|
# Model description
**bert-base-uncased-swa** is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
- Dates & times (DATE)
- Locations (LOC)
- Organizations (ORG)
- Persons (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Swahili corpus **(swa)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com).
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Swahili corpus **(swa)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
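These are the standard entity-level scores. With the `seqeval` package they could be computed along these lines (a sketch with toy IOB2-tagged sequences, not the authors' evaluation script):

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Toy gold and predicted tag sequences in IOB2 format, for illustration only
references = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
predictions = [["B-PER", "I-PER", "O", "B-ORG", "O"]]

print("Precision:", precision_score(references, predictions))
print("Recall:", recall_score(references, predictions))
print("F1-score:", f1_score(references, predictions))
```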
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**bert-base-uncased-swa**| 83.38 | 89.32 | 86.26
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-swa")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-swa")

# Build a token-classification (NER) pipeline and tag an example sentence
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."
ner_results = nlp(example)
print(ner_results)
```
|
{"language": ["swa"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Wizara ya afya ya Tanzania imeripoti Jumatatu kuwa, watu takriban 14 zaidi wamepata maambukizi ya Covid-19."}]}
|
arnolfokam/bert-base-uncased-swa
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"swa",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"swa"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #swa #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model description
=================
bert-base-uncased-swa is a fine-tuned version of the BERT base uncased model. It has been trained to recognize four types of entities:
* Dates & times (DATE)
* Locations (LOC)
* Organizations (ORG)
* Persons (PER)
Intended Use
============
* Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
* Not intended for practical purposes.
Training Data
=============
This model was fine-tuned on the Swahili corpus (swa) of the MasakhaNER dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
Training procedure
==================
This model was trained on a single NVIDIA P5000 from Paperspace.
#### Hyperparameters
* Learning Rate: 5e-5
* Batch Size: 32
* Maximum Sequence Length: 164
* Epochs: 30
Evaluation Data
===============
We evaluated this model on the test split of the Swahili corpus (swa) present in the MasakhaNER dataset with no thresholding.
Metrics
=======
* Precision
* Recall
* F1-score
Limitations
===========
* The size of the pre-trained language model prevents its usage in anything other than research.
* The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
* The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
Caveats and Recommendations
===========================
* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.
Results
=======
Usage
=====
|
[
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Swahili corpus (swa) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #swa #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Swahili corpus (swa) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
token-classification
|
transformers
|
# Model description
**mbert-base-uncased-kin** is a fine-tuned version of the multilingual BERT base uncased model. It has been trained to recognize four types of entities:
- Dates & times (DATE)
- Locations (LOC)
- Organizations (ORG)
- Persons (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com).
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset with no thresholding.
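For reference, the evaluation split can be loaded directly (a sketch, assuming the `masakhaner` dataset script is available):

```python
from datasets import load_dataset

# Test split of the Kinyarwanda corpus, used here without thresholding
test_data = load_dataset("masakhaner", "kin", split="test")
print(test_data)
```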
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-kin**| 81.35 | 83.98 | 82.64
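As a quick arithmetic check (ours, not from the original card), the reported F1-score is the harmonic mean of the reported precision and recall:

```latex
\mathrm{F1} = \frac{2PR}{P + R} = \frac{2 \times 81.35 \times 83.98}{81.35 + 83.98} \approx 82.64
```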
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
```
|
{"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]}
|
arnolfokam/mbert-base-uncased-kin
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"kin"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #kin #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model description
=================
mbert-base-uncased-kin is a fine-tuned version of the multilingual BERT base uncased model. It has been trained to recognize four types of entities:
* Dates & times (DATE)
* Locations (LOC)
* Organizations (ORG)
* Persons (PER)
Intended Use
============
* Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
* Not intended for practical purposes.
Training Data
=============
This model was fine-tuned on the Kinyarwanda corpus (kin) of the MasakhaNER dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
Training procedure
==================
This model was trained on a single NVIDIA P5000 from Paperspace.
#### Hyperparameters
* Learning Rate: 5e-5
* Batch Size: 32
* Maximum Sequence Length: 164
* Epochs: 30
Evaluation Data
===============
We evaluated this model on the test split of the Kinyarwanda corpus (kin) present in the MasakhaNER dataset with no thresholding.
Metrics
=======
* Precision
* Recall
* F1-score
Limitations
===========
* The size of the pre-trained language model prevents its usage in anything other than research.
* The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
* The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
Caveats and Recommendations
===========================
* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.
Results
=======
Usage
=====
|
[
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Kinyarwandan corpus (kin) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #kin #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Kinyarwandan corpus (kin) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
token-classification
|
transformers
|
# Model description
**mbert-base-uncased-ner-kin** is based on a multilingual BERT base uncased model that was previously fine-tuned for Named Entity Recognition on 10 high-resourced languages. It has been trained to recognize four types of entities:
- Dates & times (DATE)
- Locations (LOC)
- Organizations (ORG)
- Persons (PER)
# Intended Use
- Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
- Not intended for practical purposes.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com).
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
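Note that the maximum sequence length above is enforced during tokenization rather than in the optimizer settings. A hedged sketch (the preprocessing script is not published with this card, and the base checkpoint name is our assumption):

```python
from transformers import AutoTokenizer

# Truncate pre-tokenized inputs to the reported maximum sequence length
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
encoded = tokenizer(
    ["Rayon", "Sports", "yasinyishije", "rutahizamu"],
    is_split_into_words=True,
    truncation=True,
    max_length=164,
)
```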
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** present in the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
Model Name| Precision | Recall | F1-score
-|-|-|-
**mbert-base-uncased-ner-kin**| 81.95 | 81.55 | 81.75
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/mbert-base-uncased-ner-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
```
|
{"language": ["kin"], "license": "apache-2.0", "tags": ["NER"], "datasets": ["masakhaner"], "metrics": ["f1", "precision", "recall"], "widget": [{"text": "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n\u2019u Rwanda, bushingiye nanone ku bufatanye hagati y\u2019imigabane ya Afurika n\u2019u Burayi."}]}
|
arnolfokam/mbert-base-uncased-ner-kin
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"kin",
"dataset:masakhaner",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"kin"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #kin #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Model description
=================
mbert-base-uncased-ner-kin is based on a multilingual BERT base uncased model that was previously fine-tuned for Named Entity Recognition on 10 high-resourced languages. It has been trained to recognize four types of entities:
* Dates & times (DATE)
* Locations (LOC)
* Organizations (ORG)
* Persons (PER)
Intended Use
============
* Intended to be used for research purposes concerning Named Entity Recognition for African Languages.
* Not intended for practical purposes.
Training Data
=============
This model was fine-tuned on the Kinyarwanda corpus (kin) of the MasakhaNER dataset. However, we thresholded the number of entity groups per sentence in this dataset to 10.
Training procedure
==================
This model was trained on a single NVIDIA P5000 from Paperspace.
#### Hyperparameters
* Learning Rate: 5e-5
* Batch Size: 32
* Maximum Sequence Length: 164
* Epochs: 30
Evaluation Data
===============
We evaluated this model on the test split of the Kinyarwanda corpus (kin) present in the MasakhaNER dataset with no thresholding.
Metrics
=======
* Precision
* Recall
* F1-score
Limitations
===========
* The size of the pre-trained language model prevents its usage in anything other than research.
* The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into a production system.
* The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
Caveats and Recommendations
===========================
* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.
Results
=======
Usage
=====
|
[
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Kinyarwandan corpus (kin) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #kin #dataset-masakhaner #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Hyperparameters\n\n\n* Learning Rate: 5e-5\n* Batch Size: 32\n* Maximum Sequence Length: 164\n* Epochs: 30\n\n\nEvaluation Data\n===============\n\n\nWe evaluated this model on the test split of the Kinyarwandan corpus (kin) present in the MasakhaNER with no thresholding.\n\n\nMetrics\n=======\n\n\n* Precision\n* Recall\n* F1-score\n\n\nLimitations\n===========\n\n\n* The size of the pre-trained language model prevents its usage in anything other than research.\n* Lack of analysis concerning the bias and fairness in these models may make them dangerous if deployed into production system.\n* The train data is a less populated version of the original dataset in terms of entity groups per sentence. Therefore, this can negatively impact the performance.\n\n\nCaveats and Recommendations\n===========================\n\n\n* The topics in the dataset corpus are centered around News. Future training could be done with a more diverse corpus.\n\n\nResults\n=======\n\n\n\nUsage\n====="
] |