| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 – 2025-09-07) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 544 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 – 2025-09-07) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct | scasutt | 2022-03-28T18:53:54Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-28T12:30:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6983
- Wer: 0.5026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
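As a quick sanity check, the effective batch size and the linear warmup/decay schedule implied by these settings can be reproduced in plain Python (our own illustration, not part of the original training code; the total step count comes from the results table below):

```python
# Effective batch size: per-device batch size times gradient accumulation steps
train_batch_size = 8
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 16

def linear_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=4750):
    """Linear warmup to base_lr, then linear decay to 0 (the 'linear' scheduler)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(total_train_batch_size)  # 16
print(linear_lr(500))          # halfway through warmup: 5e-05
print(linear_lr(1000))         # peak learning rate: 0.0001
```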
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3619 | 1.05 | 250 | 3.4334 | 1.0 |
| 3.0818 | 2.1 | 500 | 3.4914 | 1.0 |
| 2.3245 | 3.15 | 750 | 1.6483 | 0.9486 |
| 1.0233 | 4.2 | 1000 | 0.8817 | 0.7400 |
| 0.7522 | 5.25 | 1250 | 0.7374 | 0.6529 |
| 0.5343 | 6.3 | 1500 | 0.6972 | 0.6068 |
| 0.4452 | 7.35 | 1750 | 0.6757 | 0.5740 |
| 0.4275 | 8.4 | 2000 | 0.6789 | 0.5551 |
| 0.3688 | 9.45 | 2250 | 0.6468 | 0.5394 |
| 0.3363 | 10.5 | 2500 | 0.6798 | 0.5358 |
| 0.3036 | 11.55 | 2750 | 0.6439 | 0.5265 |
| 0.3173 | 12.6 | 3000 | 0.6898 | 0.5196 |
| 0.2985 | 13.65 | 3250 | 0.6791 | 0.5169 |
| 0.288 | 14.7 | 3500 | 0.6442 | 0.5090 |
| 0.2673 | 15.75 | 3750 | 0.6984 | 0.5119 |
| 0.2575 | 16.81 | 4000 | 0.7146 | 0.5084 |
| 0.239 | 17.86 | 4250 | 0.6847 | 0.5040 |
| 0.2266 | 18.91 | 4500 | 0.6900 | 0.5028 |
| 0.22 | 19.96 | 4750 | 0.6983 | 0.5026 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kingabzpro/CELEB-GANs | kingabzpro | 2022-03-28T18:08:29Z | 0 | 2 | null | ["huggan", "gan", "dcgans", "dataset:huggan/CelebA-faces", "license:apache-2.0", "region:us"] | null | 2022-03-28T16:05:34Z |
---
tags:
- huggan
- gan
- dcgans
task: image-generation
license: apache-2.0
datasets:
- huggan/CelebA-faces
---
# Fake Faces with DCGANs
## Model description
Describe the model here (what it does, what it's used for, etc.)
## Intended uses & limitations
#### How to use
```python
# You can include sample code which will be formatted
```
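Since the card does not yet include sample code, here is a minimal, hypothetical sketch of sampling faces from a DCGAN generator in PyTorch. The `Generator` class and its layer sizes are illustrative assumptions in the standard DCGAN style, not this repository's actual architecture:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Hypothetical DCGAN generator: maps a 100-dim latent vector to a 64x64 RGB image."""
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

generator = Generator()
# In practice you would load the trained weights from this repository here.
noise = torch.randn(4, 100, 1, 1)  # batch of 4 latent vectors
with torch.no_grad():
    fakes = generator(noise)
print(fakes.shape)  # torch.Size([4, 3, 64, 64])
```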
#### Limitations and bias
Provide examples of latent issues and potential remediations.
## Training data
Describe the data you used to train the model.
If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data.
## Training procedure
Preprocessing, hardware used, hyperparameters...
## Eval results
- Generator_loss: 22.7
- Discriminator_loss: 7.9
## Generated Images
You can embed local or remote images using ``
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020}
}
```
|
aapot/wav2vec2-large-xlsr-53-finnish | aapot | 2022-03-28T17:56:36Z | 9 | 0 | transformers | ["transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fi", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language: fi
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Aapo Tanskanen
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 32.378771
---
# NOTE: this is an old model and should not be used anymore! There are much better, newer models available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10 Finnish](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")

# Keyword arguments keep this call compatible with librosa >= 0.10
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), orig_sr=sr, target_sr=16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import re

import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\...\…\–\é]'
# Keyword arguments keep this call compatible with librosa >= 0.10
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), orig_sr=sr, target_sr=16_000)

# Preprocessing the datasets.
# We need to normalize the transcriptions and read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted transcriptions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.378771 %
## Training
The Common Voice `train`, `validation` and `other` splits were used for training, together with the `CSS10 Finnish` and `Finnish parliament session 2` datasets.
The script used for training can be found in this [Google Colab notebook](https://colab.research.google.com/drive/1vnEGC9BnNRmVyIHj-0UsVulh_cUYSGWA?usp=sharing).
|
aapot/wav2vec2-xlsr-1b-finnish-v2 | aapot | 2022-03-28T17:49:48Z | 6 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "arxiv:2111.09296", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 9.73
- name: Test CER
type: cer
value: 1.65
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version of this model that uses a KenLM language model in the decoding phase and produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1-billion-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for similarly short audio clips. You can still try it on much longer audio and see how it performs; if you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
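The chunking idea is simple to sketch in plain Python: split a long signal into fixed-length chunks with a small overlap (stride), so the model never sees more than ~20 seconds at once. This is a toy illustration of the approach, not the actual `transformers` implementation:

```python
def chunk_audio(samples, sample_rate=16_000, chunk_s=20, stride_s=2):
    """Split a long audio signal into overlapping chunks of at most chunk_s seconds.

    Each chunk overlaps its neighbor by stride_s seconds, so the per-chunk
    transcriptions can be merged at the boundaries afterwards.
    """
    chunk_len = chunk_s * sample_rate
    step = (chunk_s - stride_s) * sample_rate
    chunks = []
    for start in range(0, len(samples), step):
        chunks.append(samples[start:start + chunk_len])
        if start + chunk_len >= len(samples):
            break
    return chunks

# Toy example with a 1 Hz "sample rate" so lengths equal seconds:
signal = list(range(50))                       # a 50-second signal
chunks = chunk_audio(signal, sample_rate=1)
print([len(c) for c in chunks])                # [20, 20, 14]
```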
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in these datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
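That filtering step amounts to a plain duration check per sample, sketched below as a minimal illustration (not the project's actual preprocessing code):

```python
def keep_short_samples(samples, sample_rate=16_000, max_seconds=20):
    """Keep only audio arrays whose duration is at most max_seconds."""
    return [s for s in samples if len(s) / sample_rate <= max_seconds]

# Toy example with a 1 Hz "sample rate" so lengths equal seconds:
clips = [[0] * 5, [0] * 20, [0] * 25]
kept = keep_short_samples(clips, sample_rate=1)
print(len(kept))  # 2  (the 25 s clip is dropped)
```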
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
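These regularization settings are typically passed as overrides when loading the checkpoint for fine-tuning. The dictionary below matches the list above; the commented `from_pretrained` call is the common 🤗 Transformers pattern, shown as an assumption rather than this project's exact code:

```python
# Regularization overrides from the hyperparameter list above
finetune_overrides = {
    "attention_dropout": 0.094,
    "hidden_dropout": 0.047,
    "feat_proj_dropout": 0.04,
    "mask_time_prob": 0.082,
    "layerdrop": 0.041,
    "activation_dropout": 0.055,
    "ctc_loss_reduction": "mean",
}

# With 🤗 Transformers installed, these would typically be applied like:
# model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-xls-r-1b", **finetune_overrides)
print(sorted(finetune_overrides))
```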
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
aapot/wav2vec2-xlsr-1b-finnish-lm | aapot | 2022-03-28T17:31:03Z | 7 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "arxiv:2111.09296", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 5.65
- name: Test CER
type: cer
value: 1.2
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm) model; it has simply been copied/moved to the `Finnish-NLP` Hugging Face organization.
**Note**: there is a better V2 version of this model, fine-tuned for longer with 16 hours of additional data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1-billion-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for similarly short audio clips. You can still try it on much longer audio and see how it performs; if you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in these datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase was trained on the text of the audio transcriptions. As a result, the decoder's language model may not generalize to very different text, such as everyday spoken language with dialects. It may be beneficial to train your own KenLM language model for your target domain and use it in decoding.
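The role of the language model in decoding can be illustrated with a toy example: candidate transcriptions are rescored by combining the acoustic model's score with the language model's score. This is a simplified illustration of LM-fused decoding with made-up scores, not the actual KenLM/pyctcdecode implementation:

```python
import math

def rescore(candidates, lm_logprob, alpha=0.5):
    """Pick the (text, acoustic_score) candidate maximizing acoustic + alpha * LM score."""
    return max(candidates, key=lambda c: c[1] + alpha * lm_logprob(c[0]))

# Toy LM: strongly favors the transcription seen in the training text
def lm_logprob(text):
    corpus_freq = {"hyvää päivää": 0.9, "hyvää paivaa": 0.01}
    return math.log(corpus_freq.get(text, 1e-6))

# The acoustic model slightly prefers the wrong spelling; the LM flips the decision
candidates = [("hyvää paivaa", -3.0), ("hyvää päivää", -3.2)]
print(rescore(candidates, lm_logprob))  # ('hyvää päivää', -3.2)
```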
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. The training data for the 5-gram KenLM model were the text transcriptions of the audio training data.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
aapot/wav2vec2-xlsr-1b-finnish-lm-v2 | aapot | 2022-03-28T17:26:57Z | 6 | 2 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "fi", "finnish", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "arxiv:2111.09296", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 4.09
- name: Test CER
type: cer
value: 0.88
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) model; it has simply been copied/moved to the `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (the 1-billion-parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for similarly short audio clips. You can still try it on much longer audio and see how it performs; if you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in these datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase was trained on the text of the audio transcriptions and on a subset of Finnish Wikipedia. As a result, the decoder's language model may not generalize to very different text, such as everyday spoken language with dialects (especially since Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your target domain and use it in decoding.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
The datasets were filtered to include only audio samples of at most 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were text transcriptions of the audio training data and 100k random samples of cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
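A KenLM n-gram model assigns probabilities from n-gram counts (KenLM itself also applies modified Kneser-Ney smoothing and backoff). As an illustration of the underlying idea only, here is an unsmoothed bigram version in pure Python:

```python
from collections import Counter, defaultdict

def bigram_lm(corpus):
    """Build an unsmoothed bigram model: P(b | a) = count(a, b) / count(a)."""
    counts, context = defaultdict(Counter), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
            context[a] += 1
    def prob(a, b):
        return counts[a][b] / context[a] if context[a] else 0.0
    return prob
```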
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
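The linear schedule with warmup ramps the learning rate from 0 up to the base rate over the warmup steps, then decays it linearly back to 0 by the final step. A minimal sketch (the total-step count below is illustrative, not taken from this run):

```python
def linear_warmup_lr(step, base_lr=5e-5, warmup_steps=500, total_steps=29500):
    """Learning rate at a given step under linear warmup + linear decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```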
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
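For reference, WER is the word-level edit distance divided by the reference length (CER is the same computation over characters). A minimal pure-Python sketch of the metric, independent of the `eval.py` implementation:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    d = list(range(len(h) + 1))          # DP row: edits against empty reference
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            cur = min(d[j] + 1,          # deletion
                      d[j - 1] + 1,      # insertion
                      prev + (rw != hw)) # substitution (or match)
            prev, d[j] = d[j], cur
    return d[len(h)] / len(r)
```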
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
ntoldalagi/C0_LID_DEV | ntoldalagi | 2022-03-28T15:46:21Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-21T21:34:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: C0_LID_DEV
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# C0_LID_DEV
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
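Gradient accumulation reaches the effective batch size of 8 by combining gradients from 2 micro-batches of 4 before each optimizer step. A toy sketch of the bookkeeping (scalar "gradients" stand in for tensors):

```python
def accumulate_steps(micro_batch_grads, accum=2):
    """Average every `accum` consecutive micro-batch gradients into one update."""
    updates, buf = [], 0.0
    for i, g in enumerate(micro_batch_grads, 1):
        buf += g
        if i % accum == 0:
            updates.append(buf / accum)
            buf = 0.0
    return updates
```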
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.0 | 25 | inf | 0.8426 |
| 1.5354 | 0.17 | 2000 | inf | 0.8198 |
| 1.5688 | 0.33 | 4000 | inf | 0.8271 |
| 1.5294 | 0.5 | 6000 | inf | 0.8339 |
| 1.1947 | 0.67 | 8000 | inf | 0.8260 |
| 1.1534 | 0.83 | 10000 | inf | 0.8267 |
| 1.1484 | 1.0 | 12000 | inf | 0.8267 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
peterhsu/tf-bert-finetuned-squad | peterhsu | 2022-03-28T14:14:15Z | 4 | 0 | transformers | ["transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-28T08:00:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tf-bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-bert-finetuned-squad
This model is a fine-tuned version of [peterhsu/tf-bert-finetuned-squad](https://huggingface.co/peterhsu/tf-bert-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
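With `power: 1.0`, `end_learning_rate: 0.0`, and `cycle: False`, the `PolynomialDecay` schedule above reduces to a plain linear decay to zero. A sketch mirroring (not calling) the Keras schedule:

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0,
                     decay_steps=16635, power=1.0):
    """Polynomial learning-rate decay; with power=1.0 and end_lr=0.0
    this is a straight linear ramp down to zero."""
    step = min(step, decay_steps)  # cycle=False: hold end_lr after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```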
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dhlee347/distilbert-imdb | dhlee347 | 2022-03-28T14:07:15Z | 4 | 1 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-28T14:01:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1796
- Accuracy: 0.9302
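The classification head emits two logits per review. Assuming the default `LABEL_0` = negative / `LABEL_1` = positive convention (an assumption — the card does not state the mapping), the prediction is an argmax:

```python
def predict_label(logits):
    """Map a pair of sentiment logits to a label (assumed id2label ordering)."""
    return "positive" if logits[1] > logits[0] else "negative"
```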
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2808 | 1.0 | 782 | 0.1796 | 0.9302 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-cnndm | Chikashi | 2022-03-28T14:04:38Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-28T09:07:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6854
- Rouge1: 24.417
- Rouge2: 11.6924
- Rougel: 20.1756
- Rougelsum: 23.0414
- Gen Len: 18.9996
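ROUGE-1 is an F-measure over unigram overlap between the generated and reference summaries. A minimal sketch of the idea (real evaluations typically use the `rouge_score` implementation, with stemming and bootstrap aggregation):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """F-measure over unigram overlap between candidate and reference text."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)
```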
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.8522 | 1.0 | 35890 | 1.6854 | 24.417 | 11.6924 | 20.1756 | 23.0414 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augmented | scasutt | 2022-03-28T12:29:16Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-27T17:08:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augmented
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5016
- Wer: 0.4656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.418 | 1.05 | 250 | 3.4171 | 1.0 |
| 3.0886 | 2.1 | 500 | 3.4681 | 1.0 |
| 2.9422 | 3.15 | 750 | 2.6151 | 1.0 |
| 1.3195 | 4.2 | 1000 | 0.8789 | 0.7739 |
| 0.9154 | 5.25 | 1250 | 0.6364 | 0.6518 |
| 0.6519 | 6.3 | 1500 | 0.5682 | 0.5949 |
| 0.5622 | 7.35 | 1750 | 0.5273 | 0.5625 |
| 0.4965 | 8.4 | 2000 | 0.4891 | 0.5283 |
| 0.4283 | 9.45 | 2250 | 0.5018 | 0.5260 |
| 0.4019 | 10.5 | 2500 | 0.5016 | 0.5006 |
| 0.3585 | 11.55 | 2750 | 0.5047 | 0.5003 |
| 0.3275 | 12.6 | 3000 | 0.5148 | 0.4866 |
| 0.3427 | 13.65 | 3250 | 0.5035 | 0.4786 |
| 0.3229 | 14.7 | 3500 | 0.4855 | 0.4768 |
| 0.3332 | 15.75 | 3750 | 0.5040 | 0.4769 |
| 0.2861 | 16.81 | 4000 | 0.5138 | 0.4669 |
| 0.3029 | 17.86 | 4250 | 0.5133 | 0.4670 |
| 0.2633 | 18.91 | 4500 | 0.5063 | 0.4637 |
| 0.2621 | 19.96 | 4750 | 0.5016 | 0.4656 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/abeshinzo | huggingtweets | 2022-03-28T12:19:48Z | 3 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-28T12:19:01Z |
---
language: en
thumbnail: http://www.huggingtweets.com/abeshinzo/1648469983562/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1765776666/s-abetwitter1_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">安倍晋三</div>
<div style="text-align: center; font-size: 14px;">@abeshinzo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 安倍晋三.
| Data | 安倍晋三 |
| --- | --- |
| Tweets downloaded | 2365 |
| Retweets | 77 |
| Short tweets | 1629 |
| Tweets kept | 659 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37uwbwzs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abeshinzo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ib1nsfa1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ib1nsfa1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abeshinzo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
VincentC12/sentiment_analysis_kara | VincentC12 | 2022-03-28T11:52:03Z | 21 | 0 | pytorch | ["pytorch", "distilbert", "sentiment-analysis", "en", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language:
- en
library_name: pytorch
metrics:
- negative
- positive
tags:
- sentiment-analysis
widget:
- text: "Thank you for listening to the recommendations of the telephone team for teleworking. we have a strong expertise in this field and accurate listening to Our management!!!!"
example_title: "Exemple positif"
- text: "working conditions and wages are less than average more part of the time it is not a hierarchical system Our opinion counts"
example_title: "Exemple négatif"
---
This model was developed for KARA.
This model is:
- A sentiment-analysis tool for comments from HR surveys
- Trained for use in ENGLISH (comments must be translated first)
- Specialized for comments between 10 and 512 characters
This model is not:
- Suitable for detecting hate speech or a suicide note
Labels:
- Label_0 = Negative
- Label_1 = Positive
Version 1.1.0
Performance on the HRM dataset: 91.5% accuracy
|
mosymosy/Fatima_Fellowship_Quick_Coding_Challenge | mosymosy | 2022-03-28T11:38:46Z | 0 | 0 | null | ["region:us"] | null | 2022-03-28T11:27:02Z |
Fatima Fellowship Quick Coding Challenge
Computer Vision
|
robvanderg/Sem-RemmmBERT | robvanderg | 2022-03-28T11:29:41Z | 5 | 0 | transformers | ["transformers", "pytorch", "rembert", "feature-extraction", "STILT", "retraining", "multi-task learning", "multilingual", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-28T11:20:13Z |
---
language:
- multilingual
tags:
- STILT
- retraining
- multi-task learning
datasets:
- SemEval 2022
---
## Sem-RemmmBERT
This is the SemEval MaChAmp Multitask Multilingual RemBERT model. This model is retrained from RemBERT (https://huggingface.co/google/rembert).
The retraining is done based on all SemEval 2022 tasks that are text based, and have annotation on the word, sentence or paragraph level. The retraining is done with MaChAmp (https://machamp-nlp.github.io/), a toolkit focusing on multi-task learning for NLP. More information can be found in the paper (which should be released when the SemEval proceedings are online).
|
robvanderg/Sem-mmmBERT | robvanderg | 2022-03-28T11:28:17Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "feature-extraction", "STILT", "retraining", "multi-task learning", "multilingual", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-28T11:15:17Z |
---
language:
- multilingual
tags:
- STILT
- retraining
- multi-task learning
datasets:
- SemEval 2022
---
## Sem-mmmBERT
This is the SemEval MaChAmp Multitask Multilingual BERT model. This model is retrained from mBERT (https://huggingface.co/bert-base-multilingual-cased).
The retraining is done based on all SemEval 2022 tasks that are text based, and have annotation on the word, sentence or paragraph level. The retraining is done with MaChAmp (https://machamp-nlp.github.io/), a toolkit focusing on multi-task learning for NLP. More information can be found in the paper (which should be released when the SemEval proceedings are online).
|
sanchit-gandhi/wav2vec2-2-bart-large-cnn-no-adapter | sanchit-gandhi | 2022-03-28T11:26:30Z | 11 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-26T17:08:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9938
- Wer: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9301 | 2.24 | 500 | 4.6291 | 0.9601 |
| 4.4562 | 4.48 | 1000 | 4.3604 | 0.9608 |
| 3.8356 | 6.73 | 1500 | 4.0728 | 0.9530 |
| 3.2716 | 8.97 | 2000 | 3.9938 | 0.9745 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
21iridescent/distilroberta-base-finetuned-squad2-lwt | 21iridescent | 2022-03-28T11:18:44Z | 31 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-28T08:54:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilroberta-base-finetuned-squad2-lwt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-squad2-lwt
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1702 | 1.0 | 4120 | 1.1220 |
| 0.9787 | 2.0 | 8240 | 1.0500 |
| 0.8153 | 3.0 | 12360 | 1.1356 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
Detailed SQuAD v2 evaluation metrics:
| Split | Exact | F1 | Total |
|:-----------------:|:-------:|:-------:|:-----:|
| HasAns | 71.3900 | 77.7174 | 5928 |
| NoAns | 68.5955 | 68.5955 | 5945 |
| Overall | 69.9907 | 73.1499 | 11873 |
| Best (thresh 0.0) | 69.9992 | 73.1583 | |
|
21iridescent/distilbert-base-uncased-finetuned-squad | 21iridescent | 2022-03-28T08:10:11Z | 22 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-28T03:09:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2739 | 1.0 | 4118 | 1.2801 |
| 1.0001 | 2.0 | 8236 | 1.2823 |
| 0.8484 | 3.0 | 12354 | 1.3466 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/nsawaikar | huggingtweets | 2022-03-28T07:54:11Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-28T07:52:56Z |
---
language: en
thumbnail: http://www.huggingtweets.com/nsawaikar/1648454046318/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1508184022052184064/yqLU6MxW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nathan.eth</div>
<div style="text-align: center; font-size: 14px;">@nsawaikar</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nathan.eth.
| Data | Nathan.eth |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 336 |
| Short tweets | 621 |
| Tweets kept | 2293 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pn1domem/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nsawaikar's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/g9hqx5dx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/g9hqx5dx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nsawaikar')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
timhbach/Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
|
timhbach
| 2022-03-28T06:27:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-28T03:21:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0231
- eval_precision: 0.7448
- eval_recall: 0.75
- eval_f1: 0.7474
- eval_accuracy: 0.9942
- eval_runtime: 61.7618
- eval_samples_per_second: 27.201
- eval_steps_per_second: 3.4
- epoch: 3.0
- step: 5670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aps/flava_full_pretrained_encoders_torchmm
|
aps
| 2022-03-28T06:03:42Z | 0 | 0 | null |
[
"pytorch",
"license:bsd-3-clause",
"region:us"
] | null | 2022-03-28T05:35:04Z |
---
license: bsd-3-clause
---
|
huggingtweets/freudwarrior123
|
huggingtweets
| 2022-03-28T04:26:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-28T04:23:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/freudwarrior123/1648441457881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1443547125770559488/QNDa_bi1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">freudwarrior123</div>
<div style="text-align: center; font-size: 14px;">@freudwarrior123</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from freudwarrior123.
| Data | freudwarrior123 |
| --- | --- |
| Tweets downloaded | 859 |
| Retweets | 274 |
| Short tweets | 34 |
| Tweets kept | 551 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3798mw2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freudwarrior123's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2n7ltssk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2n7ltssk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/freudwarrior123')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119799
|
YXHugging
| 2022-03-28T01:30:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T00:52:19Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1583.7188188958198
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119799
- CO2 Emissions (in grams): 1583.7188188958198
## Validation Metrics
- Loss: 0.9590993523597717
- Accuracy: 0.5827541666666667
- Macro F1: 0.5806748283026683
- Micro F1: 0.5827541666666667
- Weighted F1: 0.5806748283026683
- Macro Precision: 0.5834325027348383
- Micro Precision: 0.5827541666666667
- Weighted Precision: 0.5834325027348383
- Macro Recall: 0.5827541666666667
- Micro Recall: 0.5827541666666667
- Weighted Recall: 0.5827541666666667
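The macro, micro, and weighted averages above are three aggregations of the same per-class scores: macro averages per-class F1 equally, micro computes F1 from global counts (which, for single-label multi-class problems, equals accuracy). A minimal pure-Python sketch of the difference, on toy labels rather than this model's actual predictions:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Macro F1 (unweighted mean of per-class F1) vs micro F1 (global counts)."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    per_class = []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        per_class.append(2 * prec * rec / (prec + rec) if (prec + rec) else 0.0)
    macro = sum(per_class) / len(labels)
    # In single-label multi-class classification, micro F1 equals accuracy.
    micro = sum(tp.values()) / len(y_true)
    return macro, micro

macro, micro = f1_scores([0, 0, 1, 2], [0, 1, 1, 2])
```

Weighted F1 additionally weights each class's F1 by its support, which is why it coincides with macro F1 when the classes are balanced, as in this dataset.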
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119799
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
BigSalmon/InformalToFormalLincoln32
|
BigSalmon
| 2022-03-28T00:48:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T23:33:14Z |
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln32")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln32")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
|
minimaxir/imgbeddings
|
minimaxir
| 2022-03-28T00:36:28Z | 0 | 3 |
transformers
|
[
"transformers",
"onnx",
"ai",
"images",
"image-processing",
"embeddings",
"clip",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-27T17:23:51Z |
---
language:
- en
tags:
- ai
- transformers
- onnx
- images
- image-processing
- embeddings
- clip
license: mit
---
# imgbeddings
The HF repo where the models for [imgbeddings](https://github.com/minimaxir/imgbeddings) are loaded.
The ONNX files were generated using [this export Notebook](https://github.com/minimaxir/imgbeddings/blob/main/examples/export.ipynb).
## License
MIT
|
huggingtweets/baguioni
|
huggingtweets
| 2022-03-27T22:55:21Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T22:54:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/baguioni/1648421716784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from baguio.
| Data | baguio |
| --- | --- |
| Tweets downloaded | 3012 |
| Retweets | 1090 |
| Short tweets | 527 |
| Tweets kept | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z9nh9v8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/baguioni-elonmusk-jacobe
|
huggingtweets
| 2022-03-27T22:44:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T22:43:39Z |
---
language: en
thumbnail: http://www.huggingtweets.com/baguioni-elonmusk-jacobe/1648421056394/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1025926108984664064/2ZHTSIof_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Rowel Atienza & baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni-elonmusk-jacobe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Rowel Atienza & baguio.
| Data | Elon Musk | Rowel Atienza | baguio |
| --- | --- | --- | --- |
| Tweets downloaded | 1621 | 100 | 3012 |
| Retweets | 69 | 29 | 1090 |
| Short tweets | 520 | 4 | 527 |
| Tweets kept | 1032 | 67 | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xuj1tda/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni-elonmusk-jacobe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fpkbu3i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fpkbu3i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni-elonmusk-jacobe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
leonadase/bert-base-chinese-finetuned-fdRE
|
leonadase
| 2022-03-27T20:52:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval2010_task8",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T19:04:51Z |
---
tags:
- generated_from_trainer
datasets:
- sem_eval2010_task8
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuned-fdRE
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval2010_task8
type: sem_eval2010_task8
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9080962800875274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-fdRE
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the sem_eval2010_task8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 46 | 0.5571 | 0.7812 |
| No log | 2.0 | 92 | 0.4030 | 0.8621 |
| No log | 3.0 | 138 | 0.3139 | 0.8928 |
| No log | 4.0 | 184 | 0.2716 | 0.9081 |
| No log | 5.0 | 230 | 0.2564 | 0.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ikram54/autotrain-harassement-675420038
|
ikram54
| 2022-03-27T18:08:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:ikram54/autotrain-data-harassement",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T18:06:02Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ikram54/autotrain-data-harassement
co2_eq_emissions: 2.6332836871905054
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 675420038
- CO2 Emissions (in grams): 2.6332836871905054
## Validation Metrics
- Loss: 0.8747465014457703
- Accuracy: 0.7085201793721974
- Macro F1: 0.579743989078862
- Micro F1: 0.7085201793721974
- Weighted F1: 0.6913786522271296
- Macro Precision: 0.5669375905888698
- Micro Precision: 0.7085201793721974
- Weighted Precision: 0.6760144007300164
- Macro Recall: 0.5940655209452201
- Micro Recall: 0.7085201793721974
- Weighted Recall: 0.7085201793721974
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ikram54/autotrain-harassement-675420038
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ikram54/autotrain-harassement-675420038", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119801
|
YXHugging
| 2022-03-27T16:53:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T01:21:43Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 999.5670927087938
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119801
- CO2 Emissions (in grams): 999.5670927087938
## Validation Metrics
- Loss: 0.9767692685127258
- Accuracy: 0.5738333333333333
- Macro F1: 0.5698748846905103
- Micro F1: 0.5738333333333333
- Weighted F1: 0.5698748846905102
- Macro Precision: 0.5734242161804903
- Micro Precision: 0.5738333333333333
- Weighted Precision: 0.5734242161804902
- Macro Recall: 0.5738333333333333
- Micro Recall: 0.5738333333333333
- Weighted Recall: 0.5738333333333333
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119801
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
SAGAR4REAL/wav2vec2-large-hindicone
|
SAGAR4REAL
| 2022-03-27T16:20:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-27T13:41:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-hindicone
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-hindicone
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
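The `linear` scheduler with warmup ramps the learning rate from 0 up to the base rate over the first 500 steps, then decays it linearly to 0 over the remaining steps. A minimal sketch of that schedule (the 5,000 total steps are an assumed illustration, not taken from this run):

```python
def linear_schedule_with_warmup(step, warmup_steps, total_steps, base_lr):
    # Ramp up linearly during warmup, then decay linearly to zero.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Learning rate at the start, at the warmup peak, and at the final step.
lrs = [linear_schedule_with_warmup(s, 500, 5000, 3e-4) for s in (0, 500, 5000)]
```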
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
csukuangfj/icefall-asr-librispeech-stateless-transducer-2022-03-27-2
|
csukuangfj
| 2022-03-27T15:59:24Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-03-27T13:27:21Z |
## Introduction
Please see <https://github.com/k2-fsa/icefall/pull/271> for more details.
|
csukuangfj/icefall-asr-librispeech-stateless-transducer-2022-03-27
|
csukuangfj
| 2022-03-27T15:59:00Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-03-27T07:29:38Z |
## Introduction
Please see <https://github.com/k2-fsa/icefall/pull/271> for more details.
|
EMBO/bio-lm
|
EMBO
| 2022-03-27T15:46:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"language model",
"dataset:EMBO/biolang",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- en
thumbnail:
tags:
- language model
license:
datasets:
- EMBO/biolang
metrics:
-
---
# bio-lm
## Model description
This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained with a masked language modeling task on a compendium of English scientific text from the life sciences, using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
To have a quick check of the model as-is in a fill-mask task:
```python
from transformers import pipeline, RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
text = "Let us try this model to see if it <mask>."
fill_mask = pipeline(
"fill-mask",
model='EMBO/bio-lm',
tokenizer=tokenizer
)
fill_mask(text)
```
#### Limitations and bias
This model should be fine-tuned on a specific task such as token classification.
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang), which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
End of training:
```
trainset: 'loss': 0.8653350830078125
validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597
```
## Eval results
Eval on test set:
```
recall: 0.814471959728645
```
|
sebastian-hofstaetter/colberter-128-32-msmarco
|
sebastian-hofstaetter
| 2022-03-27T15:07:44Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"ColBERT",
"bag-of-words",
"dense-passage-retrieval",
"knowledge-distillation",
"en",
"dataset:ms_marco",
"arxiv:2203.13088",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-27T14:59:48Z |
---
license: apache-2.0
language: "en"
tags:
- bag-of-words
- dense-passage-retrieval
- knowledge-distillation
datasets:
- ms_marco
---
# ColBERTer (Dim: 32) for Passage Retrieval
If you want to know more about our ColBERTer architecture check out our paper: https://arxiv.org/abs/2203.13088 🎉
For more information, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/colberter
## Limitations & Bias
- The model is only trained on English text.
- The model inherits social biases from both DistilBERT and MSMARCO.
- The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text.
## Citation
If you use our model checkpoint please cite our work as:
```
@article{Hofstaetter2022_colberter,
author = {Sebastian Hofst{\"a}tter and Omar Khattab and Sophia Althammer and Mete Sertkan and Allan Hanbury},
title = {Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction},
publisher = {arXiv},
url = {https://arxiv.org/abs/2203.13088},
doi = {10.48550/ARXIV.2203.13088},
year = {2022},
}
```
|
perevalov/query-validation-rubq
|
perevalov
| 2022-03-27T14:14:28Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"kgqa",
"question answering",
"sparql",
"bert-base-cased",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-27T10:52:12Z |
---
language: en
tags:
- kgqa
- question answering
- sparql
- bert-base-cased
license: apache-2.0
---
# SPARQL Query Validation model
## Model description
## Intended uses & limitations
### How to use
|
perevalov/query-validation-lcquad
|
perevalov
| 2022-03-27T14:04:19Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"kgqa",
"question answering",
"sparql",
"bert-base-cased",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-27T09:51:36Z |
---
language: en
tags:
- kgqa
- question answering
- sparql
- bert-base-cased
license: apache-2.0
---
# SPARQL Query Validation model
## Model description
## Intended uses & limitations
### How to use
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119798
|
YXHugging
| 2022-03-27T12:58:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T21:07:59Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1013.8825767332373
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119798
- CO2 Emissions (in grams): 1013.8825767332373
## Validation Metrics
- Loss: 0.9646632075309753
- Accuracy: 0.5789333333333333
- Macro F1: 0.5775792001871465
- Micro F1: 0.5789333333333333
- Weighted F1: 0.5775792001871465
- Macro Precision: 0.5829444191847423
- Micro Precision: 0.5789333333333333
- Weighted Precision: 0.5829444191847424
- Macro Recall: 0.5789333333333333
- Micro Recall: 0.5789333333333333
- Weighted Recall: 0.5789333333333333
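Note that the accuracy and all three micro-averaged metrics above are identical (0.5789…): in single-label multi-class classification, micro-averaged precision, recall, and F1 all reduce to plain accuracy. A minimal pure-Python check of that identity (the labels below are made up for illustration, not taken from this model's evaluation):

```python
def micro_f1(y_true, y_pred):
    # Micro-averaging pools all decisions: each correct prediction is one TP,
    # and each wrong prediction is one FP (for the predicted class) plus one
    # FN (for the true class), so micro precision == micro recall == accuracy.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp
    fn = len(y_true) - tp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
assert micro_f1(y_true, y_pred) == accuracy  # both equal 4/6 here
```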
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119798
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Danik51002/NewModel
|
Danik51002
| 2022-03-27T12:52:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-13T16:51:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: NewModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NewModel
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 200
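The `total_train_batch_size` above follows directly from the per-device batch size and the gradient accumulation steps. A quick sketch of the relationship (single-device case assumed, since no device count is reported):

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    # Gradients are accumulated over `accumulation_steps` forward/backward
    # passes before each optimizer step, multiplying the effective batch size.
    return per_device_batch * accumulation_steps * num_devices

assert effective_batch_size(42, 20) == 840  # matches total_train_batch_size
```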
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
PaddyP/distilbert-base-uncased-finetuned-emotion
|
PaddyP
| 2022-03-27T07:06:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T06:12:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2302
- Accuracy: 0.922
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3344 | 0.903 | 0.9004 |
| No log | 2.0 | 500 | 0.2302 | 0.922 | 0.9218 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
huggingtweets/psimon365
|
huggingtweets
| 2022-03-27T02:56:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T02:56:02Z |
---
language: en
thumbnail: http://www.huggingtweets.com/psimon365/1648349798068/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507859834107879426/d5Jqrb7Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Psimon 🌐</div>
<div style="text-align: center; font-size: 14px;">@psimon365</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Psimon 🌐.
| Data | Psimon 🌐 |
| --- | --- |
| Tweets downloaded | 181 |
| Retweets | 0 |
| Short tweets | 34 |
| Tweets kept | 147 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/q7gcbo7v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @psimon365's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kyaiz92o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kyaiz92o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/psimon365')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ayham/roberta_roberta_summarization_xsum
|
Ayham
| 2022-03-27T00:55:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-26T19:07:42Z |
---
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: roberta_roberta_summarization_xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_roberta_summarization_xsum
This model is a fine-tuned version of [](https://huggingface.co/) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
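The linear scheduler with warmup ramps the learning rate from 0 up to the peak over the warmup steps, then decays it linearly back to 0 by the end of training. A minimal sketch of that schedule (mirroring the usual Transformers behaviour; the `total_steps` value is an illustrative assumption, not taken from this run):

```python
def linear_schedule(step, peak_lr=5e-05, warmup_steps=1000, total_steps=10000):
    # Linear warmup from 0 to peak_lr over warmup_steps, then linear decay to 0.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

assert linear_schedule(0) == 0.0
assert linear_schedule(1000) == 5e-05   # peak reached at the end of warmup
assert linear_schedule(10000) == 0.0    # fully decayed
```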
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
|
MONAI/example_spleen_segmentation
|
MONAI
| 2022-03-27T00:32:20Z | 0 | 6 | null |
[
"monai",
"arxiv:1811.12506",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
tags:
- monai
---
# Description
A pre-trained model for volumetric (3D) segmentation of the spleen from CT image.
# Model Overview
This model is trained using the pipeline of the runner-up [1] in the "Medical Segmentation Decathlon Challenge 2018", using the UNet architecture [2] with 32 training images and 9 validation images.
## Data
The training dataset is Task09_Spleen.tar from http://medicaldecathlon.com/.
## Training configuration
The training was performed on GPUs with at least 12 GB of memory.
Actual Model Input: 96 x 96 x 96
## Input and output formats
Input: 1 channel CT image
Output: 2 channels: Label 1: spleen; Label 0: everything else
## Scores
This model achieves the following Dice score on the validation data (our own split from the training dataset):
Mean Dice = 0.96
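The Dice score reported above measures overlap between the predicted and ground-truth segmentation masks, Dice = 2|A∩B| / (|A| + |B|). A small self-contained sketch on flattened binary masks (toy data, not the validation volumes):

```python
def dice_score(pred, target):
    # Dice = 2 * |intersection| / (|pred| + |target|) for binary masks.
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
assert abs(dice_score(pred, target) - 2 * 2 / (3 + 3)) < 1e-9  # ~0.667
```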
## Example commands
Execute inference:
```
python -m monai.bundle run evaluator --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
```
Verify the metadata format:
```
python -m monai.bundle verify_metadata --meta_file configs/metadata.json --filepath eval/schema.json
```
Verify the data shape of network:
```
python -m monai.bundle verify_net_in_out network_def --meta_file configs/metadata.json --config_file configs/inference.json
```
Export checkpoint to TorchScript file:
```
python -m monai.bundle export network_def --filepath models/model.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json
```
# Disclaimer
This is an example, not to be used for diagnostic purposes.
# References
[1] Xia, Yingda, et al. "3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training." arXiv preprint arXiv:1811.12506 (2018). https://arxiv.org/abs/1811.12506.
[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40
|
huggingtweets/mkobach-naval-shaneaparrish
|
huggingtweets
| 2022-03-27T00:07:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T00:04:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mkobach-naval-shaneaparrish/1648339620049/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374075536595505154/1_1jV_AF_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1253758424292171778/48gD7Hne_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Kobach & Shane Parrish & Naval</div>
<div style="text-align: center; font-size: 14px;">@mkobach-naval-shaneaparrish</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matthew Kobach & Shane Parrish & Naval.
| Data | Matthew Kobach | Shane Parrish | Naval |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3197 | 3249 |
| Retweets | 135 | 102 | 181 |
| Short tweets | 444 | 147 | 617 |
| Tweets kept | 2669 | 2948 | 2451 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17cy2tt4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mkobach-naval-shaneaparrish's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mkobach-naval-shaneaparrish')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
scasutt/wav2vec2-base_toy_train_data_masked_audio
|
scasutt
| 2022-03-26T22:02:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-26T14:57:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_masked_audio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_masked_audio
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1950
- Wer: 0.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1287 | 2.1 | 250 | 3.4581 | 1.0 |
| 3.0259 | 4.2 | 500 | 2.8099 | 0.9999 |
| 1.4881 | 6.3 | 750 | 1.2929 | 0.8950 |
| 0.9665 | 8.4 | 1000 | 1.1675 | 0.8346 |
| 0.7614 | 10.5 | 1250 | 1.1388 | 0.8003 |
| 0.5858 | 12.6 | 1500 | 1.1510 | 0.7672 |
| 0.5005 | 14.7 | 1750 | 1.1606 | 0.7532 |
| 0.4486 | 16.8 | 2000 | 1.1571 | 0.7427 |
| 0.4224 | 18.9 | 2250 | 1.1950 | 0.7340 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Mnauel/wav2vec2-base-finetuned-ks
|
Mnauel
| 2022-03-26T20:53:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-12T10:51:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5766
- Accuracy: 0.8308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.7247 | 0.7462 |
| No log | 2.0 | 14 | 0.6844 | 0.7615 |
| 0.4279 | 3.0 | 21 | 0.7254 | 0.7462 |
| 0.4279 | 4.0 | 28 | 0.5891 | 0.8 |
| 0.4279 | 5.0 | 35 | 0.6991 | 0.7462 |
| 0.4478 | 6.0 | 42 | 0.6579 | 0.7615 |
| 0.4478 | 7.0 | 49 | 0.6164 | 0.8 |
| 0.4478 | 8.0 | 56 | 0.6191 | 0.8077 |
| 0.4194 | 9.0 | 63 | 0.5766 | 0.8308 |
| 0.4194 | 10.0 | 70 | 0.5704 | 0.8154 |
| 0.4194 | 11.0 | 77 | 0.6518 | 0.8 |
| 0.3833 | 12.0 | 84 | 0.6190 | 0.8077 |
| 0.3833 | 13.0 | 91 | 0.5693 | 0.8231 |
| 0.3833 | 14.0 | 98 | 0.5628 | 0.8231 |
| 0.3607 | 15.0 | 105 | 0.5741 | 0.8154 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
|
dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
|
dannyvas23
| 2022-03-26T19:22:14Z | 25 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"sentiment",
"emotion",
"es",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T17:19:56Z |
---
license: afl-3.0
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "La vida no merece la pena"
example_title: "Ejemplo 1"
- text: "Para vivir así lo mejor es estar muerto"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
- text: "Quiero terminar con todo"
example_title: "Ejemplo 4"
- text: "Disfruto de la vista"
example_title: "Ejemplo 5"
metrics:
- accuracy
model-index:
- name: electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0458
- Accuracy: 0.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.161100 | 1.0 | 0.133057 | 0.952718 |
| 0.134500 | 2.0 | 0.110966 | 0.960804 |
| 0.108500 | 3.0 | 0.086417 | 0.970835 |
| 0.099400 | 4.0 | 0.073618 | 0.974856 |
| 0.090500 | 5.0 | 0.065231 | 0.979629 |
| 0.080700 | 6.0 | 0.060849 | 0.982324 |
| 0.069200 | 7.0 | 0.054718 | 0.986125 |
| 0.060400 | 8.0 | 0.051153 | 0.985948 |
| 0.048200 | 9.0 | 0.045747 | 0.989748 |
| 0.045500 | 10.0 | 0.049992 | 0.988069 |
| 0.043400 | 11.0 | 0.046325 | 0.990234 |
| 0.034300 | 12.0 | 0.050746 | 0.989792 |
| 0.032900 | 13.0 | 0.043434 | 0.991737 |
| 0.028400 | 14.0 | 0.045003 | 0.991869 |
| 0.022300 | 15.0 | 0.045819 | 0.991648 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review
|
dannyvas23
| 2022-03-26T17:12:23Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"sentiment",
"emotion",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-21T19:26:40Z |
---
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "no me gusta esta vida."
example_title: "Ejemplo 1"
- text: "odio estar ahi"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
metrics:
- accuracy
model-index:
- name: clasificacion-texto-suicida-finetuned-amazon-review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificacion-texto-suicida-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Accuracy: 0.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1643 | 1.0 | 12022 | 0.1546 | 0.9488 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/distilgpt2-500e
|
bigmorning
| 2022-03-26T16:37:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-26T16:31:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt2-500e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt2-500e
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/distilbert1000e
|
bigmorning
| 2022-03-26T15:31:46Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-26T15:27:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert1000e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert1000e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/distilbert500e
|
bigmorning
| 2022-03-26T14:54:50Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-26T14:48:24Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert500e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert500e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
zuppif/versioning-test
|
zuppif
| 2022-03-26T13:35:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-26T13:34:47Z |
| | uid | hidden_size |
|---:|:------------------------------------------------------------------------------------------------------------------------|--------------:|
| 0 | [e87a4e028b11ec7bf770c6f3ab5c6349](https://huggingface.co/zuppif/versioning-test/tree/e87a4e028b11ec7bf770c6f3ab5c6349) | 8 |
| 1 | [48f2a327cfb7cb0f9b519d9abf73a9be](https://huggingface.co/zuppif/versioning-test/tree/48f2a327cfb7cb0f9b519d9abf73a9be) | 16 |
| 2 | [1c9d18df9ec06b5f7e2f49b2ef1cb826](https://huggingface.co/zuppif/versioning-test/tree/1c9d18df9ec06b5f7e2f49b2ef1cb826) | 32 |
|
Mr-Wick/Roberta
|
Mr-Wick
| 2022-03-26T12:39:55Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T16:08:46Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Roberta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16476, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
peterhsu/bert-finetuned-squad
|
peterhsu
| 2022-03-26T08:48:45Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-21T13:26:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
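The checkpoint can be tried with the `pipeline` API — a minimal sketch, assuming the checkpoint is published under the repo id `peterhsu/bert-finetuned-squad` (the question and context below are illustrative only):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a question-answering pipeline
# (model id assumed from the repo name above).
qa = pipeline("question-answering", model="peterhsu/bert-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for three epochs.",
)
print(result["answer"], result["score"])
```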
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lichenda/chime4_fasnet_dprnn_tac
|
lichenda
| 2022-03-26T05:52:41Z | 36 | 0 |
espnet
|
[
"espnet",
"audio",
"audio-to-audio",
"dataset:chime4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-21T08:18:15Z |
---
tags:
- espnet
- audio
- audio-to-audio
language: noinfo
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 ENH model
### `lichenda/chime4_fasnet_dprnn_tac`
This model was trained by LiChenda using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 98f5fb2185b98f9c08fd56492b3d3234504561e7
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model lichenda/chime4_fasnet_dprnn_tac
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 07:17:45 CST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `648b024d8fb262eb9923c06a698b9c6df5b16e51`
- Commit date: `Wed Mar 16 18:47:21 2022 +0800`
## Evaluation results
config: conf/tuning/train_enh_dprnntac_fasnet.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_dt05_simu_isolated_6ch_track|0.95|15.75|15.75|0.00|
|enhanced_et05_simu_isolated_6ch_track|0.94|15.40|15.40|0.00|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_dprnntac_fasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dprnntac_fasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: steplr
scheduler_conf:
step_size: 2
gamma: 0.98
init: xavier_uniform
model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
criterions:
- name: si_snr
conf:
eps: 1.0e-07
wrapper: fixed_order
wrapper_conf:
weight: 1.0
use_preprocessor: false
encoder: same
encoder_conf: {}
separator: fasnet
separator_conf:
enc_dim: 64
feature_dim: 64
hidden_dim: 128
layer: 6
segment_size: 24
num_spk: 1
win_len: 16
context_len: 16
sr: 16000
fasnet_type: fasnet
dropout: 0.2
decoder: same
decoder_conf: {}
required:
- output_dir
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
  author    = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
               Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ianMconversica/autotrain-phrasinator-reverse-670319725
|
ianMconversica
| 2022-03-26T03:59:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:McIan91/autotrain-data-phrasinator-reverse",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-26T01:38:37Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- McIan91/autotrain-data-phrasinator-reverse
co2_eq_emissions: 149.95517950000834
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 670319725
- CO2 Emissions (in grams): 149.95517950000834
## Validation Metrics
- Loss: 0.0022294693626463413
- Rouge1: 67.5833
- Rouge2: 65.7386
- RougeL: 67.5812
- RougeLsum: 67.585
- Gen Len: 18.907
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ianMconversica/autotrain-phrasinator-reverse-670319725
```
|
ahmeddbahaa/mt5-finetuned-en-ar
|
ahmeddbahaa
| 2022-03-26T02:24:12Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-25T19:26:01Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-finetuned-en-ar
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: arabic
metrics:
- name: Rouge1
type: rouge
value: 0.2824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-finetuned-en-ar
This model is a fine-tuned version of [ahmeddbahaa/mt5-small-finetuned-mt5-en](https://huggingface.co/ahmeddbahaa/mt5-small-finetuned-mt5-en) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2314
- Rouge1: 0.2824
- Rouge2: 0.0
- Rougel: 0.2902
- Rougelsum: 0.298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.1685 | 1.0 | 4130 | 2.4262 | 0.0941 | 0.0235 | 0.1098 | 0.1098 |
| 2.686 | 2.0 | 8260 | 2.2853 | 0.2824 | 0.0 | 0.298 | 0.298 |
| 2.481 | 3.0 | 12390 | 2.2314 | 0.2824 | 0.0 | 0.2902 | 0.298 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/huggingpuppy
|
huggingtweets
| 2022-03-25T18:42:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T18:41:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/huggingpuppy/1648233768787/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504530325526900756/QOTZak3q_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">hug. (INGROUP INTERN)</div>
<div style="text-align: center; font-size: 14px;">@huggingpuppy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from hug. (INGROUP INTERN).
| Data | hug. (INGROUP INTERN) |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 97 |
| Short tweets | 816 |
| Tweets kept | 2336 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wq0kiqq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @huggingpuppy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aonv9kh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/huggingpuppy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
patrickvonplaten/deberta_amazon_reviews_v1
|
patrickvonplaten
| 2022-03-25T17:57:32Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-25T10:12:59Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_amazon_reviews_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_amazon_reviews_v1
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
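The classifier can be tried with the `pipeline` API — a minimal sketch, assuming the checkpoint is published under the repo id `patrickvonplaten/deberta_amazon_reviews_v1` (the label names returned depend on how the training labels were configured):

```python
from transformers import pipeline

# Review classification with the fine-tuned DeBERTa checkpoint
# (model id assumed from the repo name above).
classifier = pipeline(
    "text-classification",
    model="patrickvonplaten/deberta_amazon_reviews_v1",
)

preds = classifier("This product exceeded my expectations, highly recommended!")
print(preds)
```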
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-base_toy_train_data_augment_0.1
|
scasutt
| 2022-03-25T17:44:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-25T14:40:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_augment_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_augment_0.1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3786
- Wer: 0.9954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1342 | 1.05 | 250 | 3.3901 | 0.9954 |
| 3.0878 | 2.1 | 500 | 3.4886 | 0.9954 |
| 3.0755 | 3.15 | 750 | 3.4616 | 0.9954 |
| 3.0891 | 4.2 | 1000 | 3.5316 | 0.9954 |
| 3.0724 | 5.25 | 1250 | 3.2608 | 0.9954 |
| 3.0443 | 6.3 | 1500 | 3.3881 | 0.9954 |
| 3.0421 | 7.35 | 1750 | 3.4507 | 0.9954 |
| 3.0448 | 8.4 | 2000 | 3.4525 | 0.9954 |
| 3.0455 | 9.45 | 2250 | 3.3342 | 0.9954 |
| 3.0425 | 10.5 | 2500 | 3.3385 | 0.9954 |
| 3.0457 | 11.55 | 2750 | 3.4411 | 0.9954 |
| 3.0375 | 12.6 | 3000 | 3.4459 | 0.9954 |
| 3.0459 | 13.65 | 3250 | 3.3883 | 0.9954 |
| 3.0455 | 14.7 | 3500 | 3.3417 | 0.9954 |
| 3.0524 | 15.75 | 3750 | 3.3908 | 0.9954 |
| 3.0443 | 16.81 | 4000 | 3.3932 | 0.9954 |
| 3.0446 | 17.86 | 4250 | 3.4052 | 0.9954 |
| 3.0412 | 18.91 | 4500 | 3.3776 | 0.9954 |
| 3.0358 | 19.96 | 4750 | 3.3786 | 0.9954 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
manandey/wav2vec2-large-xlsr-_irish
|
manandey
| 2022-03-25T16:53:49Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ga",
"dataset:common_voice",
"doi:10.57967/hf/0190",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ga
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Irish by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ga-IE
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 42.34
---
# Wav2Vec2-Large-XLSR-53-Irish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Irish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-_irish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 42.34%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
Wende/bert-finetuned-ner
|
Wende
| 2022-03-25T16:19:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-25T15:21:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9321670242614293
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9412548954253812
- name: Accuracy
type: accuracy
value: 0.9860334373344322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
- Precision: 0.9322
- Recall: 0.9505
- F1: 0.9413
- Accuracy: 0.9860
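The tagger can be tried with the `pipeline` API — a minimal sketch, assuming the checkpoint is published under the repo id `Wende/bert-finetuned-ner` (the input sentence is illustrative only):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens back into
# whole entity spans (model id assumed from the repo name above).
ner = pipeline(
    "token-classification",
    model="Wende/bert-finetuned-ner",
    aggregation_strategy="simple",
)

entities = ner("Hugging Face is based in New York City.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```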
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2219 | 1.0 | 878 | 0.0716 | 0.9076 | 0.9288 | 0.9181 | 0.9808 |
| 0.0453 | 2.0 | 1756 | 0.0597 | 0.9297 | 0.9477 | 0.9386 | 0.9852 |
| 0.0239 | 3.0 | 2634 | 0.0575 | 0.9322 | 0.9505 | 0.9413 | 0.9860 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.2+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huggingtweets/rivatez
|
huggingtweets
| 2022-03-25T14:57:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T14:51:51Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rivatez/1648220244511/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421403684085374979/SoqYa6o3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Riva</div>
<div style="text-align: center; font-size: 14px;">@rivatez</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Riva.
| Data | Riva |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 780 |
| Short tweets | 405 |
| Tweets kept | 1993 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qe0i10s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rivatez's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rivatez')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vumichien/tf-bert-base-cased-squad2
|
vumichien
| 2022-03-25T14:02:14Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-25T13:56:15Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: tf-bert-base-cased-squad2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-bert-base-cased-squad2
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
ssardorf/pegasus-xsum-new-dataset
|
ssardorf
| 2022-03-25T13:12:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-25T13:07:00Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-xsum-new-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-new-dataset
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Rouge1: 48.7306
- Rouge2: 34.1291
- Rougel: 44.0778
- Rougelsum: 45.7139
- Gen Len: 30.8889
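The Rouge1 figure above is unigram-overlap F1 between a predicted and a reference summary; a minimal sketch of that computation (assuming plain whitespace tokenization, unlike the stemmed tokenizer the `rouge` metric actually uses):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1 between a predicted and a reference summary."""
    pred_tokens = Counter(prediction.lower().split())
    ref_tokens = Counter(reference.lower().split())
    overlap = sum((pred_tokens & ref_tokens).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_tokens.values())
    recall = overlap / sum(ref_tokens.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat is on the mat"))
```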
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.6
|
vumichien/mobilebert-uncased-squad-v2
|
vumichien
| 2022-03-25T13:09:07Z | 72 | 0 |
transformers
|
[
"transformers",
"tf",
"mobilebert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-25T13:07:29Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: tf-mobilebert-uncased-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-mobilebert-uncased-squad-v2
This model is a fine-tuned version of [csarron/mobilebert-uncased-squad-v2](https://huggingface.co/csarron/mobilebert-uncased-squad-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
bigmorning/try-m-e
|
bigmorning
| 2022-03-25T12:37:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T06:42:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: try-m-e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# try-m-e
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1.csv
|
scasutt
| 2022-03-25T12:18:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-25T11:45:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augment_0.1.csv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augment_0.1.csv
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4695
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
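The `total_train_batch_size` above is the per-device batch size multiplied by the gradient-accumulation steps; a quick sanity check (single-device training is an assumption):

```python
train_batch_size = 8
gradient_accumulation_steps = 2
num_devices = 1  # assumption: one GPU/CPU

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 16, matching the value reported above
```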
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.2456 | 0.84 | 200 | 3.6215 | 1.0 |
| 3.0637 | 1.68 | 400 | 3.3918 | 1.0 |
| 3.046 | 2.52 | 600 | 3.4168 | 1.0 |
| 3.0627 | 3.36 | 800 | 3.4695 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
microsoft/wavlm-base-plus-sd
|
microsoft
| 2022-03-25T12:06:46Z | 1,616,093 | 10 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"audio-frame-classification",
"speech",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.13900",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- speech
---
# WavLM-Base-Plus for Speaker Diarization
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The model was pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Fine-tuning details
The model is fine-tuned on the [LibriMix dataset](https://github.com/JorisCos/LibriMix) using just a linear layer for mapping the network outputs.
# Usage
## Speaker Diarization
```python
from transformers import Wav2Vec2FeatureExtractor, WavLMForAudioFrameClassification
from datasets import load_dataset
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('microsoft/wavlm-base-plus-sd')
model = WavLMForAudioFrameClassification.from_pretrained('microsoft/wavlm-base-plus-sd')
# audio file is decoded on the fly
inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt")
logits = model(**inputs).logits
probabilities = torch.sigmoid(logits[0])
# labels is a one-hot array of shape (num_frames, num_speakers)
labels = (probabilities > 0.5).long()
```
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

|
eliasws/openApiT5-distilled-description-v3
|
eliasws
| 2022-03-25T09:30:37Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-25T09:25:50Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
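Semantic search over these embeddings typically ranks candidates by cosine similarity; a dependency-free sketch (the short vectors below are stand-ins for the model's 768-dimensional outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional stand-ins for real sentence embeddings
emb_query = [0.1, 0.3, -0.2, 0.7]
emb_doc = [0.2, 0.25, -0.1, 0.6]
print(cosine_similarity(emb_query, emb_doc))
```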
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5547 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1109,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jkhan447/sentiment-model-sample-5-emotion
|
jkhan447
| 2022-03-25T08:12:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-25T05:26:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-5-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-5-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4360
- Accuracy: 0.925
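At inference time the classifier's logits are turned into a label via softmax + argmax; a dependency-free sketch (the logit values are illustrative, and the label set is assumed to be the standard six `emotion` classes):

```python
import math

id2label = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def predict(logits):
    """Softmax over raw logits, then pick the highest-probability label."""
    shifted = [l - max(logits) for l in logits]  # numerical stability
    exps = [math.exp(s) for s in shifted]
    probs = [e / sum(exps) for e in exps]
    return id2label[probs.index(max(probs))], max(probs)

label, prob = predict([-1.2, 4.8, 0.3, -0.5, -2.0, 0.1])
print(label)  # joy
```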
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
csukuangfj/transducer-loss-benchmarking
|
csukuangfj
| 2022-03-25T07:46:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Introduction
This repo contains the benchmark results for <https://github.com/csukuangfj/transducer-loss-benchmarking>
## Usage
First, install `git-lfs`.
Second, use the following command to clone this repo:
```bash
git lfs install
git clone https://huggingface.co/csukuangfj/transducer-loss-benchmarking
```
**Caution**: You have to run `git lfs install` first. Otherwise, you will be **SAD** later.
Third,
```
pip install torch-tb-profiler
cd transducer-loss-benchmarking
tensorboard --logdir ./log/torchaudio-30 --port 6006
tensorboard --logdir ./log/optimized_transducer-30 --port 6007
```
Fourth, open your browser and go to
- <http://localhost:6006/#pytorch_profiler>
- <http://localhost:6007/#pytorch_profiler>
You will see the following images:


|
Ebtihal/AraBertMo_base_V9
|
Ebtihal
| 2022-03-25T07:25:05Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: ar
tags:
- fill-mask
datasets:
- OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V9` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 9 | 64 | 4230 | 7h 57m 42s | 7.3264 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, then initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V9")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V9")
```
## This model was built for master's degree research at:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
clisi2000/distilbert-base-uncased-finetuned-clinc
|
clisi2000
| 2022-03-25T06:23:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T05:03:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9158064516129032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7796
- Accuracy: 0.9158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2883 | 1.0 | 318 | 3.2778 | 0.7390 |
| 2.6185 | 2.0 | 636 | 1.8740 | 0.8232 |
| 1.5423 | 3.0 | 954 | 1.1579 | 0.8890 |
| 1.0131 | 4.0 | 1272 | 0.8629 | 0.9077 |
| 0.7964 | 5.0 | 1590 | 0.7796 | 0.9158 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.2+cpu
- Datasets 1.18.4
- Tokenizers 0.10.3
|
bigmorning/try-m
|
bigmorning
| 2022-03-25T04:01:22Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T16:02:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: try-m
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# try-m
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
espnet/marathi_openslr64_wav2vec2_asrconformer5
|
espnet
| 2022-03-25T03:25:20Z | 0 | 0 | null |
[
"tensorboard",
"region:us"
] | null | 2022-03-23T21:14:55Z |
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Mar 23 05:58:21 UTC 2022`
- python version: `3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:24:11) [GCC 9.4.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `1991a25855821b8b61d775681aa0cdfd6161bbc8`
- Commit date: `Mon Mar 21 22:19:19 2022 +0800`
## asr_train_asr_conformer5_raw_mr_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/dev_mr|137|1563|80.2|17.1|2.8|2.0|21.8|71.5|
|inference_asr_model_valid.acc.ave/test_mr|200|2536|73.9|20.8|5.4|1.1|27.2|82.0|
|inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/dev_mr|137|1563|81.3|15.6|3.1|2.0|20.7|72.3|
|inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_mr|200|2536|76.6|20.7|2.7|0.9|24.3|80.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/dev_mr|137|9369|93.7|2.8|3.5|2.3|8.6|71.5|
|inference_asr_model_valid.acc.ave/test_mr|200|14174|90.3|3.7|5.9|1.6|11.3|82.0|
|inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/dev_mr|137|9369|92.4|3.8|3.8|2.7|10.2|72.3|
|inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_mr|200|14174|88.3|7.6|4.1|2.7|14.4|80.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave/dev_mr|137|6050|90.0|5.6|4.5|2.4|12.4|71.5|
|inference_asr_model_valid.acc.ave/test_mr|200|9254|85.6|7.6|6.8|1.6|16.0|82.0|
|inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/dev_mr|137|6050|88.8|7.0|4.2|2.7|13.9|72.3|
|inference_lm_config_mr_bpe150_valid.loss.ave_asr_model_valid.acc.ave/test_mr|200|9254|83.2|12.3|4.5|3.9|20.7|80.5|
|
sanchit-gandhi/wav2vec2-2-rnd-no-adapter-regularisation
|
sanchit-gandhi
| 2022-03-25T03:10:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-22T10:13:48Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7177
- Wer: 0.1283
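The Wer reported above is word-level edit distance divided by the number of reference words; a minimal sketch (assuming whitespace tokenization and the standard Levenshtein definition):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion over 3 reference words
```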
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.1228 | 1.68 | 1500 | 6.0490 | 1.1433 |
| 5.4173 | 3.36 | 3000 | 5.3453 | 1.4878 |
| 4.1635 | 5.04 | 4500 | 4.4185 | 0.9644 |
| 2.1246 | 6.73 | 6000 | 3.2089 | 0.5026 |
| 1.88 | 8.41 | 7500 | 1.9886 | 0.3438 |
| 1.2606 | 10.09 | 9000 | 1.4472 | 0.2487 |
| 0.7492 | 11.77 | 10500 | 1.1716 | 0.1949 |
| 0.8868 | 13.45 | 12000 | 1.0146 | 0.1702 |
| 0.5078 | 15.13 | 13500 | 0.8821 | 0.1548 |
| 0.4515 | 16.82 | 15000 | 0.8181 | 0.1417 |
| 0.3902 | 18.5 | 16500 | 0.7765 | 0.1364 |
| 0.3575 | 20.18 | 18000 | 0.7367 | 0.1333 |
| 0.2903 | 21.86 | 19500 | 0.7211 | 0.1301 |
| 0.2698 | 23.54 | 21000 | 0.7177 | 0.1283 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
A8month/RestNet-B
|
A8month
| 2022-03-25T01:52:29Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-25T01:52:29Z |
---
license: apache-2.0
---
|
Aureliano/distilbert-base-uncased-if
|
Aureliano
| 2022-03-25T00:06:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-21T17:46:18Z |
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (uncased) for Interactive Fiction
[`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) finetuned on a dataset of Interactive
Fiction commands.
Details on the datasets can be found [here](https://github.com/aporporato/jericho-corpora).
The resulting model scored an accuracy of 0.976253 on the WordNet task test set.
## How to use the discriminator in `transformers`
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = discriminator.config.id2label[tf.math.argmax(prediction).numpy()]
print(text, ":", label) # take.v.04 -> "get into one's hands, take physically"
```
## How to use the discriminator in `transformers` on a custom dataset
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModel, TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used:
# you should fine-tune the model on your own corpus of commands, larger than this one
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
label2id=label2id,
id2label=id2label)
discriminator.distilbert = TFAutoModel.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenizer = AutoTokenizer.from_pretrained("Aureliano/distilbert-base-uncased-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 20
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=2e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
## How to use in a Rasa pipeline
The model can be integrated in a Rasa pipeline through
a [`LanguageModelFeaturizer`](https://rasa.com/docs/rasa/components#languagemodelfeaturizer)
```yaml
recipe: default.v1
language: en
pipeline:
# See https://rasa.com/docs/rasa/tuning-your-model for more information.
...
- name: "WhitespaceTokenizer"
...
- name: LanguageModelFeaturizer
model_name: "distilbert"
model_weights: "Aureliano/distilbert-base-uncased-if"
...
```
|
ahmeddbahaa/mt5-small-finetuned-mt5-en
|
ahmeddbahaa
| 2022-03-24T20:02:45Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-24T15:17:24Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-mt5-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: english
metrics:
- name: Rouge1
type: rouge
value: 23.8952
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mt5-en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8345
- Rouge1: 23.8952
- Rouge2: 5.8792
- Rougel: 18.6495
- Rougelsum: 18.7057
## Model description
More information needed
## Intended uses & limitations
More information needed
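As a hedged sketch of intended use (the helper below is hypothetical, not taken from the training code), long articles can be split into word-bounded chunks that fit mT5-small's input budget before summarization:

```python
# Hypothetical preprocessing helper: split a long document into word-bounded
# chunks so each piece stays within the model's input budget.
def chunk_text(text: str, max_words: int = 200):
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

# Each chunk would then be passed to a summarization pipeline, e.g.:
# from transformers import pipeline
# summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-small-finetuned-mt5-en")
# summaries = [summarizer(c, max_length=64)[0]["summary_text"] for c in chunk_text(article)]
```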
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| No log | 1.0 | 224 | 3.0150 | 24.4639 | 5.3016 | 18.3987 | 18.4963 |
| No log | 2.0 | 448 | 2.8738 | 24.5075 | 5.842 | 18.8133 | 18.9072 |
| No log | 3.0 | 672 | 2.8345 | 23.8952 | 5.8792 | 18.6495 | 18.7057 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mp6kv/feedback_intent_test
|
mp6kv
| 2022-03-24T18:42:11Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-05T19:53:12Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: feedback_intent_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# feedback_intent_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
Custom data generated by labeling text according to these three categories:
- Positive: encouraging the student that they are correct and on the right track
- Neutral: mixed feedback, or feedback that asks for more information
- Negative: informing the student that they need to change direction or that they are not correct

Takes a string of user input text and classifies it into one of the three categories.
## Intended uses & limitations
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/feedback_intent_test")
output = classifier("great job, you're getting it!")

score = output[0]["score"]
label = output[0]["label"]
```
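When the pipeline is asked for all class scores (`return_all_scores=True`), a small post-processing step picks the most confident intent. The helper below is a hypothetical sketch, not part of the model:

```python
# Hypothetical post-processing: given the list of {label, score} dicts that a
# text-classification pipeline returns with return_all_scores=True, pick the
# most confident intent.
def top_intent(scores):
    best = max(scores, key=lambda s: s["score"])
    return best["label"], best["score"]

example = [
    {"label": "Positive", "score": 0.91},
    {"label": "Neutral", "score": 0.07},
    {"label": "Negative", "score": 0.02},
]
label, score = top_intent(example)  # -> ("Positive", 0.91)
```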
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
mp6kv/pump_intent_test
|
mp6kv
| 2022-03-24T18:40:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-05T19:25:45Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pump_intent_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pump_intent_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
Custom data generated by labeling text according to these three categories, which are the subcategories of Pump - essentially, when a user asks a question and expects an answer in response:
- Value: a slot value or a calculation
- Clarification: asking for further information on a previous answer
- Testing: testing for knowledge of facts and definitions

Takes a string of user input text and classifies it into one of the three categories.
## Intended uses & limitations
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/pump_intent_test")
output = classifier("What is the value of the length of the blue object?")

score = output[0]["score"]
label = output[0]["label"]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
optimum/roberta-base-squad2
|
optimum
| 2022-03-24T16:12:36Z | 164 | 1 |
transformers
|
[
"transformers",
"onnx",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-24T16:11:57Z |
---
language: en
datasets:
- squad_v2
license: cc-by-4.0
---
# ONNX convert roberta-base for QA
## Conversion of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
NOTE: This is version 2 of the model. See [this github issue](https://github.com/deepset-ai/FARM/issues/552) from the FARM repository for an explanation of why we updated. If you'd like to use version 1, specify `revision="v1.0"` when loading the model in Transformers 3.5. For example:
```
model_name = "deepset/roberta-base-squad2"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
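Since this repository hosts the ONNX export of the model, it can also be served through ONNX Runtime via the `optimum` library. The following is a minimal sketch, assuming `optimum` is installed with its onnxruntime extras:

```python
def build_onnx_qa(model_id: str = "optimum/roberta-base-squad2"):
    """Build a question-answering pipeline backed by ONNX Runtime.

    Imports are kept inside the function so it stays cheap to define;
    calling it downloads the ONNX weights from the Hub.
    """
    from optimum.onnxruntime import ORTModelForQuestionAnswering
    from transformers import AutoTokenizer, pipeline

    model = ORTModelForQuestionAnswering.from_pretrained(model_id)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    return pipeline("question-answering", model=model, tokenizer=tokenizer)

# qa = build_onnx_qa()
# qa(question="Why is model conversion important?",
#    context="The option to convert models between FARM and transformers "
#            "gives freedom to the user.")
```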
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many documents instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
huggingtweets/untiltrees
|
huggingtweets
| 2022-03-24T16:08:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T15:42:21Z |
---
language: en
thumbnail: http://www.huggingtweets.com/untiltrees/1648138126631/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1350186722596974593/lANAV_Xj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dancing Box</div>
<div style="text-align: center; font-size: 14px;">@untiltrees</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dancing Box.
| Data | Dancing Box |
| --- | --- |
| Tweets downloaded | 994 |
| Retweets | 41 |
| Short tweets | 91 |
| Tweets kept | 862 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36kia24g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @untiltrees's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8md8jogv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8md8jogv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/untiltrees')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gayanin/t5-small-med-term-conditional-masking
|
gayanin
| 2022-03-24T14:54:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T20:16:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-conditional-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-conditional-masking
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6808
- Rouge2 Precision: 0.6855
- Rouge2 Recall: 0.486
- Rouge2 Fmeasure: 0.5507
## Model description
More information needed
## Intended uses & limitations
More information needed
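As a hedged sketch of intended use (the masking scheme below is an assumption, not taken from the training code; T5-style sentinel tokens such as `<extra_id_0>` are commonly used), medical terms can be masked in an input sentence before it is passed to the seq2seq model:

```python
# Hypothetical preprocessing sketch: replace each listed medical term with a
# T5-style sentinel token so the fine-tuned model can reconstruct it.
def mask_terms(sentence: str, terms):
    masked = sentence
    for i, term in enumerate(terms):
        masked = masked.replace(term, f"<extra_id_{i}>")
    return masked

mask_terms("the patient shows signs of hypertension", ["hypertension"])
# -> "the patient shows signs of <extra_id_0>"
# The masked string would then be fed to the seq2seq model, e.g. via
# pipeline("text2text-generation", model="gayanin/t5-small-med-term-conditional-masking")
```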
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9303 | 1.0 | 15827 | 0.8262 | 0.6603 | 0.4698 | 0.5318 |
| 0.8677 | 2.0 | 31654 | 0.7679 | 0.6695 | 0.4762 | 0.539 |
| 0.8315 | 3.0 | 47481 | 0.7393 | 0.6741 | 0.4783 | 0.5418 |
| 0.7999 | 4.0 | 63308 | 0.7194 | 0.6774 | 0.4811 | 0.5448 |
| 0.7746 | 5.0 | 79135 | 0.7059 | 0.6804 | 0.4815 | 0.5459 |
| 0.7785 | 6.0 | 94962 | 0.6958 | 0.6827 | 0.4841 | 0.5485 |
| 0.7592 | 7.0 | 110789 | 0.6893 | 0.6841 | 0.4849 | 0.5494 |
| 0.745 | 8.0 | 126616 | 0.6849 | 0.6846 | 0.4852 | 0.5498 |
| 0.7443 | 9.0 | 142443 | 0.6818 | 0.6854 | 0.4865 | 0.551 |
| 0.7417 | 10.0 | 158270 | 0.6808 | 0.6855 | 0.486 | 0.5507 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
NbAiLab/wav2vec2-large-voxrex-npsc-nynorsk
|
NbAiLab
| 2022-03-24T13:40:35Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"NbAiLab/NPSC",
"robust-speech-event",
"no",
"nn-NO",
"hf-asr-leaderboard",
"dataset:NbAiLab/NPSC",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- automatic-speech-recognition
- NbAiLab/NPSC
- robust-speech-event
- "no"
- nn-NO
- hf-asr-leaderboard
datasets:
- NbAiLab/NPSC
language:
- nn-NO
model-index:
- name: wav2vec2-large-voxrex-npsc-nynorsk
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: NPSC
type: NbAiLab/NPSC
args: 16K_mp3_nynorsk
metrics:
- name: Test (Nynorsk) WER
type: wer
value: 0.12220762155059132
- name: Test (Nynorsk) CER
type: cer
value: 0.04195612578778549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-voxrex-npsc-nynorsk
This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the NBAILAB/NPSC - 16K_MP3_NYNORSK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4142
- Wer: 0.1576
## Model description
More information needed
## Intended uses & limitations
More information needed
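As a hedged usage sketch (not from the training code), the checkpoint can be tried through the ASR pipeline; note that the NPSC training audio is 16 kHz MP3:

```python
def build_asr(model_id: str = "NbAiLab/wav2vec2-large-voxrex-npsc-nynorsk"):
    """Construct an automatic-speech-recognition pipeline for this checkpoint.

    The import lives inside the function so defining it stays cheap;
    calling it downloads the model weights from the Hub.
    """
    from transformers import pipeline
    return pipeline("automatic-speech-recognition", model=model_id)

# transcription = build_asr()("sample_16k.mp3")["text"]  # the path is a placeholder
```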
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.086 | 2.17 | 500 | 3.0773 | 1.0 |
| 2.8532 | 4.35 | 1000 | 2.8393 | 1.0 |
| 0.9738 | 6.52 | 1500 | 0.7283 | 0.4890 |
| 0.6763 | 8.7 | 2000 | 0.5340 | 0.3662 |
| 0.5303 | 10.87 | 2500 | 0.4521 | 0.3140 |
| 0.4765 | 13.04 | 3000 | 0.4181 | 0.2853 |
| 0.4219 | 15.22 | 3500 | 0.4156 | 0.2934 |
| 0.3564 | 17.39 | 4000 | 0.3925 | 0.2509 |
| 0.3282 | 19.57 | 4500 | 0.3824 | 0.2420 |
| 0.3118 | 21.74 | 5000 | 0.3636 | 0.2354 |
| 0.2919 | 23.91 | 5500 | 0.3615 | 0.2281 |
| 0.2961 | 26.09 | 6000 | 0.3548 | 0.2255 |
| 0.284 | 28.26 | 6500 | 0.3526 | 0.2209 |
| 0.2566 | 30.43 | 7000 | 0.3526 | 0.2205 |
| 0.2422 | 32.61 | 7500 | 0.3569 | 0.2173 |
| 0.2472 | 34.78 | 8000 | 0.3592 | 0.2166 |
| 0.2337 | 36.96 | 8500 | 0.3625 | 0.2172 |
| 0.2315 | 39.13 | 9000 | 0.3580 | 0.2155 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/melindagates
|
huggingtweets
| 2022-03-24T13:28:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-24T13:22:09Z |
---
language: en
thumbnail: http://www.huggingtweets.com/melindagates/1648128524647/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1054713372845862912/1SR434Pr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Melinda French Gates</div>
<div style="text-align: center; font-size: 14px;">@melindagates</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Melinda French Gates.
| Data | Melinda French Gates |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 231 |
| Short tweets | 2 |
| Tweets kept | 3017 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39nn0ehw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @melindagates's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2xcx4bfy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2xcx4bfy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/melindagates')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2
|
espnet
| 2022-03-24T12:42:03Z | 1 | 0 |
espnet
|
[
"espnet",
"tensorboard",
"audio",
"automatic-speech-recognition",
"en",
"dataset:DSTC2",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-24T12:03:09Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- DSTC2
license: cc-by-4.0
---
## ESPnet2 ASR pretrained model
### `espnet/Karthik_DSTC2_asr_train_asr_wav2vec_conformer_2`
This model was trained by Karthik using the DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tomascufaro/xls-r-es-test
|
tomascufaro
| 2022-03-24T11:58:49Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"es",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- es
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-es-test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: es
metrics:
- name: Test WER
type: wer
value: 12.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 36.08
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 39.19
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-es-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1304
- WER: 0.1261
- CER: 0.035
## Model description
More information needed
## Intended uses & limitations
More information needed
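The WER figures reported above can be reproduced with a plain word-level edit-distance computation; in practice a library such as `jiwer` or `evaluate` would be used, but a minimal sketch looks like this:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

wer("hola que tal", "hola tal")  # one deletion over three words -> 1/3
```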
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9613 | 0.07 | 500 | 2.9647 | 1.0 |
| 2.604 | 0.14 | 1000 | 1.8300 | 0.9562 |
| 1.177 | 0.21 | 1500 | 0.3652 | 0.3077 |
| 1.0745 | 0.28 | 2000 | 0.2707 | 0.2504 |
| 1.0103 | 0.35 | 2500 | 0.2338 | 0.2157 |
| 0.9858 | 0.42 | 3000 | 0.2321 | 0.2129 |
| 0.974 | 0.49 | 3500 | 0.2164 | 0.2031 |
| 0.9699 | 0.56 | 4000 | 0.2078 | 0.1970 |
| 0.9513 | 0.63 | 4500 | 0.2173 | 0.2139 |
| 0.9657 | 0.7 | 5000 | 0.2050 | 0.1979 |
| 0.9484 | 0.77 | 5500 | 0.2008 | 0.1919 |
| 0.9317 | 0.84 | 6000 | 0.2012 | 0.1911 |
| 0.9366 | 0.91 | 6500 | 0.2024 | 0.1976 |
| 0.9242 | 0.98 | 7000 | 0.2062 | 0.2028 |
| 0.9138 | 1.05 | 7500 | 0.1924 | 0.1863 |
| 0.921 | 1.12 | 8000 | 0.1935 | 0.1836 |
| 0.9117 | 1.19 | 8500 | 0.1887 | 0.1815 |
| 0.9064 | 1.26 | 9000 | 0.1909 | 0.1839 |
| 0.9118 | 1.32 | 9500 | 0.1869 | 0.1830 |
| 0.9121 | 1.39 | 10000 | 0.1863 | 0.1802 |
| 0.9048 | 1.46 | 10500 | 0.1845 | 0.1791 |
| 0.8955 | 1.53 | 11000 | 0.1863 | 0.1774 |
| 0.8947 | 1.6 | 11500 | 0.1907 | 0.1814 |
| 0.9073 | 1.67 | 12000 | 0.1892 | 0.1853 |
| 0.8927 | 1.74 | 12500 | 0.1821 | 0.1750 |
| 0.8732 | 1.81 | 13000 | 0.1815 | 0.1768 |
| 0.8761 | 1.88 | 13500 | 0.1822 | 0.1749 |
| 0.8751 | 1.95 | 14000 | 0.1789 | 0.1715 |
| 0.8889 | 2.02 | 14500 | 0.1819 | 0.1791 |
| 0.8864 | 2.09 | 15000 | 0.1826 | 0.1794 |
| 0.886 | 2.16 | 15500 | 0.1788 | 0.1776 |
| 0.8915 | 2.23 | 16000 | 0.1756 | 0.1719 |
| 0.8689 | 2.3 | 16500 | 0.1769 | 0.1711 |
| 0.879 | 2.37 | 17000 | 0.1777 | 0.1739 |
| 0.8692 | 2.44 | 17500 | 0.1765 | 0.1705 |
| 0.8504 | 2.51 | 18000 | 0.1699 | 0.1652 |
| 0.8728 | 2.58 | 18500 | 0.1705 | 0.1694 |
| 0.8523 | 2.65 | 19000 | 0.1674 | 0.1645 |
| 0.8513 | 2.72 | 19500 | 0.1661 | 0.1611 |
| 0.8498 | 2.79 | 20000 | 0.1660 | 0.1631 |
| 0.8432 | 2.86 | 20500 | 0.1636 | 0.1610 |
| 0.8492 | 2.93 | 21000 | 0.1708 | 0.1688 |
| 0.8561 | 3.0 | 21500 | 0.1663 | 0.1604 |
| 0.842 | 3.07 | 22000 | 0.1690 | 0.1625 |
| 0.857 | 3.14 | 22500 | 0.1642 | 0.1605 |
| 0.8518 | 3.21 | 23000 | 0.1626 | 0.1585 |
| 0.8506 | 3.28 | 23500 | 0.1651 | 0.1605 |
| 0.8394 | 3.35 | 24000 | 0.1647 | 0.1585 |
| 0.8431 | 3.42 | 24500 | 0.1632 | 0.1573 |
| 0.8566 | 3.49 | 25000 | 0.1614 | 0.1550 |
| 0.8534 | 3.56 | 25500 | 0.1645 | 0.1589 |
| 0.8386 | 3.63 | 26000 | 0.1632 | 0.1582 |
| 0.8357 | 3.7 | 26500 | 0.1631 | 0.1556 |
| 0.8299 | 3.77 | 27000 | 0.1612 | 0.1550 |
| 0.8421 | 3.84 | 27500 | 0.1602 | 0.1552 |
| 0.8375 | 3.91 | 28000 | 0.1592 | 0.1537 |
| 0.8328 | 3.97 | 28500 | 0.1587 | 0.1537 |
| 0.8155 | 4.04 | 29000 | 0.1587 | 0.1520 |
| 0.8335 | 4.11 | 29500 | 0.1624 | 0.1556 |
| 0.8138 | 4.18 | 30000 | 0.1581 | 0.1547 |
| 0.8195 | 4.25 | 30500 | 0.1560 | 0.1507 |
| 0.8092 | 4.32 | 31000 | 0.1561 | 0.1534 |
| 0.8191 | 4.39 | 31500 | 0.1549 | 0.1493 |
| 0.8008 | 4.46 | 32000 | 0.1540 | 0.1493 |
| 0.8138 | 4.53 | 32500 | 0.1544 | 0.1493 |
| 0.8173 | 4.6 | 33000 | 0.1553 | 0.1511 |
| 0.8081 | 4.67 | 33500 | 0.1541 | 0.1484 |
| 0.8192 | 4.74 | 34000 | 0.1560 | 0.1506 |
| 0.8068 | 4.81 | 34500 | 0.1540 | 0.1503 |
| 0.8105 | 4.88 | 35000 | 0.1529 | 0.1483 |
| 0.7976 | 4.95 | 35500 | 0.1507 | 0.1451 |
| 0.8143 | 5.02 | 36000 | 0.1505 | 0.1462 |
| 0.8053 | 5.09 | 36500 | 0.1517 | 0.1476 |
| 0.785 | 5.16 | 37000 | 0.1526 | 0.1478 |
| 0.7936 | 5.23 | 37500 | 0.1489 | 0.1421 |
| 0.807 | 5.3 | 38000 | 0.1483 | 0.1420 |
| 0.8092 | 5.37 | 38500 | 0.1481 | 0.1435 |
| 0.793 | 5.44 | 39000 | 0.1503 | 0.1438 |
| 0.814 | 5.51 | 39500 | 0.1495 | 0.1480 |
| 0.807 | 5.58 | 40000 | 0.1472 | 0.1424 |
| 0.7913 | 5.65 | 40500 | 0.1471 | 0.1422 |
| 0.7844 | 5.72 | 41000 | 0.1473 | 0.1422 |
| 0.7888 | 5.79 | 41500 | 0.1445 | 0.1385 |
| 0.7806 | 5.86 | 42000 | 0.1435 | 0.1394 |
| 0.7773 | 5.93 | 42500 | 0.1461 | 0.1424 |
| 0.786 | 6.0 | 43000 | 0.1450 | 0.1413 |
| 0.7784 | 6.07 | 43500 | 0.1463 | 0.1424 |
| 0.7937 | 6.14 | 44000 | 0.1438 | 0.1386 |
| 0.7738 | 6.21 | 44500 | 0.1437 | 0.1383 |
| 0.7728 | 6.28 | 45000 | 0.1424 | 0.1371 |
| 0.7681 | 6.35 | 45500 | 0.1416 | 0.1376 |
| 0.776 | 6.42 | 46000 | 0.1415 | 0.1380 |
| 0.7773 | 6.49 | 46500 | 0.1416 | 0.1371 |
| 0.7692 | 6.56 | 47000 | 0.1398 | 0.1345 |
| 0.7642 | 6.62 | 47500 | 0.1381 | 0.1341 |
| 0.7692 | 6.69 | 48000 | 0.1392 | 0.1334 |
| 0.7667 | 6.76 | 48500 | 0.1392 | 0.1348 |
| 0.7712 | 6.83 | 49000 | 0.1398 | 0.1333 |
| 0.7628 | 6.9 | 49500 | 0.1392 | 0.1344 |
| 0.7622 | 6.97 | 50000 | 0.1377 | 0.1329 |
| 0.7639 | 7.04 | 50500 | 0.1361 | 0.1316 |
| 0.742 | 7.11 | 51000 | 0.1376 | 0.1327 |
| 0.7526 | 7.18 | 51500 | 0.1387 | 0.1342 |
| 0.7606 | 7.25 | 52000 | 0.1363 | 0.1316 |
| 0.7626 | 7.32 | 52500 | 0.1365 | 0.1313 |
| 0.752 | 7.39 | 53000 | 0.1354 | 0.1309 |
| 0.7562 | 7.46 | 53500 | 0.1362 | 0.1312 |
| 0.7557 | 7.53 | 54000 | 0.1358 | 0.1325 |
| 0.7588 | 7.6 | 54500 | 0.1343 | 0.1311 |
| 0.7485 | 7.67 | 55000 | 0.1346 | 0.1301 |
| 0.7466 | 7.74 | 55500 | 0.1354 | 0.1314 |
| 0.7558 | 7.81 | 56000 | 0.1359 | 0.1325 |
| 0.7578 | 7.88 | 56500 | 0.1363 | 0.1334 |
| 0.7411 | 7.95 | 57000 | 0.1346 | 0.1301 |
| 0.7478 | 8.02 | 57500 | 0.1355 | 0.1305 |
| 0.7451 | 8.09 | 58000 | 0.1349 | 0.1302 |
| 0.7383 | 8.16 | 58500 | 0.1349 | 0.1294 |
| 0.7482 | 8.23 | 59000 | 0.1341 | 0.1293 |
| 0.742 | 8.3 | 59500 | 0.1338 | 0.1296 |
| 0.7343 | 8.37 | 60000 | 0.1348 | 0.1307 |
| 0.7385 | 8.44 | 60500 | 0.1324 | 0.1282 |
| 0.7567 | 8.51 | 61000 | 0.1334 | 0.1281 |
| 0.7342 | 8.58 | 61500 | 0.1338 | 0.1289 |
| 0.7401 | 8.65 | 62000 | 0.1331 | 0.1285 |
| 0.7362 | 8.72 | 62500 | 0.1329 | 0.1283 |
| 0.7241 | 8.79 | 63000 | 0.1323 | 0.1277 |
| 0.7244 | 8.86 | 63500 | 0.1317 | 0.1269 |
| 0.7274 | 8.93 | 64000 | 0.1308 | 0.1260 |
| 0.7411 | 9.0 | 64500 | 0.1309 | 0.1256 |
| 0.7255 | 9.07 | 65000 | 0.1316 | 0.1265 |
| 0.7406 | 9.14 | 65500 | 0.1315 | 0.1270 |
| 0.7418 | 9.21 | 66000 | 0.1315 | 0.1269 |
| 0.7301 | 9.27 | 66500 | 0.1315 | 0.1273 |
| 0.7248 | 9.34 | 67000 | 0.1323 | 0.1274 |
| 0.7423 | 9.41 | 67500 | 0.1309 | 0.1267 |
| 0.7152 | 9.48 | 68000 | 0.1312 | 0.1271 |
| 0.7295 | 9.55 | 68500 | 0.1306 | 0.1262 |
| 0.7231 | 9.62 | 69000 | 0.1308 | 0.1263 |
| 0.7344 | 9.69 | 69500 | 0.1313 | 0.1267 |
| 0.7264 | 9.76 | 70000 | 0.1305 | 0.1263 |
| 0.7309 | 9.83 | 70500 | 0.1303 | 0.1262 |
| 0.73 | 9.9 | 71000 | 0.1303 | 0.1261 |
| 0.7353 | 9.97 | 71500 | 0.1304 | 0.1260 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sammy786/wav2vec2-xlsr-romansh_sursilvan
|
sammy786
| 2022-03-24T11:58:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"rm-sursilv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- rm-sursilv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- rm-sursilv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-romansh_sursilvan
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: 13.82
- name: Test CER
type: cer
value: 3.02
---
# sammy786/wav2vec2-xlsr-romansh_sursilvan
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - rm-sursilv dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 16.38
- Wer: 21.25
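The WER above appears to be reported as a percentage (the table below reports the same metric as a fraction). A minimal word error rate computation can be sketched as follows; this uses a plain word-level Levenshtein distance and is not the exact evaluation script used for this model:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("a b c d", "a b d"))  # one deletion over four words -> 0.25
```

Multiplying the returned fraction by 100 gives the percentage form used in the summary above.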
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Romansh Sursilvan train.tsv, dev.tsv and other.tsv.
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 train-validation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
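The hyperparameters above map roughly onto `transformers.TrainingArguments` as sketched below; this is a config fragment for illustration, the argument names follow the transformers API and the output path is hypothetical:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-xlsr-romansh_sursilvan",  # hypothetical path
    learning_rate=4.5637994662983496e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=13,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=500,
    num_train_epochs=40,
    fp16=True,                       # "Native AMP" mixed precision
)
```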
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 4.825500 | 2.932350 | 1.000000 |
| 400 | 1.325600 | 0.292645 | 0.415436 |
| 600 | 0.709800 | 0.219167 | 0.324451 |
| 800 | 0.576800 | 0.174390 | 0.275477 |
| 1000 | 0.538100 | 0.183737 | 0.272116 |
| 1200 | 0.475200 | 0.159078 | 0.253871 |
| 1400 | 0.420400 | 0.167277 | 0.240907 |
| 1600 | 0.393500 | 0.167216 | 0.247269 |
| 1800 | 0.407500 | 0.178282 | 0.239827 |
| 2000 | 0.374400 | 0.184590 | 0.239467 |
| 2200 | 0.382600 | 0.164106 | 0.227824 |
| 2400 | 0.363100 | 0.162543 | 0.228544 |
| 2600 | 0.199000 | 0.172903 | 0.231665 |
| 2800 | 0.150800 | 0.160117 | 0.222662 |
| 3000 | 0.101100 | 0.169553 | 0.222662 |
| 3200 | 0.104200 | 0.161056 | 0.220622 |
| 3400 | 0.096900 | 0.161562 | 0.216781 |
| 3600 | 0.092200 | 0.163880 | 0.212580 |
| 3800 | 0.089200 | 0.162288 | 0.214140 |
| 4000 | 0.076200 | 0.160470 | 0.213540 |
| 4200 | 0.087900 | 0.162827 | 0.213060 |
| 4400 | 0.066200 | 0.161096 | 0.213300 |
| 4600 | 0.076000 | 0.162060 | 0.213660 |
| 4800 | 0.071400 | 0.162045 | 0.213300 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-romansh_sursilvan --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test
```
|
sammy786/wav2vec2-xlsr-kyrgyz
|
sammy786
| 2022-03-24T11:58:41Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ky",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ky
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ky
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-kyrgyz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ky
metrics:
- name: Test WER
type: wer
value: 25.24
- name: Test CER
type: cer
value: 6.25
---
# sammy786/wav2vec2-xlsr-kyrgyz
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ky dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 43.06
- Wer: 39.19
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Kyrgyz train.tsv, dev.tsv and other.tsv.
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 train-validation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
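The warmup-plus-cosine learning-rate schedule listed above can be sketched as follows; this is a simplification that ignores the restart cycles of `cosine_with_restarts` and assumes a single decay to zero:

```python
import math

def lr_at(step: int, peak_lr: float, warmup_steps: int, total_steps: int) -> float:
    """Linear warmup to peak_lr, then a single cosine decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

peak = 4.5637994662983496e-05
print(lr_at(250, peak, 500, 2800))  # halfway through warmup -> peak / 2
print(lr_at(500, peak, 500, 2800))  # end of warmup -> peak
```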
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 200 | 5.357800 | 2.700367 | 1.000000 |
| 400 | 1.513600 | 0.642542 | 0.598820 |
| 600 | 0.961900 | 0.530665 | 0.502739 |
| 800 | 0.776000 | 0.507709 | 0.462705 |
| 1000 | 0.646100 | 0.453115 | 0.444164 |
| 1200 | 0.581200 | 0.454797 | 0.438264 |
| 1400 | 0.437900 | 0.459389 | 0.426464 |
| 1600 | 0.348600 | 0.401247 | 0.416351 |
| 1800 | 0.312800 | 0.436135 | 0.409608 |
| 2000 | 0.294100 | 0.440911 | 0.398651 |
| 2200 | 0.281400 | 0.432729 | 0.394016 |
| 2400 | 0.258400 | 0.429860 | 0.393595 |
| 2600 | 0.263700 | 0.432689 | 0.395280 |
| 2800 | 0.256900 | 0.430672 | 0.391909 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-kyrgyz --dataset mozilla-foundation/common_voice_8_0 --config ky --split test
```
|
sammy786/wav2vec2-xlsr-dhivehi
|
sammy786
| 2022-03-24T11:58:38Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- dv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- dv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-dhivehi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 26.91
- name: Test CER
type: cer
value: 4.02
---
# sammy786/wav2vec2-xlsr-dhivehi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - dv dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 14.86
- Wer: 29.32
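The Test CER reported in the metadata is the character-level analogue of WER. A minimal sketch, using a plain character-level Levenshtein distance rather than the exact scorer:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance over reference length."""
    hyp = list(hypothesis)
    prev = list(range(len(hyp) + 1))  # edit distances for the empty reference prefix
    for i, r in enumerate(reference, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j - 1] + (r != h), prev[j] + 1, curr[j - 1] + 1))
        prev = curr
    return prev[len(hyp)] / len(reference)

print(cer("abcd", "abd"))  # one deletion over four characters -> 0.25
```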
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Dhivehi train.tsv, dev.tsv and other.tsv.
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 train-validation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 200 | 4.883800 | 3.190218 | 1.000000 |
| 400 | 1.600100 | 0.497887 | 0.726159 |
| 600 | 0.928500 | 0.358781 | 0.603892 |
| 800 | 0.867900 | 0.309132 | 0.570786 |
| 1000 | 0.743100 | 0.309116 | 0.552954 |
| 1200 | 0.725100 | 0.266839 | 0.538378 |
| 1400 | 0.786200 | 0.259797 | 0.535897 |
| 1600 | 0.655700 | 0.245691 | 0.517290 |
| 1800 | 0.650500 | 0.246957 | 0.516204 |
| 2000 | 0.685500 | 0.234808 | 0.516204 |
| 2200 | 0.487100 | 0.228409 | 0.507753 |
| 2400 | 0.401300 | 0.221087 | 0.495968 |
| 2600 | 0.359300 | 0.212476 | 0.489301 |
| 2800 | 0.347300 | 0.204848 | 0.487750 |
| 3000 | 0.327000 | 0.203163 | 0.478756 |
| 3200 | 0.337100 | 0.210235 | 0.487595 |
| 3400 | 0.308900 | 0.201471 | 0.491316 |
| 3600 | 0.292600 | 0.192437 | 0.476120 |
| 3800 | 0.289600 | 0.198398 | 0.468445 |
| 4000 | 0.290200 | 0.193484 | 0.467204 |
| 4200 | 0.272600 | 0.193999 | 0.470150 |
| 4400 | 0.266700 | 0.187384 | 0.460769 |
| 4600 | 0.253800 | 0.187279 | 0.476663 |
| 4800 | 0.266400 | 0.197395 | 0.466817 |
| 5000 | 0.258000 | 0.188920 | 0.456660 |
| 5200 | 0.237200 | 0.180770 | 0.457358 |
| 5400 | 0.237900 | 0.178149 | 0.448287 |
| 5600 | 0.232600 | 0.179827 | 0.461002 |
| 5800 | 0.228500 | 0.182142 | 0.445185 |
| 6000 | 0.221000 | 0.173619 | 0.440688 |
| 6200 | 0.219500 | 0.172291 | 0.442859 |
| 6400 | 0.219400 | 0.173339 | 0.430609 |
| 6600 | 0.201900 | 0.177552 | 0.426423 |
| 6800 | 0.199000 | 0.173157 | 0.429834 |
| 7000 | 0.200000 | 0.166503 | 0.423709 |
| 7200 | 0.194600 | 0.171812 | 0.429834 |
| 7400 | 0.192100 | 0.164989 | 0.420530 |
| 7600 | 0.185000 | 0.168355 | 0.418825 |
| 7800 | 0.175100 | 0.168128 | 0.419290 |
| 8000 | 0.173500 | 0.167959 | 0.424950 |
| 8200 | 0.172200 | 0.173643 | 0.414793 |
| 8400 | 0.164200 | 0.167020 | 0.406342 |
| 8600 | 0.170800 | 0.168050 | 0.405334 |
| 8800 | 0.157900 | 0.164290 | 0.396573 |
| 9000 | 0.159900 | 0.163188 | 0.397426 |
| 9200 | 0.151700 | 0.164370 | 0.390991 |
| 9400 | 0.146600 | 0.165053 | 0.392852 |
| 9600 | 0.142200 | 0.164939 | 0.391844 |
| 9800 | 0.148300 | 0.164422 | 0.385719 |
| 10000 | 0.136200 | 0.166569 | 0.385951 |
| 10200 | 0.140700 | 0.161377 | 0.379594 |
| 10400 | 0.133300 | 0.165194 | 0.378276 |
| 10600 | 0.131300 | 0.164328 | 0.369205 |
| 10800 | 0.135500 | 0.160254 | 0.373236 |
| 11000 | 0.121100 | 0.163522 | 0.372693 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-dhivehi --dataset mozilla-foundation/common_voice_8_0 --config dv --split test
```
|
sammy786/wav2vec2-xlsr-chuvash
|
sammy786
| 2022-03-24T11:58:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"cv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- cv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- cv
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sammy786/wav2vec2-xlsr-chuvash
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: cv
metrics:
- name: Test WER
type: wer
value: 27.81
- name: Test CER
type: cer
value: 5.79
---
# sammy786/wav2vec2-xlsr-chuvash
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - cv dataset.
It achieves the following results on the evaluation set (10 percent of the train split merged with the other and dev splits):
- Loss: 18.02
- Wer: 29.22
## Model description
"facebook/wav2vec2-xls-r-1b" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data: Common Voice Chuvash train.tsv, dev.tsv and other.tsv.
## Training procedure
To create the train dataset, all available splits were appended and a 90-10 train-validation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000045637994662983496
- train_batch_size: 8
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:--------:|
| 200 | 6.559100 | 2.274687 | 1.000000 |
| 400 | 1.346100 | 0.508268 | 0.681995 |
| 600 | 0.797500 | 0.391174 | 0.572876 |
| 800 | 0.556300 | 0.308620 | 0.489283 |
| 1000 | 0.435800 | 0.273956 | 0.454014 |
| 1200 | 0.388700 | 0.311027 | 0.499415 |
| 1400 | 0.338300 | 0.243977 | 0.413874 |
| 1600 | 0.294000 | 0.214134 | 0.385230 |
| 1800 | 0.276000 | 0.245991 | 0.397311 |
| 2000 | 0.253900 | 0.208324 | 0.363016 |
| 2200 | 0.233600 | 0.222156 | 0.370811 |
| 2400 | 0.219700 | 0.202602 | 0.364186 |
| 2600 | 0.205000 | 0.241339 | 0.384451 |
| 2800 | 0.176000 | 0.263558 | 0.384061 |
| 3000 | 0.166700 | 0.211768 | 0.333398 |
| 3200 | 0.160600 | 0.198677 | 0.321512 |
| 3400 | 0.154600 | 0.208655 | 0.328722 |
| 3600 | 0.146800 | 0.188022 | 0.317810 |
| 3800 | 0.133200 | 0.181083 | 0.313133 |
| 4000 | 0.134200 | 0.190084 | 0.316251 |
| 4200 | 0.114200 | 0.193034 | 0.312159 |
| 4400 | 0.117300 | 0.194122 | 0.312354 |
| 4600 | 0.112300 | 0.191111 | 0.305534 |
| 4800 | 0.107800 | 0.185930 | 0.302611 |
| 5000 | 0.100400 | 0.178625 | 0.299883 |
| 5200 | 0.099800 | 0.176442 | 0.294622 |
| 5400 | 0.100800 | 0.177935 | 0.294427 |
| 5600 | 0.096300 | 0.182903 | 0.293843 |
| 5800 | 0.094200 | 0.181041 | 0.293453 |
| 6000 | 0.097600 | 0.179865 | 0.290725 |
| 6200 | 0.091600 | 0.180327 | 0.292868 |
| 6400 | 0.093100 | 0.180275 | 0.292284 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id sammy786/wav2vec2-xlsr-chuvash --dataset mozilla-foundation/common_voice_8_0 --config cv --split test
```
|
lgris/wav2vec2-xls-r-gn-cv7
|
lgris
| 2022-03-24T11:58:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"gn",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- gn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- gn
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-gn-cv7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: gn
metrics:
- name: Validation WER
type: wer
value: 73.02
- name: Validation CER
type: cer
value: 17.79
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: gn
metrics:
- name: Test WER
type: wer
value: 62.65
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-gn-cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7197
- Wer: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 13000
- mixed_precision_training: Native AMP
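Gradient accumulation yields the effective batch size listed above (8 × 2 = 16). The mechanics can be sketched with plain numbers; this is a toy illustration where the "gradient" is just a sample mean, not the trainer's actual loop:

```python
def mean_grad(samples):
    """Stand-in for a backward pass: the 'gradient' is just the sample mean."""
    return sum(samples) / len(samples)

batch = list(range(16))                 # one effective batch of 16 samples
micro_batches = [batch[:8], batch[8:]]  # split into 2 micro-batches of 8

# Accumulate: average the per-micro-batch gradients before one optimizer step.
accumulated = sum(mean_grad(mb) for mb in micro_batches) / len(micro_batches)

assert accumulated == mean_grad(batch)  # matches a single 16-sample step
```

Note the equality holds here because the micro-batches are equal-sized; with a ragged final micro-batch the accumulated average would differ slightly.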
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.4669 | 6.24 | 100 | 3.3003 | 1.0 |
| 3.3214 | 12.48 | 200 | 3.2090 | 1.0 |
| 3.1619 | 18.73 | 300 | 2.6322 | 1.0 |
| 1.751 | 24.97 | 400 | 1.4089 | 0.9803 |
| 0.7997 | 31.24 | 500 | 0.9996 | 0.9211 |
| 0.4996 | 37.48 | 600 | 0.9879 | 0.8553 |
| 0.3677 | 43.73 | 700 | 0.9543 | 0.8289 |
| 0.2851 | 49.97 | 800 | 1.0627 | 0.8487 |
| 0.2556 | 56.24 | 900 | 1.0933 | 0.8355 |
| 0.2268 | 62.48 | 1000 | 0.9191 | 0.8026 |
| 0.1914 | 68.73 | 1100 | 0.9582 | 0.7961 |
| 0.1749 | 74.97 | 1200 | 1.0502 | 0.8092 |
| 0.157 | 81.24 | 1300 | 0.9998 | 0.7632 |
| 0.1505 | 87.48 | 1400 | 1.0076 | 0.7303 |
| 0.1278 | 93.73 | 1500 | 0.9321 | 0.75 |
| 0.1078 | 99.97 | 1600 | 1.0383 | 0.7697 |
| 0.1156 | 106.24 | 1700 | 1.0302 | 0.7763 |
| 0.1107 | 112.48 | 1800 | 1.0419 | 0.7763 |
| 0.091 | 118.73 | 1900 | 1.0694 | 0.75 |
| 0.0829 | 124.97 | 2000 | 1.0257 | 0.7829 |
| 0.0865 | 131.24 | 2100 | 1.2108 | 0.7368 |
| 0.0907 | 137.48 | 2200 | 1.0458 | 0.7697 |
| 0.0897 | 143.73 | 2300 | 1.1504 | 0.7895 |
| 0.0766 | 149.97 | 2400 | 1.1663 | 0.7237 |
| 0.0659 | 156.24 | 2500 | 1.1320 | 0.7632 |
| 0.0699 | 162.48 | 2600 | 1.2586 | 0.7434 |
| 0.0613 | 168.73 | 2700 | 1.1815 | 0.8158 |
| 0.0598 | 174.97 | 2800 | 1.3299 | 0.75 |
| 0.0577 | 181.24 | 2900 | 1.2035 | 0.7171 |
| 0.0576 | 187.48 | 3000 | 1.2134 | 0.7434 |
| 0.0518 | 193.73 | 3100 | 1.3406 | 0.7566 |
| 0.0524 | 199.97 | 3200 | 1.4251 | 0.75 |
| 0.0467 | 206.24 | 3300 | 1.3533 | 0.7697 |
| 0.0428 | 212.48 | 3400 | 1.2463 | 0.7368 |
| 0.0453 | 218.73 | 3500 | 1.4532 | 0.7566 |
| 0.0473 | 224.97 | 3600 | 1.3152 | 0.7434 |
| 0.0451 | 231.24 | 3700 | 1.2232 | 0.7368 |
| 0.0361 | 237.48 | 3800 | 1.2938 | 0.7171 |
| 0.045 | 243.73 | 3900 | 1.4148 | 0.7434 |
| 0.0422 | 249.97 | 4000 | 1.3786 | 0.7961 |
| 0.036 | 256.24 | 4100 | 1.4488 | 0.7697 |
| 0.0352 | 262.48 | 4200 | 1.2294 | 0.6776 |
| 0.0326 | 268.73 | 4300 | 1.2796 | 0.6974 |
| 0.034 | 274.97 | 4400 | 1.3805 | 0.7303 |
| 0.0305 | 281.24 | 4500 | 1.4994 | 0.7237 |
| 0.0325 | 287.48 | 4600 | 1.4330 | 0.6908 |
| 0.0338 | 293.73 | 4700 | 1.3091 | 0.7368 |
| 0.0306 | 299.97 | 4800 | 1.2174 | 0.7171 |
| 0.0299 | 306.24 | 4900 | 1.3527 | 0.7763 |
| 0.0287 | 312.48 | 5000 | 1.3651 | 0.7368 |
| 0.0274 | 318.73 | 5100 | 1.4337 | 0.7368 |
| 0.0258 | 324.97 | 5200 | 1.3831 | 0.6908 |
| 0.022 | 331.24 | 5300 | 1.3556 | 0.6974 |
| 0.021 | 337.48 | 5400 | 1.3836 | 0.7237 |
| 0.0241 | 343.73 | 5500 | 1.4352 | 0.7039 |
| 0.0229 | 349.97 | 5600 | 1.3904 | 0.7105 |
| 0.026 | 356.24 | 5700 | 1.4131 | 0.7171 |
| 0.021 | 362.48 | 5800 | 1.5426 | 0.6974 |
| 0.0191 | 368.73 | 5900 | 1.5960 | 0.7632 |
| 0.0227 | 374.97 | 6000 | 1.6240 | 0.7368 |
| 0.0204 | 381.24 | 6100 | 1.4301 | 0.7105 |
| 0.0175 | 387.48 | 6200 | 1.5554 | 0.75 |
| 0.0183 | 393.73 | 6300 | 1.6044 | 0.7697 |
| 0.0183 | 399.97 | 6400 | 1.5963 | 0.7368 |
| 0.016 | 406.24 | 6500 | 1.5679 | 0.7829 |
| 0.0178 | 412.48 | 6600 | 1.5928 | 0.7697 |
| 0.014 | 418.73 | 6700 | 1.7000 | 0.7632 |
| 0.0182 | 424.97 | 6800 | 1.5340 | 0.75 |
| 0.0148 | 431.24 | 6900 | 1.9274 | 0.7368 |
| 0.0148 | 437.48 | 7000 | 1.6437 | 0.7697 |
| 0.0173 | 443.73 | 7100 | 1.5468 | 0.75 |
| 0.0109 | 449.97 | 7200 | 1.6083 | 0.75 |
| 0.0167 | 456.24 | 7300 | 1.6732 | 0.75 |
| 0.0139 | 462.48 | 7400 | 1.5097 | 0.7237 |
| 0.013 | 468.73 | 7500 | 1.5947 | 0.7171 |
| 0.0128 | 474.97 | 7600 | 1.6260 | 0.7105 |
| 0.0166 | 481.24 | 7700 | 1.5756 | 0.7237 |
| 0.0127 | 487.48 | 7800 | 1.4506 | 0.6908 |
| 0.013 | 493.73 | 7900 | 1.4882 | 0.7368 |
| 0.0125 | 499.97 | 8000 | 1.5589 | 0.7829 |
| 0.0141 | 506.24 | 8100 | 1.6328 | 0.7434 |
| 0.0115 | 512.48 | 8200 | 1.6586 | 0.7434 |
| 0.0117 | 518.73 | 8300 | 1.6043 | 0.7105 |
| 0.009 | 524.97 | 8400 | 1.6508 | 0.7237 |
| 0.0108 | 531.24 | 8500 | 1.4507 | 0.6974 |
| 0.011 | 537.48 | 8600 | 1.5942 | 0.7434 |
| 0.009 | 543.73 | 8700 | 1.8121 | 0.7697 |
| 0.0112 | 549.97 | 8800 | 1.6923 | 0.7697 |
| 0.0073 | 556.24 | 8900 | 1.7096 | 0.7368 |
| 0.0098 | 562.48 | 9000 | 1.7052 | 0.7829 |
| 0.0088 | 568.73 | 9100 | 1.6956 | 0.7566 |
| 0.0099 | 574.97 | 9200 | 1.4909 | 0.7171 |
| 0.0075 | 581.24 | 9300 | 1.6307 | 0.7697 |
| 0.0077 | 587.48 | 9400 | 1.6196 | 0.7961 |
| 0.0088 | 593.73 | 9500 | 1.6119 | 0.7566 |
| 0.0085 | 599.97 | 9600 | 1.4512 | 0.7368 |
| 0.0086 | 606.24 | 9700 | 1.5992 | 0.7237 |
| 0.0109 | 612.48 | 9800 | 1.4706 | 0.7368 |
| 0.0098 | 618.73 | 9900 | 1.3824 | 0.7171 |
| 0.0091 | 624.97 | 10000 | 1.4776 | 0.6974 |
| 0.0072 | 631.24 | 10100 | 1.4896 | 0.7039 |
| 0.0087 | 637.48 | 10200 | 1.5467 | 0.7368 |
| 0.007 | 643.73 | 10300 | 1.5493 | 0.75 |
| 0.0076 | 649.97 | 10400 | 1.5706 | 0.7303 |
| 0.0085 | 656.24 | 10500 | 1.5748 | 0.7237 |
| 0.0075 | 662.48 | 10600 | 1.5081 | 0.7105 |
| 0.0068 | 668.73 | 10700 | 1.4967 | 0.6842 |
| 0.0117 | 674.97 | 10800 | 1.4986 | 0.7105 |
| 0.0054 | 681.24 | 10900 | 1.5587 | 0.7303 |
| 0.0059 | 687.48 | 11000 | 1.5886 | 0.7171 |
| 0.0071 | 693.73 | 11100 | 1.5746 | 0.7171 |
| 0.0048 | 699.97 | 11200 | 1.6166 | 0.7237 |
| 0.0048 | 706.24 | 11300 | 1.6098 | 0.7237 |
| 0.0056 | 712.48 | 11400 | 1.5834 | 0.7237 |
| 0.0048 | 718.73 | 11500 | 1.5653 | 0.7171 |
| 0.0045 | 724.97 | 11600 | 1.6252 | 0.7237 |
| 0.0068 | 731.24 | 11700 | 1.6794 | 0.7171 |
| 0.0044 | 737.48 | 11800 | 1.6881 | 0.7039 |
| 0.008 | 743.73 | 11900 | 1.7393 | 0.75 |
| 0.0045 | 749.97 | 12000 | 1.6869 | 0.7237 |
| 0.0047 | 756.24 | 12100 | 1.7105 | 0.7303 |
| 0.0057 | 762.48 | 12200 | 1.7439 | 0.7303 |
| 0.004 | 768.73 | 12300 | 1.7871 | 0.7434 |
| 0.0061 | 774.97 | 12400 | 1.7812 | 0.7303 |
| 0.005 | 781.24 | 12500 | 1.7410 | 0.7434 |
| 0.0056 | 787.48 | 12600 | 1.7220 | 0.7303 |
| 0.0064 | 793.73 | 12700 | 1.7141 | 0.7434 |
| 0.0042 | 799.97 | 12800 | 1.7139 | 0.7368 |
| 0.0049 | 806.24 | 12900 | 1.7211 | 0.7434 |
| 0.0044 | 812.48 | 13000 | 1.7197 | 0.7434 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
infinitejoy/wav2vec2-large-xls-r-300m-romansh-vallader
|
infinitejoy
| 2022-03-24T11:58:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"rm-vallader",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- rm-vallader
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- rm-vallader
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Romansh Vallader
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: rm-vallader
metrics:
- name: Test WER
type: wer
value: 31.689
- name: Test CER
type: cer
value: 7.202
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-romansh-vallader
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - RM-VALLADER dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3155
- Wer: 0.3162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9556 | 15.62 | 500 | 2.9300 | 1.0 |
| 1.7874 | 31.25 | 1000 | 0.7566 | 0.6509 |
| 1.0131 | 46.88 | 1500 | 0.3671 | 0.3828 |
| 0.8439 | 62.5 | 2000 | 0.3350 | 0.3416 |
| 0.7502 | 78.12 | 2500 | 0.3155 | 0.3296 |
| 0.7093 | 93.75 | 3000 | 0.3182 | 0.3186 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/wav2vec2-large-xls-r-300m-hausa
|
infinitejoy
| 2022-03-24T11:58:04Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ha",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ha
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- ha
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Hausa
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ha
metrics:
- name: Test WER
type: wer
value: 100
- name: Test CER
type: cer
value: 132.32
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hausa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5756
- Wer: 0.6014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.7064 | 11.36 | 500 | 2.7112 | 1.0 |
| 1.3079 | 22.73 | 1000 | 0.7337 | 0.7776 |
| 1.0919 | 34.09 | 1500 | 0.5938 | 0.7023 |
| 0.9546 | 45.45 | 2000 | 0.5698 | 0.6133 |
| 0.8895 | 56.82 | 2500 | 0.5739 | 0.6142 |
| 0.8152 | 68.18 | 3000 | 0.5579 | 0.6091 |
| 0.7703 | 79.55 | 3500 | 0.5813 | 0.6210 |
| 0.732 | 90.91 | 4000 | 0.5756 | 0.5860 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
azuur/wav2vec2-base-gn-demo | azuur | 2022-03-24T11:57:52Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "hf-asr-leaderboard", "gn", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- gn
tags:
- generated_from_trainer
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- hf-asr-leaderboard
datasets:
- common_voice
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-base-gn-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-gn-demo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7426
- Wer: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 50
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 4.0 | 100 | 0.7045 | 0.7409 |
| No log | 8.0 | 200 | 0.7200 | 0.75 |
| No log | 12.0 | 300 | 0.7400 | 0.7439 |
| No log | 16.0 | 400 | 0.7677 | 0.7515 |
| 0.0846 | 20.0 | 500 | 0.7765 | 0.7271 |
| 0.0846 | 24.0 | 600 | 0.7821 | 0.7287 |
| 0.0846 | 28.0 | 700 | 0.7671 | 0.7180 |
| 0.0846 | 32.0 | 800 | 0.7594 | 0.7180 |
| 0.0846 | 36.0 | 900 | 0.7500 | 0.7165 |
| 0.0713 | 40.0 | 1000 | 0.7351 | 0.7287 |
| 0.0713 | 44.0 | 1100 | 0.7361 | 0.7241 |
| 0.0713 | 48.0 | 1200 | 0.7389 | 0.7378 |
| 0.0713 | 52.0 | 1300 | 0.7424 | 0.7210 |
| 0.0713 | 56.0 | 1400 | 0.7425 | 0.7256 |
| 0.0669 | 60.0 | 1500 | 0.7426 | 0.7256 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm | anuragshas | 2022-03-24T11:57:50Z | 9 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "mt", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- mt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: XLS-R-300M - Maltese
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice 8
args: mt
metrics:
- type: wer
value: 15.967
name: Test WER
- name: Test CER
type: cer
value: 3.657
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Maltese
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1895
- Wer: 0.1984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4219 | 3.6 | 400 | 3.3127 | 1.0 |
| 3.0399 | 7.21 | 800 | 3.0330 | 1.0 |
| 1.5756 | 10.81 | 1200 | 0.6108 | 0.5724 |
| 1.0995 | 14.41 | 1600 | 0.3091 | 0.3154 |
| 0.9639 | 18.02 | 2000 | 0.2596 | 0.2841 |
| 0.9032 | 21.62 | 2400 | 0.2270 | 0.2514 |
| 0.8145 | 25.23 | 2800 | 0.2172 | 0.2483 |
| 0.7845 | 28.83 | 3200 | 0.2084 | 0.2333 |
| 0.7694 | 32.43 | 3600 | 0.1974 | 0.2234 |
| 0.7333 | 36.04 | 4000 | 0.2020 | 0.2185 |
| 0.693 | 39.64 | 4400 | 0.1947 | 0.2148 |
| 0.6802 | 43.24 | 4800 | 0.1960 | 0.2102 |
| 0.667 | 46.85 | 5200 | 0.1904 | 0.2072 |
| 0.6486 | 50.45 | 5600 | 0.1881 | 0.2009 |
| 0.6339 | 54.05 | 6000 | 0.1877 | 0.1989 |
| 0.6254 | 57.66 | 6400 | 0.1893 | 0.2003 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config mt --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-300m-mt-cv8-with-lm"

# Stream one sample from the Common Voice 8 Maltese test split
# (requires an authenticated Hugging Face token)
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mt", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)

# Common Voice audio is 48 kHz; the model expects 16 kHz input
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# batch_decode applies the n-gram language model bundled with the processor
transcription = processor.batch_decode(logits.numpy()).text
# => "għadu jilagħbu ċirku tant bilfondi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 19.853 | 15.967 |
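For reference, the WER numbers above are word-level Levenshtein distance divided by reference length. A minimal sketch of the metric (not the `eval.py` implementation, which normalizes text before scoring):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

One substituted word out of three yields a WER of 1/3, matching how the percentages in the table are computed (scaled by 100).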
|
anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm | anuragshas | 2022-03-24T11:57:47Z | 7 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "lv", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- lv
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Latvian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: lv
metrics:
- name: Test WER
type: wer
value: 9.633
- name: Test CER
type: cer
value: 2.614
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: lv
metrics:
- name: Test WER
type: wer
value: 36.11
- name: Test CER
type: cer
value: 14.244
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: lv
metrics:
- name: Test WER
type: wer
value: 44.12
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Latvian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - LV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1660
- Wer: 0.1705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.489 | 2.56 | 400 | 3.3590 | 1.0 |
| 2.9903 | 5.13 | 800 | 2.9704 | 1.0001 |
| 1.6712 | 7.69 | 1200 | 0.6179 | 0.6566 |
| 1.2635 | 10.26 | 1600 | 0.3176 | 0.4531 |
| 1.0819 | 12.82 | 2000 | 0.2517 | 0.3508 |
| 1.0136 | 15.38 | 2400 | 0.2257 | 0.3124 |
| 0.9625 | 17.95 | 2800 | 0.1975 | 0.2311 |
| 0.901 | 20.51 | 3200 | 0.1986 | 0.2097 |
| 0.8842 | 23.08 | 3600 | 0.1904 | 0.2039 |
| 0.8542 | 25.64 | 4000 | 0.1847 | 0.1981 |
| 0.8244 | 28.21 | 4400 | 0.1805 | 0.1847 |
| 0.7689 | 30.77 | 4800 | 0.1736 | 0.1832 |
| 0.7825 | 33.33 | 5200 | 0.1698 | 0.1821 |
| 0.7817 | 35.9 | 5600 | 0.1758 | 0.1803 |
| 0.7488 | 38.46 | 6000 | 0.1663 | 0.1760 |
| 0.7171 | 41.03 | 6400 | 0.1636 | 0.1721 |
| 0.7222 | 43.59 | 6800 | 0.1663 | 0.1729 |
| 0.7156 | 46.15 | 7200 | 0.1633 | 0.1715 |
| 0.7121 | 48.72 | 7600 | 0.1666 | 0.1718 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config lv --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config lv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
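The `--chunk_length_s 5.0 --stride_length_s 1.0` flags above split long dev-data recordings into overlapping windows so the model never sees more than a few seconds at once. A rough sketch of how such windows could be laid out (the actual pipeline's striding logic differs in detail, e.g. in how edge chunks are handled):

```python
def chunk_windows(num_samples: int, sr: int = 16_000,
                  chunk_length_s: float = 5.0, stride_length_s: float = 1.0):
    """Return (start, end) sample indices for overlapping inference windows.

    Each window overlaps its neighbours by `stride` samples on both sides,
    so interior predictions can be kept and edge predictions discarded.
    """
    chunk = int(chunk_length_s * sr)
    stride = int(stride_length_s * sr)
    step = chunk - 2 * stride  # advance so overlaps cover both window edges
    windows = []
    start = 0
    while start < num_samples:
        windows.append((start, min(start + chunk, num_samples)))
        if start + chunk >= num_samples:
            break  # last window already reaches the end of the signal
        start += step
    return windows
```

For a 10-second clip at 16 kHz this yields three overlapping 5-second windows that together cover the whole signal.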
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm"

# Stream one sample from the Common Voice 8 Latvian test split
# (requires an authenticated Hugging Face token)
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "lv", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)

# Common Voice audio is 48 kHz; the model expects 16 kHz input
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# batch_decode applies the n-gram language model bundled with the processor
transcription = processor.batch_decode(logits.numpy()).text
# => "domāju ka viņam viss labi"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 16.997 | 9.633 |
|