| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| RASMUS/wav2vec2-xlsr-1b-ru | RASMUS | 2022-03-23T18:29:08Z | 43 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "audio", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "speech", "ru", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language: ru
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- speech
model-index:
- name: XLS-R 1B Wav2Vec2 Russian by Rasmus Toivanen
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ru
metrics:
- name: Test WER
type: wer
value: 10.83
- name: Test CER
type: cer
value: 2.41
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ru
metrics:
- name: Test WER
type: wer
value: 37.71
- name: Test CER
type: cer
value: 12.98
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ru
metrics:
- name: Test WER
type: wer
value: 31.89
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-ru
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- Wer: 0.0971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
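As a convenience, the sketch below shows how these values map onto `transformers.TrainingArguments`. It is a minimal reconstruction, not the author's actual script; the output directory is a placeholder, and Adam's betas and epsilon are the library defaults, matching the values listed above.
```python
from transformers import TrainingArguments

# Minimal reconstruction of the configuration listed above; only the values
# named in this card are real, everything else is a placeholder or default.
training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-1b-ru",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon stay at the defaults above
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```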
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5462 | 0.35 | 500 | 0.4027 | 0.3575 |
| 0.498 | 0.69 | 1000 | 0.2588 | 0.2513 |
| 0.4279 | 1.04 | 1500 | 0.2265 | 0.2204 |
| 0.4099 | 1.38 | 2000 | 0.2189 | 0.1979 |
| 0.4688 | 1.73 | 2500 | 0.2100 | 0.1920 |
| 0.2241 | 2.07 | 3000 | 0.1980 | 0.1767 |
| 0.2056 | 2.42 | 3500 | 0.2020 | 0.1683 |
| 0.3423 | 2.76 | 4000 | 0.1862 | 0.1606 |
| 0.2478 | 3.11 | 4500 | 0.1787 | 0.1563 |
| 0.3079 | 3.45 | 5000 | 0.1759 | 0.1555 |
| 0.2477 | 3.8 | 5500 | 0.1713 | 0.1423 |
| 0.1718 | 4.14 | 6000 | 0.1695 | 0.1391 |
| 0.1675 | 4.49 | 6500 | 0.1677 | 0.1372 |
| 0.1631 | 4.83 | 7000 | 0.1652 | 0.1333 |
| 0.1429 | 5.18 | 7500 | 0.1605 | 0.1308 |
| 0.1505 | 5.52 | 8000 | 0.1612 | 0.1245 |
| 0.1385 | 5.87 | 8500 | 0.1487 | 0.1225 |
| 0.1285 | 6.22 | 9000 | 0.1526 | 0.1201 |
| 0.1153 | 6.56 | 9500 | 0.1464 | 0.1172 |
| 0.1159 | 6.91 | 10000 | 0.1505 | 0.1143 |
| 0.1061 | 7.25 | 10500 | 0.1444 | 0.1106 |
| 0.1016 | 7.6 | 11000 | 0.1427 | 0.1075 |
| 0.1125 | 7.94 | 11500 | 0.1386 | 0.1045 |
| 0.0937 | 8.29 | 12000 | 0.1403 | 0.1022 |
| 0.1059 | 8.63 | 12500 | 0.1406 | 0.1022 |
| 0.0857 | 8.98 | 13000 | 0.1372 | 0.0992 |
| 0.0901 | 9.32 | 13500 | 0.1380 | 0.0977 |
| 0.0913 | 9.67 | 14000 | 0.1352 | 0.0971 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| samitizerxu/wav2vec2-xls-r-300m-eo | samitizerxu | 2022-03-23T18:29:06Z | 4 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "eo", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- eo
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- eo
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-eo
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: eo
metrics:
- name: Test WER
type: wer
value: 34.72
- name: Test CER
type: cer
value: 7.54
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-eo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - EO dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2584
- Wer: 0.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1701 | 0.8 | 500 | 2.8105 | 1.0 |
| 1.9143 | 1.6 | 1000 | 0.5977 | 0.7002 |
| 1.1259 | 2.4 | 1500 | 0.5063 | 0.6157 |
| 0.9732 | 3.2 | 2000 | 0.4264 | 0.5673 |
| 0.8983 | 4.0 | 2500 | 0.4249 | 0.4902 |
| 0.8507 | 4.8 | 3000 | 0.3811 | 0.4536 |
| 0.8064 | 5.6 | 3500 | 0.3643 | 0.4467 |
| 0.7866 | 6.4 | 4000 | 0.3600 | 0.4453 |
| 0.7773 | 7.2 | 4500 | 0.3724 | 0.4470 |
| 0.747 | 8.0 | 5000 | 0.3501 | 0.4189 |
| 0.7279 | 8.8 | 5500 | 0.3500 | 0.4261 |
| 0.7153 | 9.6 | 6000 | 0.3328 | 0.3966 |
| 0.7 | 10.4 | 6500 | 0.3314 | 0.3869 |
| 0.6784 | 11.2 | 7000 | 0.3396 | 0.4051 |
| 0.6582 | 12.0 | 7500 | 0.3236 | 0.3899 |
| 0.6478 | 12.8 | 8000 | 0.3263 | 0.3832 |
| 0.6277 | 13.6 | 8500 | 0.3139 | 0.3769 |
| 0.6053 | 14.4 | 9000 | 0.2955 | 0.3536 |
| 0.5777 | 15.2 | 9500 | 0.2793 | 0.3413 |
| 0.5631 | 16.0 | 10000 | 0.2789 | 0.3353 |
| 0.5446 | 16.8 | 10500 | 0.2709 | 0.3264 |
| 0.528 | 17.6 | 11000 | 0.2693 | 0.3234 |
| 0.5169 | 18.4 | 11500 | 0.2656 | 0.3193 |
| 0.5041 | 19.2 | 12000 | 0.2575 | 0.3102 |
| 0.4971 | 20.0 | 12500 | 0.2584 | 0.3114 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-eo --dataset mozilla-foundation/common_voice_7_0 --config eo --split test
```
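`eval.py` reports WER and CER; as a minimal sketch of that metric computation (assuming the `evaluate` library, with placeholder strings):
```python
import evaluate

# Word and character error rate, as reported by eval.py.
wer = evaluate.load("wer")
cer = evaluate.load("cer")

predictions = ["saluton mondo"]  # placeholder model outputs
references = ["saluton mondon"]  # placeholder ground-truth transcriptions

print(wer.compute(predictions=predictions, references=references))
print(cer.compute(predictions=predictions, references=references))
```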
| mpoyraz/wav2vec2-xls-r-300m-cv8-turkish | mpoyraz | 2022-03-23T18:29:03Z | 54 | 3 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "tr", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: tr
tags:
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv8-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: tr
metrics:
- name: Test WER
type: wer
value: 10.61
- name: Test CER
type: cer
value: 2.67
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 36.46
- name: Test CER
type: cer
value: 12.38
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 40.91
---
# wav2vec2-xls-r-300m-cv8-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.
## Training and evaluation data
The following datasets were used for fine-tuning:
- [Common Voice 8.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0): the entire `validated` split, excluding the `test` split, was used for training.
## Training procedure
To support the datasets above, custom pre-processing and loading steps were implemented; the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo was used for that purpose.
### Training hyperparameters
The following hyperparameters were used for fine-tuning:
- learning_rate 2.5e-4
- num_train_epochs 20
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.1
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.1
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
## Language Model
An n-gram language model was trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
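The card does not include a decoding snippet, but a KenLM model like this is typically attached to a CTC model via `pyctcdecode`, roughly as sketched below; the `lm.binary` path is an assumption, not a file name from this repo.
```python
from pyctcdecode import build_ctcdecoder
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("mpoyraz/wav2vec2-xls-r-300m-cv8-turkish")

# Order the tokenizer vocabulary by token id to get the CTC label list.
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# Attach the binary KenLM model produced by ngram-lm-wiki (path is illustrative).
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.binary")
```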
## Evaluation Commands
Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation; it is used for Turkish text processing.
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv8-turkish --dataset mozilla-foundation/common_voice_8_0 --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv8-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results
| Dataset | WER | CER |
|---|---|---|
|Common Voice 8 TR test split| 10.61 | 2.67 |
|Speech Recognition Community dev data| 36.46 | 12.38 |
| lgris/sew-tiny-portuguese-cv8 | lgris | 2022-03-23T18:29:00Z | 16 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "sew", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "pt", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: sew-tiny-portuguese-cv8
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pt
metrics:
- name: Test WER
type: wer
value: 33.71
- name: Test CER
type: cer
value: 10.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 52.79
- name: Test CER
type: cer
value: 20.98
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 53.18
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 55.23
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv8
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4082
- Wer: 0.3053
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.93 | 1000 | 2.9134 | 0.9767 |
| 2.9224 | 3.86 | 2000 | 2.8405 | 0.9789 |
| 2.9224 | 5.79 | 3000 | 2.8094 | 0.9800 |
| 2.8531 | 7.72 | 4000 | 2.7439 | 0.9891 |
| 2.8531 | 9.65 | 5000 | 2.7057 | 1.0159 |
| 2.7721 | 11.58 | 6000 | 2.7235 | 1.0709 |
| 2.7721 | 13.51 | 7000 | 2.5931 | 1.1035 |
| 2.6566 | 15.44 | 8000 | 2.2171 | 0.9884 |
| 2.6566 | 17.37 | 9000 | 1.2399 | 0.8081 |
| 1.9558 | 19.31 | 10000 | 0.9045 | 0.6353 |
| 1.9558 | 21.24 | 11000 | 0.7705 | 0.5533 |
| 1.4987 | 23.17 | 12000 | 0.7068 | 0.5165 |
| 1.4987 | 25.1 | 13000 | 0.6641 | 0.4718 |
| 1.3811 | 27.03 | 14000 | 0.6043 | 0.4470 |
| 1.3811 | 28.96 | 15000 | 0.5532 | 0.4268 |
| 1.2897 | 30.89 | 16000 | 0.5371 | 0.4101 |
| 1.2897 | 32.82 | 17000 | 0.5924 | 0.4150 |
| 1.225 | 34.75 | 18000 | 0.4949 | 0.3894 |
| 1.225 | 36.68 | 19000 | 0.5591 | 0.4045 |
| 1.193 | 38.61 | 20000 | 0.4927 | 0.3731 |
| 1.193 | 40.54 | 21000 | 0.4922 | 0.3712 |
| 1.1482 | 42.47 | 22000 | 0.4799 | 0.3662 |
| 1.1482 | 44.4 | 23000 | 0.4846 | 0.3648 |
| 1.1201 | 46.33 | 24000 | 0.4770 | 0.3623 |
| 1.1201 | 48.26 | 25000 | 0.4530 | 0.3426 |
| 1.0892 | 50.19 | 26000 | 0.4523 | 0.3527 |
| 1.0892 | 52.12 | 27000 | 0.4573 | 0.3443 |
| 1.0583 | 54.05 | 28000 | 0.4488 | 0.3353 |
| 1.0583 | 55.98 | 29000 | 0.4295 | 0.3285 |
| 1.0319 | 57.92 | 30000 | 0.4321 | 0.3220 |
| 1.0319 | 59.85 | 31000 | 0.4244 | 0.3236 |
| 1.0076 | 61.78 | 32000 | 0.4197 | 0.3201 |
| 1.0076 | 63.71 | 33000 | 0.4230 | 0.3208 |
| 0.9851 | 65.64 | 34000 | 0.4090 | 0.3127 |
| 0.9851 | 67.57 | 35000 | 0.4088 | 0.3133 |
| 0.9695 | 69.5 | 36000 | 0.4123 | 0.3088 |
| 0.9695 | 71.43 | 37000 | 0.4017 | 0.3090 |
| 0.9514 | 73.36 | 38000 | 0.4184 | 0.3086 |
| 0.9514 | 75.29 | 39000 | 0.4075 | 0.3043 |
| 0.944 | 77.22 | 40000 | 0.4082 | 0.3053 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
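The card ships no usage example; for completeness, here is a minimal inference sketch with the high-level `pipeline` API (the audio path is a placeholder):
```python
from transformers import pipeline

# SEW checkpoints work with the standard ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="lgris/sew-tiny-portuguese-cv8")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```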
| infinitejoy/wav2vec2-large-xls-r-300m-abkhaz | infinitejoy | 2022-03-23T18:28:58Z | 6 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ab", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- ab
license: apache-2.0
tags:
- ab
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Abkhaz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ab
metrics:
- name: Test WER
type: wer
value: 60.07
- name: Test CER
type: cer
value: 12.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-abkhaz
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5359
- Wer: 0.6192
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.8617 | 22.73 | 500 | 2.6264 | 1.0013 |
| 1.2716 | 45.45 | 1000 | 0.6218 | 0.6942 |
| 1.049 | 68.18 | 1500 | 0.5442 | 0.6368 |
| 0.9632 | 90.91 | 2000 | 0.5364 | 0.6242 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| Harveenchadha/hindi_large_wav2vec2 | Harveenchadha | 2022-03-23T18:28:53Z | 44 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "hi", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:Harveenchadha/indic-voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
license: apache-2.0
language:
- hi
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- hi
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- Harveenchadha/indic-voice
model-index:
- name: Hindi Large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: hi
metrics:
- name: Test WER
type: wer
value: 23.08
- name: Test CER
type: cer
value: 8.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-7.0
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 23.36
- name: Test CER
type: cer
value: 8.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice-8.0
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 24.85
- name: Test CER
type: cer
value: 9.99
---
| anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm | anuragshas | 2022-03-23T18:28:35Z | 6 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sk", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- sk
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Slovak
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sk
metrics:
- name: Test WER
type: wer
value: 18.609
- name: Test CER
type: cer
value: 5.488
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sk
metrics:
- name: Test WER
type: wer
value: 40.548
- name: Test CER
type: cer
value: 17.733
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sk
metrics:
- name: Test WER
type: wer
value: 44.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Slovak
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3067
- Wer: 0.2678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.175 | 2.41 | 400 | 4.6909 | 1.0 |
| 3.3785 | 4.82 | 800 | 3.3080 | 1.0 |
| 2.6964 | 7.23 | 1200 | 2.0651 | 1.1055 |
| 1.3008 | 9.64 | 1600 | 0.5845 | 0.6207 |
| 1.1185 | 12.05 | 2000 | 0.4195 | 0.4193 |
| 1.0252 | 14.46 | 2400 | 0.3824 | 0.3570 |
| 0.935 | 16.87 | 2800 | 0.3693 | 0.3462 |
| 0.8818 | 19.28 | 3200 | 0.3587 | 0.3318 |
| 0.8534 | 21.69 | 3600 | 0.3420 | 0.3180 |
| 0.8137 | 24.1 | 4000 | 0.3426 | 0.3130 |
| 0.7968 | 26.51 | 4400 | 0.3349 | 0.3102 |
| 0.7558 | 28.92 | 4800 | 0.3216 | 0.3019 |
| 0.7313 | 31.33 | 5200 | 0.3451 | 0.3060 |
| 0.7358 | 33.73 | 5600 | 0.3272 | 0.2967 |
| 0.718 | 36.14 | 6000 | 0.3315 | 0.2882 |
| 0.6991 | 38.55 | 6400 | 0.3299 | 0.2830 |
| 0.6529 | 40.96 | 6800 | 0.3140 | 0.2836 |
| 0.6225 | 43.37 | 7200 | 0.3128 | 0.2751 |
| 0.633 | 45.78 | 7600 | 0.3211 | 0.2774 |
| 0.5876 | 48.19 | 8000 | 0.3162 | 0.2764 |
| 0.588 | 50.6 | 8400 | 0.3082 | 0.2722 |
| 0.5915 | 53.01 | 8800 | 0.3120 | 0.2681 |
| 0.5798 | 55.42 | 9200 | 0.3133 | 0.2709 |
| 0.5736 | 57.83 | 9600 | 0.3086 | 0.2676 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config sk --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config sk --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm"

# Stream a single test sample from Common Voice 8 (requires HF authentication).
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# The processor bundles the 5-gram LM, so batch_decode on the raw logits
# performs LM-rescored beam-search decoding.
transcription = processor.batch_decode(logits.numpy()).text
# => ""
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 26.707 | 18.609 |
| infinitejoy/wav2vec2-large-xls-r-300m-arabic | infinitejoy | 2022-03-23T18:28:27Z | 6 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ar", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: ar
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Arabic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: NA
- Wer: NA
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py \
--model_id infinitejoy/wav2vec2-large-xls-r-300m-arabic \
--dataset mozilla-foundation/common_voice_7_0 --config ar --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py \
--model_id infinitejoy/wav2vec2-large-xls-r-300m-arabic --dataset speech-recognition-community-v2/dev_data \
--config ar --split validation --chunk_length_s 10 --stride_length_s 1
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-arabic"

# Stream a single test sample from Common Voice 7 (requires HF authentication).
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "ar", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# batch_decode on the raw logits applies the processor's LM-backed beam search.
transcription = processor.batch_decode(logits.numpy()).text
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| NA | NA |
| emre/wav2vec2-xls-r-300m-Russian-small | emre | 2022-03-23T18:28:22Z | 19 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "ru", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- ru
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Russian-small
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ru
type: common_voice
args: ru
metrics:
- name: Test WER
type: wer
value: 48.38
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ru
metrics:
- name: Test WER
type: wer
value: 58.25
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ru
metrics:
- name: Test WER
type: wer
value: 56.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Russian-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3514
- Wer: 0.4838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.512 | 1.32 | 400 | 3.2207 | 1.0 |
| 3.1562 | 2.65 | 800 | 3.0166 | 1.0 |
| 1.5211 | 3.97 | 1200 | 0.7134 | 0.8275 |
| 0.6724 | 5.3 | 1600 | 0.4713 | 0.6402 |
| 0.4693 | 6.62 | 2000 | 0.3904 | 0.5668 |
| 0.3693 | 7.95 | 2400 | 0.3609 | 0.5121 |
| 0.3004 | 9.27 | 2800 | 0.3514 | 0.4838 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
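The card ships no usage example; a minimal inference sketch (not from the card, using greedy CTC decoding and a placeholder audio path):
```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "emre/wav2vec2-xls-r-300m-Russian-small"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# Load and resample to the 16 kHz the model expects ("sample.wav" is a placeholder).
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000)[0]

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(logits.argmax(dim=-1))[0])
```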
| FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell | FremyCompany | 2022-03-23T18:28:16Z | 9 | 4 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "nl_BE", "nl_NL", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- nl
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- nl
- nl_BE
- nl_NL
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-nl-v1-cv8-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 3.93
- name: Test CER
type: cer
value: 1.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 16.35
- name: Test CER
type: cer
value: 9.64
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 15.81
---
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), with a large 5-gram language model added on top, based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.03931
- Cer: 0.01224
> **IMPORTANT NOTE**: The `hunspell` typo fixer is **not enabled** on the website, which returns raw CTC+LM results. Hunspell reranking is only available in the `eval.py` decoding script. For best results, please use the code in that file while using the model locally for inference.
> **IMPORTANT NOTE**: Evaluating this model requires `apt install libhunspell-dev` and a pip install of `hunspell`, in addition to pip installs of `pypi-kenlm` and `pyctcdecode` (see `install_requirements.sh`); in addition, the chunking lengths and strides were optimized for the model as `12s` and `2s` respectively (see `eval.sh`).
> **QUICK REMARK**: The "Robust Speech Event" set does not contain cleaned transcription text, so its WER/CER are vastly over-estimated. For instance, `2014` in the dev set is left as a number but will be recognized as `tweeduizend veertien`, which counts as 3 mistakes (`2014` missing, and both `tweeduizend` and `veertien` wrongly inserted). Other normalization problems in the dev set include the presence of single quotes around some words, which then end up as non-matches despite being the correct word (but without quotes), and the removal of some speech words from the final transcript (`ja`, etc.). As a result, our real error rate on the dev set is significantly lower than reported.
>
> You can compare the [predictions](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_predictions.txt) with the [targets](https://huggingface.co/FremyCompany/xls-r-2b-nl-v2_lm-5gram-os2_hunspell/blob/main/log_speech-recognition-community-v2_dev_data_nl_validation_targets.txt) on the validation dev set yourself, for example using [this diffing tool](https://countwordsfree.com/comparetexts).
> **WE DO SPEECH RECOGNITION**: Hello reader! If you are considering using this (or another) model in production, but would benefit from a model fine-tuned specifically for your use case (using text and/or labelled speech), feel free to [contact our team](https://www.ugent.be/ea/idlab/en/research/semantic-intelligence/speech-and-audio-processing.htm). This model was developed during the [Robust Speech Recognition challenge](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) event by [François REMY](https://www.linkedin.com/in/fremycompany/) [(twitter)](https://twitter.com/FremyCompany) and [Geoffroy VANDERREYDT](https://be.linkedin.com/in/geoffroy-vanderreydt-a4421460).
> We would like to thank [OVH](https://www.ovhcloud.com/en/public-cloud/ai-training/) for providing us with a V100S GPU.
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
To further deal with typos, `hunspell` is used to propose alternative spellings for words not in the unigrams of the language model. These alternatives are then reranked based on the language model trained above, plus a penalty proportional to the Levenshtein edit distance between the alternative and the recognized word. This, for example, makes it possible to correct `collegas` into `collega's` or `gogol` into `google`.
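As an illustration of this respelling step (a sketch, not the author's implementation), assuming the `hunspell` and `python-Levenshtein` packages, a Dutch dictionary at the usual system paths, and a stand-in `lm_score` function for the 5-gram model:
```python
import hunspell
from Levenshtein import distance  # python-Levenshtein

# Dictionary paths are assumptions; adjust for your system.
h = hunspell.HunSpell("/usr/share/hunspell/nl_NL.dic", "/usr/share/hunspell/nl_NL.aff")

def respell(word, lm_score, alpha=1.0):
    """Pick the best alternative spelling for an out-of-vocabulary word."""
    if h.spell(word):
        return word
    candidates = h.suggest(word) or [word]
    # LM score minus a penalty proportional to the edit distance, as described above.
    return max(candidates, key=lambda c: lm_score(c) - alpha * distance(word, c))
```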
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| FremyCompany/xls-r-2b-nl-v2_lm-5gram-os | FremyCompany | 2022-03-23T18:28:14Z | 5 | 3 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "nl", "nl_BE", "nl_NL", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- nl
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- nl
- nl_BE
- nl_NL
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-nl-v1-cv8-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 4.06
- name: Test CER
type: cer
value: 1.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 17.77
- name: Test CER
type: cer
value: 9.77
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 16.32
---
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), with a large 5-gram language model added on top, based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.04057
- Cer: 0.01222
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
## Training and evaluation data
The model was:
0. initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| nouamanetazi/wav2vec2-xls-r-300m-ar-with-lm | nouamanetazi | 2022-03-23T18:27:54Z | 15 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "ar", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- ar
license: apache-2.0
tags:
- ar
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: XLS-R-300M - Arabic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ar
metrics:
- name: Test WER
type: wer
value: 1.0
- name: Test CER
type: cer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-ar
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - AR dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.0191
- eval_wer: 1.0
- eval_runtime: 252.2389
- eval_samples_per_second: 30.217
- eval_steps_per_second: 0.476
- epoch: 1.0
- step: 340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
Please use the evaluation script `eval.py` included in the repo.
1. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id nouamanetazi/wav2vec2-xls-r-300m-ar --dataset speech-recognition-community-v2/dev_data --config ar --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
| arijitx/wav2vec2-xls-r-300m-bengali | arijitx | 2022-03-23T18:27:52Z | 427 | 6 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "bn", "hf-asr-leaderboard", "openslr_SLR53", "robust-speech-event", "dataset:openslr", "dataset:SLR53", "dataset:AI4Bharat/IndicCorp", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- bn
license: apache-2.0
tags:
- automatic-speech-recognition
- bn
- hf-asr-leaderboard
- openslr_SLR53
- robust-speech-event
datasets:
- openslr
- SLR53
- AI4Bharat/IndicCorp
metrics:
- wer
- cer
model-index:
- name: arijitx/wav2vec2-xls-r-300m-bengali
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: openslr
name: Open SLR
args: SLR53
metrics:
- type: wer
value: 0.21726385291857586
name: Test WER
- type: cer
value: 0.04725010353701041
name: Test CER
- type: wer
value: 0.15322879016421437
name: Test WER with lm
- type: cer
value: 0.03413696666806267
name: Test CER with lm
---
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model:
- WER: 0.21726385291857586
- CER: 0.04725010353701041

With a 5-gram language model trained on 30M sentences randomly chosen from the [AI4Bharat IndicCorp](https://indicnlp.ai4bharat.org/corpora/) dataset:
- WER: 0.15322879016421437
- CER: 0.03413696666806267

Note: the evaluation set consists of 10,935 examples (the last 5% of the data) and was not part of training; training was done on the first 95%. Training was stopped after 180k steps. Output predictions are available under the files section.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset_name="openslr"
- model_name_or_path="facebook/wav2vec2-xls-r-300m"
- dataset_config_name="SLR53"
- output_dir="./wav2vec2-xls-r-300m-bengali"
- overwrite_output_dir
- num_train_epochs="50"
- per_device_train_batch_size="32"
- per_device_eval_batch_size="32"
- gradient_accumulation_steps="1"
- learning_rate="7.5e-5"
- warmup_steps="2000"
- length_column_name="input_length"
- evaluation_strategy="steps"
- text_column_name="sentence"
- chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — ’ … –
- save_steps="2000"
- eval_steps="3000"
- logging_steps="100"
- layerdrop="0.0"
- activation_dropout="0.1"
- save_total_limit="3"
- freeze_feature_encoder
- feat_proj_dropout="0.0"
- mask_time_prob="0.75"
- mask_time_length="10"
- mask_feature_prob="0.25"
- mask_feature_length="64"
- preprocessing_num_workers 32
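For reference, these flags assemble into a launch command along the following lines; this is a sketch assuming the `run_speech_recognition_ctc.py` script from the robust-speech-event examples repo linked in the notes below, not a verbatim copy of the author's command.
```bash
python run_speech_recognition_ctc.py \
    --dataset_name="openslr" \
    --dataset_config_name="SLR53" \
    --model_name_or_path="facebook/wav2vec2-xls-r-300m" \
    --output_dir="./wav2vec2-xls-r-300m-bengali" \
    --num_train_epochs="50" \
    --per_device_train_batch_size="32" \
    --learning_rate="7.5e-5" \
    --warmup_steps="2000" \
    --freeze_feature_encoder \
    --do_train --do_eval
```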
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
### Notes
- Training and eval code modified from: https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event.
- Bengali speech data was not available from the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 was used.
- A minimum audio duration of 0.5s was used to filter the training data, which excluded maybe 10-20 samples.
- OpenSLR53 transcripts are *not* part of the training data of the LM used for evaluation.
| lgris/sew-tiny-portuguese-cv | lgris | 2022-03-23T18:27:49Z | 5 | 0 | transformers | ["transformers", "pytorch", "sew", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "pt", "robust-speech-event", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- common_voice
model-index:
- name: sew-tiny-portuguese-cv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 30.02
- name: Test CER
type: cer
value: 10.34
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 56.46
- name: Test CER
type: cer
value: 22.94
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 57.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 61.3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5110
- Wer: 0.2842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 4.92 | 1000 | 0.8468 | 0.6494 |
| 3.4638 | 9.85 | 2000 | 0.4978 | 0.3815 |
| 3.4638 | 14.78 | 3000 | 0.4734 | 0.3417 |
| 0.9904 | 19.7 | 4000 | 0.4577 | 0.3344 |
| 0.9904 | 24.63 | 5000 | 0.4376 | 0.3170 |
| 0.8849 | 29.55 | 6000 | 0.4225 | 0.3118 |
| 0.8849 | 34.48 | 7000 | 0.4354 | 0.3080 |
| 0.819 | 39.41 | 8000 | 0.4434 | 0.3004 |
| 0.819 | 44.33 | 9000 | 0.4710 | 0.3132 |
| 0.7706 | 49.26 | 10000 | 0.4497 | 0.3064 |
| 0.7706 | 54.19 | 11000 | 0.4598 | 0.3100 |
| 0.7264 | 59.11 | 12000 | 0.4271 | 0.3013 |
| 0.7264 | 64.04 | 13000 | 0.4333 | 0.2959 |
| 0.6909 | 68.96 | 14000 | 0.4554 | 0.3019 |
| 0.6909 | 73.89 | 15000 | 0.4444 | 0.2888 |
| 0.6614 | 78.81 | 16000 | 0.4734 | 0.3081 |
| 0.6614 | 83.74 | 17000 | 0.4820 | 0.3058 |
| 0.6379 | 88.67 | 18000 | 0.4416 | 0.2950 |
| 0.6379 | 93.59 | 19000 | 0.4614 | 0.2974 |
| 0.6055 | 98.52 | 20000 | 0.4812 | 0.3018 |
| 0.6055 | 103.45 | 21000 | 0.4700 | 0.3018 |
| 0.5823 | 108.37 | 22000 | 0.4726 | 0.2999 |
| 0.5823 | 113.3 | 23000 | 0.4979 | 0.2887 |
| 0.5597 | 118.23 | 24000 | 0.4813 | 0.2980 |
| 0.5597 | 123.15 | 25000 | 0.4968 | 0.2972 |
| 0.542 | 128.08 | 26000 | 0.5331 | 0.3059 |
| 0.542 | 133.0 | 27000 | 0.5046 | 0.2978 |
| 0.5185 | 137.93 | 28000 | 0.4882 | 0.2922 |
| 0.5185 | 142.85 | 29000 | 0.4945 | 0.2938 |
| 0.499 | 147.78 | 30000 | 0.4971 | 0.2913 |
| 0.499 | 152.71 | 31000 | 0.4948 | 0.2873 |
| 0.4811 | 157.63 | 32000 | 0.4924 | 0.2918 |
| 0.4811 | 162.56 | 33000 | 0.5128 | 0.2911 |
| 0.4679 | 167.49 | 34000 | 0.5098 | 0.2892 |
| 0.4679 | 172.41 | 35000 | 0.4966 | 0.2863 |
| 0.456 | 177.34 | 36000 | 0.5033 | 0.2839 |
| 0.456 | 182.27 | 37000 | 0.5114 | 0.2875 |
| 0.4453 | 187.19 | 38000 | 0.5154 | 0.2859 |
| 0.4453 | 192.12 | 39000 | 0.5102 | 0.2847 |
| 0.4366 | 197.04 | 40000 | 0.5110 | 0.2842 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| pablouribe/xls-r-spanish-test | pablouribe | 2022-03-23T18:27:46Z | 8 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "es", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: xls-r-spanish-test
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: es
metrics:
- name: Test WER
type: wer
value: 13.89
- name: Test CER
type: cer
value: 3.85
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: es
metrics:
- name: Test WER
type: wer
value: 37.66
- name: Test CER
type: cer
value: 15.32
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: es
metrics:
- name: Test WER
type: wer
value: 41.17
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-spanish-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1461
- Wer: 1.0063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.953 | 0.15 | 1000 | 2.9528 | 1.0 |
| 1.1519 | 0.3 | 2000 | 0.3735 | 1.0357 |
| 1.0278 | 0.45 | 3000 | 0.2529 | 1.0390 |
| 0.9922 | 0.61 | 4000 | 0.2208 | 1.0270 |
| 0.9618 | 0.76 | 5000 | 0.2088 | 1.0294 |
| 0.9364 | 0.91 | 6000 | 0.2019 | 1.0214 |
| 0.9179 | 1.06 | 7000 | 0.1940 | 1.0294 |
| 0.9154 | 1.21 | 8000 | 0.1915 | 1.0290 |
| 0.8985 | 1.36 | 9000 | 0.1837 | 1.0211 |
| 0.9055 | 1.51 | 10000 | 0.1838 | 1.0273 |
| 0.8861 | 1.67 | 11000 | 0.1765 | 1.0139 |
| 0.892 | 1.82 | 12000 | 0.1723 | 1.0188 |
| 0.8778 | 1.97 | 13000 | 0.1735 | 1.0092 |
| 0.8645 | 2.12 | 14000 | 0.1707 | 1.0106 |
| 0.8595 | 2.27 | 15000 | 0.1713 | 1.0186 |
| 0.8392 | 2.42 | 16000 | 0.1686 | 1.0053 |
| 0.8436 | 2.57 | 17000 | 0.1653 | 1.0096 |
| 0.8405 | 2.73 | 18000 | 0.1689 | 1.0077 |
| 0.8382 | 2.88 | 19000 | 0.1645 | 1.0114 |
| 0.8247 | 3.03 | 20000 | 0.1647 | 1.0078 |
| 0.8219 | 3.18 | 21000 | 0.1611 | 1.0026 |
| 0.8024 | 3.33 | 22000 | 0.1580 | 1.0062 |
| 0.8087 | 3.48 | 23000 | 0.1578 | 1.0038 |
| 0.8097 | 3.63 | 24000 | 0.1556 | 1.0057 |
| 0.8094 | 3.79 | 25000 | 0.1552 | 1.0035 |
| 0.7836 | 3.94 | 26000 | 0.1516 | 1.0052 |
| 0.8042 | 4.09 | 27000 | 0.1515 | 1.0054 |
| 0.7925 | 4.24 | 28000 | 0.1499 | 1.0031 |
| 0.7855 | 4.39 | 29000 | 0.1490 | 1.0041 |
| 0.7814 | 4.54 | 30000 | 0.1482 | 1.0068 |
| 0.7859 | 4.69 | 31000 | 0.1460 | 1.0066 |
| 0.7819 | 4.85 | 32000 | 0.1464 | 1.0062 |
| 0.7784 | 5.0 | 33000 | 0.1460 | 1.0063 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
| w11wo/wav2vec2-xls-r-300m-zh-HK-v2 | w11wo | 2022-03-23T18:27:41Z | 15 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "dataset:common_voice", "arxiv:2111.09296", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language: zh-HK
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: Wav2Vec2 XLS-R 300M Cantonese (zh-HK)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 31.73
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 23.02
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 56.6
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 55.11
---
# Wav2Vec2 XLS-R 300M Cantonese (zh-HK)
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `zh-HK` subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH.
All training scripts can be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tree/main) tab, and the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-zh-HK-v2/tensorboard) are logged via TensorBoard.
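As a quick start, the model can be used with the `transformers` ASR pipeline; the snippet below is a minimal sketch (`speech.wav` is a hypothetical 16 kHz mono recording):
```python
from transformers import pipeline

# Minimal inference sketch; any 16 kHz mono audio file works here
asr = pipeline("automatic-speech-recognition", model="w11wo/wav2vec2-xls-r-300m-zh-HK-v2")
print(asr("speech.wav")["text"])
```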
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------ | ------- | ----- | ------------------------------- |
| `wav2vec2-xls-r-300m-zh-HK-v2` | 300M | XLS-R | `Common Voice zh-HK` Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | CER |
| -------------------------------- | ------ | ------ |
| `Common Voice` | 0.8089 | 31.73% |
| `Common Voice 7` | N/A | 23.11% |
| `Common Voice 8` | N/A | 23.02% |
| `Robust Speech Event - Dev Data` | N/A | 56.60% |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 0.0001
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 2000
- `num_epochs`: 100.0
- `mixed_precision_training`: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
| :-----------: | :---: | :---: | :-------------: | :----: | :----: |
| 69.8341 | 1.34 | 500 | 80.0722 | 1.0 | 1.0 |
| 6.6418 | 2.68 | 1000 | 6.6346 | 1.0 | 1.0 |
| 6.2419 | 4.02 | 1500 | 6.2909 | 1.0 | 1.0 |
| 6.0813 | 5.36 | 2000 | 6.1150 | 1.0 | 1.0 |
| 5.9677 | 6.7 | 2500 | 6.0301 | 1.1386 | 1.0028 |
| 5.9296 | 8.04 | 3000 | 5.8975 | 1.2113 | 1.0058 |
| 5.6434 | 9.38 | 3500 | 5.5404 | 2.1624 | 1.0171 |
| 5.1974 | 10.72 | 4000 | 4.5440 | 2.1702 | 0.9366 |
| 4.3601 | 12.06 | 4500 | 3.3839 | 2.2464 | 0.8998 |
| 3.9321 | 13.4 | 5000 | 2.8785 | 2.3097 | 0.8400 |
| 3.6462 | 14.74 | 5500 | 2.5108 | 1.9623 | 0.6663 |
| 3.5156 | 16.09 | 6000 | 2.2790 | 1.6479 | 0.5706 |
| 3.32 | 17.43 | 6500 | 2.1450 | 1.8337 | 0.6244 |
| 3.1918 | 18.77 | 7000 | 1.8536 | 1.9394 | 0.6017 |
| 3.1139 | 20.11 | 7500 | 1.7205 | 1.9112 | 0.5638 |
| 2.8995 | 21.45 | 8000 | 1.5478 | 1.0624 | 0.3250 |
| 2.7572 | 22.79 | 8500 | 1.4068 | 1.1412 | 0.3367 |
| 2.6881 | 24.13 | 9000 | 1.3312 | 2.0100 | 0.5683 |
| 2.5993 | 25.47 | 9500 | 1.2553 | 2.0039 | 0.6450 |
| 2.5304 | 26.81 | 10000 | 1.2422 | 2.0394 | 0.5789 |
| 2.4352 | 28.15 | 10500 | 1.1582 | 1.9970 | 0.5507 |
| 2.3795 | 29.49 | 11000 | 1.1160 | 1.8255 | 0.4844 |
| 2.3287 | 30.83 | 11500 | 1.0775 | 1.4123 | 0.3780 |
| 2.2622 | 32.17 | 12000 | 1.0704 | 1.7445 | 0.4894 |
| 2.2225 | 33.51 | 12500 | 1.0272 | 1.7237 | 0.5058 |
| 2.1843 | 34.85 | 13000 | 0.9756 | 1.8042 | 0.5028 |
| 2.1 | 36.19 | 13500 | 0.9527 | 1.8909 | 0.6055 |
| 2.0741 | 37.53 | 14000 | 0.9418 | 1.9026 | 0.5880 |
| 2.0179 | 38.87 | 14500 | 0.9363 | 1.7977 | 0.5246 |
| 2.0615 | 40.21 | 15000 | 0.9635 | 1.8112 | 0.5599 |
| 1.9448 | 41.55 | 15500 | 0.9249 | 1.7250 | 0.4914 |
| 1.8966 | 42.89 | 16000 | 0.9023 | 1.5829 | 0.4319 |
| 1.8662 | 44.24 | 16500 | 0.9002 | 1.4833 | 0.4230 |
| 1.8136 | 45.58 | 17000 | 0.9076 | 1.1828 | 0.2987 |
| 1.7908 | 46.92 | 17500 | 0.8774 | 1.5773 | 0.4258 |
| 1.7354 | 48.26 | 18000 | 0.8727 | 1.5037 | 0.4024 |
| 1.6739 | 49.6 | 18500 | 0.8636 | 1.1239 | 0.2789 |
| 1.6457 | 50.94 | 19000 | 0.8516 | 1.2269 | 0.3104 |
| 1.5847 | 52.28 | 19500 | 0.8399 | 1.3309 | 0.3360 |
| 1.5971 | 53.62 | 20000 | 0.8441 | 1.3153 | 0.3335 |
| 1.602 | 54.96 | 20500 | 0.8590 | 1.2932 | 0.3433 |
| 1.5063 | 56.3 | 21000 | 0.8334 | 1.1312 | 0.2875 |
| 1.4631 | 57.64 | 21500 | 0.8474 | 1.1698 | 0.2999 |
| 1.4997 | 58.98 | 22000 | 0.8638 | 1.4279 | 0.3854 |
| 1.4301 | 60.32 | 22500 | 0.8550 | 1.2737 | 0.3300 |
| 1.3798 | 61.66 | 23000 | 0.8266 | 1.1802 | 0.2934 |
| 1.3454 | 63.0 | 23500 | 0.8235 | 1.3816 | 0.3711 |
| 1.3678 | 64.34 | 24000 | 0.8550 | 1.6427 | 0.5035 |
| 1.3761 | 65.68 | 24500 | 0.8510 | 1.6709 | 0.4907 |
| 1.2668 | 67.02 | 25000 | 0.8515 | 1.5842 | 0.4505 |
| 1.2835 | 68.36 | 25500 | 0.8283 | 1.5353 | 0.4221 |
| 1.2961 | 69.7 | 26000 | 0.8339 | 1.5743 | 0.4369 |
| 1.2656 | 71.05 | 26500 | 0.8331 | 1.5331 | 0.4217 |
| 1.2556 | 72.39 | 27000 | 0.8242 | 1.4708 | 0.4109 |
| 1.2043 | 73.73 | 27500 | 0.8245 | 1.4469 | 0.4031 |
| 1.2722 | 75.07 | 28000 | 0.8202 | 1.4924 | 0.4096 |
| 1.202 | 76.41 | 28500 | 0.8290 | 1.3807 | 0.3719 |
| 1.1679 | 77.75 | 29000 | 0.8195 | 1.4097 | 0.3749 |
| 1.1967 | 79.09 | 29500 | 0.8059 | 1.2074 | 0.3077 |
| 1.1241 | 80.43 | 30000 | 0.8137 | 1.2451 | 0.3270 |
| 1.1414 | 81.77 | 30500 | 0.8117 | 1.2031 | 0.3121 |
| 1.132 | 83.11 | 31000 | 0.8234 | 1.4266 | 0.3901 |
| 1.0982 | 84.45 | 31500 | 0.8064 | 1.3712 | 0.3607 |
| 1.0797 | 85.79 | 32000 | 0.8167 | 1.3356 | 0.3562 |
| 1.0119 | 87.13 | 32500 | 0.8215 | 1.2754 | 0.3268 |
| 1.0216 | 88.47 | 33000 | 0.8163 | 1.2512 | 0.3184 |
| 1.0375 | 89.81 | 33500 | 0.8137 | 1.2685 | 0.3290 |
| 0.9794 | 91.15 | 34000 | 0.8220 | 1.2724 | 0.3255 |
| 1.0207 | 92.49 | 34500 | 0.8165 | 1.2906 | 0.3361 |
| 1.0169 | 93.83 | 35000 | 0.8153 | 1.2819 | 0.3305 |
| 1.0127 | 95.17 | 35500 | 0.8187 | 1.2832 | 0.3252 |
| 0.9978 | 96.51 | 36000 | 0.8111 | 1.2612 | 0.3210 |
| 0.9923 | 97.85 | 36500 | 0.8076 | 1.2278 | 0.3122 |
| 1.0451 | 99.2 | 37000 | 0.8086 | 1.2451 | 0.3156 |
## Disclaimer
Consider the biases in the pre-training datasets, which may carry over into this model's outputs.
## Authors
Wav2Vec2 XLS-R 300M Cantonese (zh-HK) was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development were done on OVH Cloud.
## Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
lgris/sew-tiny-portuguese-cv7
|
lgris
| 2022-03-23T18:27:38Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"pt",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- pt
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: sew-tiny-portuguese-cv7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 28.9
- name: Test CER
type: cer
value: 9.41
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sv
metrics:
- name: Test WER
type: wer
value: 47.27
- name: Test CER
type: cer
value: 19.62
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pt
metrics:
- name: Test WER
type: wer
value: 47.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pt
metrics:
- name: Test WER
type: wer
value: 49.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-tiny-portuguese-cv7
This model is a fine-tuned version of [lgris/sew-tiny-pt](https://huggingface.co/lgris/sew-tiny-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4232
- Wer: 0.2745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| No log | 2.6 | 1000 | 1.0034 | 0.7308 |
| 4.1307 | 5.19 | 2000 | 0.6274 | 0.4721 |
| 4.1307 | 7.79 | 3000 | 0.5541 | 0.4130 |
| 1.3117 | 10.39 | 4000 | 0.5302 | 0.3880 |
| 1.3117 | 12.99 | 5000 | 0.5082 | 0.3644 |
| 1.2047 | 15.58 | 6000 | 0.4818 | 0.3539 |
| 1.2047 | 18.18 | 7000 | 0.4822 | 0.3477 |
| 1.14 | 20.78 | 8000 | 0.4781 | 0.3428 |
| 1.14 | 23.38 | 9000 | 0.4840 | 0.3401 |
| 1.0818 | 25.97 | 10000 | 0.4613 | 0.3251 |
| 1.0818 | 28.57 | 11000 | 0.4569 | 0.3257 |
| 1.0451 | 31.17 | 12000 | 0.4494 | 0.3132 |
| 1.0451 | 33.77 | 13000 | 0.4560 | 0.3201 |
| 1.011 | 36.36 | 14000 | 0.4687 | 0.3174 |
| 1.011 | 38.96 | 15000 | 0.4397 | 0.3122 |
| 0.9785 | 41.56 | 16000 | 0.4605 | 0.3173 |
| 0.9785 | 44.16 | 17000 | 0.4380 | 0.3064 |
| 0.9458 | 46.75 | 18000 | 0.4372 | 0.3048 |
| 0.9458 | 49.35 | 19000 | 0.4426 | 0.3039 |
| 0.9126 | 51.95 | 20000 | 0.4317 | 0.2962 |
| 0.9126 | 54.54 | 21000 | 0.4345 | 0.2960 |
| 0.8926 | 57.14 | 22000 | 0.4365 | 0.2948 |
| 0.8926 | 59.74 | 23000 | 0.4306 | 0.2940 |
| 0.8654 | 62.34 | 24000 | 0.4303 | 0.2928 |
| 0.8654 | 64.93 | 25000 | 0.4351 | 0.2915 |
| 0.8373 | 67.53 | 26000 | 0.4340 | 0.2909 |
| 0.8373 | 70.13 | 27000 | 0.4279 | 0.2907 |
| 0.83 | 72.73 | 28000 | 0.4214 | 0.2867 |
| 0.83 | 75.32 | 29000 | 0.4256 | 0.2849 |
| 0.8062 | 77.92 | 30000 | 0.4281 | 0.2826 |
| 0.8062 | 80.52 | 31000 | 0.4398 | 0.2865 |
| 0.7846 | 83.12 | 32000 | 0.4218 | 0.2812 |
| 0.7846 | 85.71 | 33000 | 0.4227 | 0.2791 |
| 0.7697 | 88.31 | 34000 | 0.4200 | 0.2767 |
| 0.7697 | 90.91 | 35000 | 0.4285 | 0.2791 |
| 0.7539 | 93.51 | 36000 | 0.4238 | 0.2777 |
| 0.7539 | 96.1 | 37000 | 0.4288 | 0.2757 |
| 0.7413 | 98.7 | 38000 | 0.4205 | 0.2748 |
| 0.7413 | 101.3 | 39000 | 0.4241 | 0.2761 |
| 0.7348 | 103.89 | 40000 | 0.4232 | 0.2745 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
reichenbach/wav2vec2-large-xls-r-300m-hi
|
reichenbach
| 2022-03-23T18:27:23Z | 41 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"hi",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language:
- hi
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4749
- Wer: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.8626 | 4.76 | 400 | 3.6151 | 1.0 |
| 3.5463 | 9.52 | 800 | 3.5778 | 1.0 |
| 3.4415 | 14.28 | 1200 | 3.4525 | 1.0 |
| 3.0927 | 19.05 | 1600 | 2.6220 | 0.9860 |
| 2.0573 | 23.8 | 2000 | 2.3974 | 0.9610 |
| 1.5905 | 28.57 | 2400 | 2.4427 | 0.9558 |
| 1.426 | 33.33 | 2800 | 2.4736 | 0.9475 |
| 1.3147 | 38.09 | 3200 | 2.4494 | 0.9417 |
| 1.2642 | 42.85 | 3600 | 2.4665 | 0.9450 |
| 1.2289 | 47.62 | 4000 | 2.4749 | 0.9420 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
infinitejoy/wav2vec2-large-xls-r-300m-abkhaz-cv8
|
infinitejoy
| 2022-03-23T18:27:00Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
license: apache-2.0
tags:
- ab
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Abkhaz
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ab
metrics:
- name: Test WER
type: wer
value: 27.6
- name: Test CER
type: cer
value: 4.577
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-abkhaz-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1614
- Wer: 0.2907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2881 | 4.26 | 4000 | 0.3764 | 0.6461 |
| 1.0767 | 8.53 | 8000 | 0.2657 | 0.5164 |
| 0.9841 | 12.79 | 12000 | 0.2330 | 0.4445 |
| 0.9274 | 17.06 | 16000 | 0.2134 | 0.3929 |
| 0.8781 | 21.32 | 20000 | 0.1945 | 0.3886 |
| 0.8381 | 25.59 | 24000 | 0.1840 | 0.3737 |
| 0.8054 | 29.85 | 28000 | 0.1756 | 0.3523 |
| 0.7763 | 34.12 | 32000 | 0.1745 | 0.3299 |
| 0.7474 | 38.38 | 36000 | 0.1677 | 0.3074 |
| 0.7298 | 42.64 | 40000 | 0.1649 | 0.2963 |
| 0.7125 | 46.91 | 44000 | 0.1617 | 0.2931 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anuragshas/wav2vec2-xls-r-1b-hi-with-lm
|
anuragshas
| 2022-03-23T18:26:47Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
model-index:
- name: XLS-R-1B - Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: hi
metrics:
- name: Test WER
type: wer
value: 15.899
- name: Test CER
type: cer
value: 5.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-1B - Hindi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6921
- Wer: 0.3547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0674 | 2.07 | 400 | 1.3411 | 0.8835 |
| 1.324 | 4.15 | 800 | 0.9311 | 0.7142 |
| 1.2023 | 6.22 | 1200 | 0.8060 | 0.6170 |
| 1.1573 | 8.29 | 1600 | 0.7415 | 0.4972 |
| 1.1117 | 10.36 | 2000 | 0.7248 | 0.4588 |
| 1.0672 | 12.44 | 2400 | 0.6729 | 0.4350 |
| 1.0336 | 14.51 | 2800 | 0.7117 | 0.4346 |
| 1.0025 | 16.58 | 3200 | 0.7019 | 0.4272 |
| 0.9578 | 18.65 | 3600 | 0.6792 | 0.4118 |
| 0.9272 | 20.73 | 4000 | 0.6863 | 0.4156 |
| 0.9321 | 22.8 | 4400 | 0.6535 | 0.3972 |
| 0.8802 | 24.87 | 4800 | 0.6766 | 0.3906 |
| 0.844 | 26.94 | 5200 | 0.6782 | 0.3949 |
| 0.8387 | 29.02 | 5600 | 0.6916 | 0.3921 |
| 0.8042 | 31.09 | 6000 | 0.6806 | 0.3797 |
| 0.793 | 33.16 | 6400 | 0.7120 | 0.3831 |
| 0.7567 | 35.23 | 6800 | 0.6862 | 0.3808 |
| 0.7463 | 37.31 | 7200 | 0.6893 | 0.3709 |
| 0.7053 | 39.38 | 7600 | 0.7096 | 0.3701 |
| 0.6906 | 41.45 | 8000 | 0.6921 | 0.3676 |
| 0.6891 | 43.52 | 8400 | 0.7167 | 0.3663 |
| 0.658 | 45.6 | 8800 | 0.6833 | 0.3580 |
| 0.6576 | 47.67 | 9200 | 0.6914 | 0.3569 |
| 0.6358 | 49.74 | 9600 | 0.6922 | 0.3551 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-1b-hi-with-lm --dataset mozilla-foundation/common_voice_8_0 --config hi --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "anuragshas/wav2vec2-xls-r-1b-hi-with-lm"

# Stream one test sample from Common Voice 8 (requires a Hugging Face auth token)
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "hi", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)

# Common Voice audio is 48 kHz; the model expects 16 kHz input
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# batch_decode runs beam-search decoding with the attached n-gram LM
transcription = processor.batch_decode(logits.numpy()).text
# => "तुम्हारे पास तीन महीने बचे हैं"
```
### Eval results on Common Voice 8 `test` (WER)
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 26.209 | 15.899 |
|
mpoyraz/wav2vec2-xls-r-300m-cv6-turkish
|
mpoyraz
| 2022-03-23T18:26:27Z | 9 | 7 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: tr
tags:
- automatic-speech-recognition
- common_voice
- hf-asr-leaderboard
- robust-speech-event
- tr
datasets:
- common_voice
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv6-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 6.1
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 8.83
- name: Test CER
type: cer
value: 2.37
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 32.81
- name: Test CER
type: cer
value: 11.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 34.86
---
# wav2vec2-xls-r-300m-cv6-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.
## Training and evaluation data
The following datasets were used for fine-tuning:
- [Common Voice 6.1 TR](https://huggingface.co/datasets/common_voice): the full `validated` split, excluding the `test` split, was used for training.
- [MediaSpeech](https://www.openslr.org/108/)
## Training procedure
To support both of the datasets above, custom pre-processing and loading steps were performed, using the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo for that purpose.
### Training hyperparameters
The following hyperparameters were used for fine-tuning:
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.1
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.1
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.10.3
## Language Model
An n-gram language model was trained on Turkish Wikipedia articles using KenLM; the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
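For reference, a KenLM binary in this format can be attached to a CTC acoustic model with `pyctcdecode` and `Wav2Vec2ProcessorWithLM`; the sketch below is illustrative (the `tr_wiki.binary` path is hypothetical) and not necessarily the exact setup used here:
```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2Processor.from_pretrained("mpoyraz/wav2vec2-xls-r-300m-cv6-turkish")

# pyctcdecode expects the labels ordered by token id
vocab_dict = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab_dict.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(labels, kenlm_model_path="tr_wiki.binary")  # hypothetical path
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
```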
## Evaluation Commands
Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation; it is used for Turkish text processing.
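The package matters because plain Python case folding mishandles the Turkish dotted/dotless `i`; a small illustration (assuming `unicode_tr`'s documented string-subclass API):
```python
from unicode_tr import unicode_tr

print("KIŞ".lower())              # -> 'kiş' (plain Python maps I -> i)
print(unicode_tr("KIŞ").lower())  # -> 'kış' (correct Turkish dotless ı)
```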
1. To evaluate on `common_voice` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset common_voice --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results
| Dataset | WER | CER |
|---|---|---|
| Common Voice 6.1 TR test split | 8.83 | 2.37 |
| Speech Recognition Community dev data | 32.81 | 11.22 |
|
samitizerxu/wav2vec2-xls-r-300m-zh-CN
|
samitizerxu
| 2022-03-23T18:26:06Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"zh",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- zh-CN
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- zh
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-zh-CN
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: zh-CN
metrics:
- name: Test WER
type: wer
value: 80
- name: Test CER
type: cer
value: 40.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-CN
metrics:
- name: Test CER
type: cer
value: 69.1
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-CN
metrics:
- name: Test CER
type: cer
value: 43.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-zh-CN
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-CN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8828
- Wer: 2.0604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 60.2112 | 0.74 | 500 | 64.8189 | 1.0 |
| 8.1128 | 1.48 | 1000 | 6.8997 | 1.0 |
| 6.0492 | 2.22 | 1500 | 5.9677 | 1.9495 |
| 5.9326 | 2.95 | 2000 | 5.8845 | 1.4092 |
| 5.8763 | 3.69 | 2500 | 5.8460 | 1.6126 |
| 5.7888 | 4.43 | 3000 | 5.7545 | 2.2034 |
| 5.735 | 5.17 | 3500 | 5.6777 | 2.3350 |
| 5.6861 | 5.91 | 4000 | 5.5179 | 2.2232 |
| 5.381 | 6.65 | 4500 | 5.1420 | 2.1816 |
| 4.625 | 7.39 | 5000 | 3.9020 | 2.0722 |
| 4.214 | 8.12 | 5500 | 3.3394 | 2.1430 |
| 3.8992 | 8.86 | 6000 | 2.9085 | 2.1534 |
| 3.6481 | 9.6 | 6500 | 2.6208 | 2.3538 |
| 3.4658 | 10.34 | 7000 | 2.3172 | 2.2271 |
| 3.257 | 11.08 | 7500 | 2.0916 | 2.1351 |
| 3.1294 | 11.82 | 8000 | 1.8954 | 2.2133 |
| 3.0266 | 12.56 | 8500 | 1.7673 | 2.0896 |
| 2.9451 | 13.29 | 9000 | 1.6659 | 2.1381 |
| 2.8802 | 14.03 | 9500 | 1.5637 | 2.1969 |
| 2.78 | 14.77 | 10000 | 1.4921 | 2.2335 |
| 2.7049 | 15.51 | 10500 | 1.4132 | 2.2217 |
| 2.6768 | 16.25 | 11000 | 1.3667 | 2.2232 |
| 2.6358 | 16.99 | 11500 | 1.3111 | 2.1286 |
| 2.5802 | 17.72 | 12000 | 1.2679 | 2.1430 |
| 2.5012 | 18.46 | 12500 | 1.2365 | 2.1153 |
| 2.458 | 19.2 | 13000 | 1.2118 | 2.1573 |
| 2.4433 | 19.94 | 13500 | 1.1992 | 2.1336 |
| 2.438 | 20.68 | 14000 | 1.1803 | 2.1509 |
| 2.418 | 21.42 | 14500 | 1.1601 | 2.1232 |
| 2.3322 | 22.16 | 15000 | 1.1418 | 2.1930 |
| 2.3387 | 22.89 | 15500 | 1.1172 | 2.2464 |
| 2.3349 | 23.63 | 16000 | 1.1144 | 2.1856 |
| 2.291 | 24.37 | 16500 | 1.1018 | 2.1930 |
| 2.2766 | 25.11 | 17000 | 1.0883 | 2.1762 |
| 2.2534 | 25.85 | 17500 | 1.0744 | 2.1875 |
| 2.2393 | 26.59 | 18000 | 1.0561 | 2.1846 |
| 2.2085 | 27.33 | 18500 | 1.0466 | 2.1445 |
| 2.1966 | 28.06 | 19000 | 1.0382 | 2.1089 |
| 2.1794 | 28.8 | 19500 | 1.0264 | 1.9861 |
| 2.1423 | 29.54 | 20000 | 1.0246 | 1.9678 |
| 2.1649 | 30.28 | 20500 | 0.9982 | 2.0005 |
| 2.143 | 31.02 | 21000 | 0.9985 | 2.0450 |
| 2.1338 | 31.76 | 21500 | 0.9932 | 2.0025 |
| 2.1076 | 32.5 | 22000 | 0.9903 | 2.0505 |
| 2.0519 | 33.23 | 22500 | 0.9834 | 2.0737 |
| 2.0534 | 33.97 | 23000 | 0.9756 | 2.0247 |
| 2.0121 | 34.71 | 23500 | 0.9688 | 2.1440 |
| 2.0161 | 35.45 | 24000 | 0.9582 | 2.1232 |
| 2.0178 | 36.19 | 24500 | 0.9480 | 2.0896 |
| 2.0154 | 36.93 | 25000 | 0.9483 | 2.0787 |
| 1.9966 | 37.67 | 25500 | 0.9406 | 2.0297 |
| 1.9753 | 38.4 | 26000 | 0.9419 | 2.0346 |
| 1.9524 | 39.14 | 26500 | 0.9274 | 2.0698 |
| 1.9427 | 39.88 | 27000 | 0.9233 | 2.0787 |
| 1.9258 | 40.62 | 27500 | 0.9182 | 2.0529 |
| 1.9031 | 41.36 | 28000 | 0.9150 | 2.0787 |
| 1.9297 | 42.1 | 28500 | 0.9040 | 2.0505 |
| 1.9041 | 42.84 | 29000 | 0.9009 | 2.0579 |
| 1.8929 | 43.57 | 29500 | 0.8968 | 2.0327 |
| 1.9077 | 44.31 | 30000 | 0.8954 | 2.0619 |
| 1.8504 | 45.05 | 30500 | 0.8922 | 2.0737 |
| 1.8732 | 45.79 | 31000 | 0.8898 | 2.0683 |
| 1.877 | 46.53 | 31500 | 0.8849 | 2.0589 |
| 1.8587 | 47.27 | 32000 | 0.8843 | 2.0450 |
| 1.8236 | 48.01 | 32500 | 0.8810 | 2.0554 |
| 1.8392 | 48.74 | 33000 | 0.8820 | 2.0574 |
| 1.8428 | 49.48 | 33500 | 0.8816 | 2.0668 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-zh-CN --dataset mozilla-foundation/common_voice_7_0 --config zh-CN --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-zh-CN --dataset speech-recognition-community-v2/dev_data --config zh-CN --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
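The `--chunk_length_s`/`--stride_length_s` flags above correspond to chunked long-form inference in the `transformers` ASR pipeline; roughly equivalent in Python (a sketch, with a hypothetical audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="samitizerxu/wav2vec2-xls-r-300m-zh-CN")
# Split long audio into 5 s chunks with 1 s striding so chunk boundaries overlap
print(asr("long_audio.wav", chunk_length_s=5.0, stride_length_s=1.0)["text"])
```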
|
ravirajoshi/wav2vec2-large-xls-r-300m-marathi
|
ravirajoshi
| 2022-03-23T18:25:45Z | 20 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"mr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mr
license: apache-2.0
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
model-index:
- name: wav2vec2-large-xls-r-300m-marathi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-marathi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5656
- Wer: 0.2156
|
huggingtweets/rickyflows
|
huggingtweets
| 2022-03-23T18:12:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T17:53:20Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rickyflows/1648058984275/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1385231541278855171/lgH-Kso5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">∞ ricky flowstate ∞</div>
<div style="text-align: center; font-size: 14px;">@rickyflows</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ∞ ricky flowstate ∞.
| Data | ∞ ricky flowstate ∞ |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 86 |
| Short tweets | 506 |
| Tweets kept | 2657 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/gn0lyrdk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rickyflows's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2fkt1gts) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2fkt1gts/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/rickyflows')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/metakuna
|
huggingtweets
| 2022-03-23T17:48:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T17:35:38Z |
---
language: en
thumbnail: http://www.huggingtweets.com/metakuna/1648057688512/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493720826935398408/hB4ndxdj_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">metakuna (8/100 blog posts)</div>
<div style="text-align: center; font-size: 14px;">@metakuna</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from metakuna (8/100 blog posts).
| Data | metakuna (8/100 blog posts) |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 242 |
| Short tweets | 524 |
| Tweets kept | 2469 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9uv1luph/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @metakuna's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k1mb79h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k1mb79h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/metakuna')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
muhammedshihebi/bert-base-multilingual-cased-squad
|
muhammedshihebi
| 2022-03-23T17:48:47Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T17:48:32Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-multilingual-cased-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5271
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
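The optimizer dict above corresponds to what `transformers.create_optimizer` produces for TensorFlow; a minimal sketch of recreating it (step count taken from the config above):
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (polynomial, power=1.0) decay from 2e-5 to 0
# over 18600 steps, no warmup, weight decay rate 0.01 -- as in the config above
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=18600,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```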
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.1256 | 0 |
| 0.7252 | 1 |
| 0.5271 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
machde-edu/test_ML_HF
|
machde-edu
| 2022-03-23T17:27:24Z | 0 | 0 | null |
[
"joblib",
"license:apache-2.0",
"region:us"
] | null | 2022-03-23T17:13:20Z |
---
license: apache-2.0
---
|
huggingtweets/stedmanhalliday
|
huggingtweets
| 2022-03-23T17:16:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T17:16:37Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500999718331199496/yhpq7J8H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">SODI</div>
<div style="text-align: center; font-size: 14px;">@stedmanhalliday</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from SODI.
| Data | SODI |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 59 |
| Short tweets | 559 |
| Tweets kept | 2632 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4ry6l5q3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stedmanhalliday's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lxo4zkg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lxo4zkg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/stedmanhalliday')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/pierreavdb
|
huggingtweets
| 2022-03-23T16:50:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T16:43:47Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pierreavdb/1648054135143/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479780096483512323/LmKFSR3X_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pierre</div>
<div style="text-align: center; font-size: 14px;">@pierreavdb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pierre.
| Data | Pierre |
| --- | --- |
| Tweets downloaded | 1064 |
| Retweets | 172 |
| Short tweets | 133 |
| Tweets kept | 759 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21bimkjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pierreavdb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ji40nkbv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ji40nkbv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/pierreavdb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Rocketknight1/temp-colab-upload-test
|
Rocketknight1
| 2022-03-23T16:29:27Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-23T16:28:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/temp-colab-upload-test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp-colab-upload-test
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5386
- Validation Loss: 0.0000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5386 | 0.0000 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/seanmombo
|
huggingtweets
| 2022-03-23T16:22:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/seanmombo/1648052490598/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1494366913090273285/lmJtNNT2_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">mo bombo</div>
<div style="text-align: center; font-size: 14px;">@seanmombo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from mo bombo.
| Data | mo bombo |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 5 |
| Short tweets | 560 |
| Tweets kept | 2684 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bl9qwdw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @seanmombo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3p8cy5st) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3p8cy5st/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/seanmombo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Zarkit/classificationEsp2
|
Zarkit
| 2022-03-23T15:47:33Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-23T14:22:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Zarkit/classificationEsp2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Zarkit/classificationEsp2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1649
- Validation Loss: 0.7498
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8979, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
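`training_precision: mixed_float16` corresponds to Keras mixed precision; it can be enabled globally before building the model, as in this one-line sketch:
```python
import tensorflow as tf

# float16 compute with float32 variables, matching `mixed_float16` above
tf.keras.mixed_precision.set_global_policy("mixed_float16")
```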
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6010 | 0.5679 | 0 |
| 0.4173 | 0.5552 | 1 |
| 0.1649 | 0.7498 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
joe5campbell/Horovod_Tweet_Sentiment_10k_2eps
|
joe5campbell
| 2022-03-23T15:08:07Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-23T15:07:55Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Horovod_Tweet_Sentiment_10k_2eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Horovod_Tweet_Sentiment_10k_2eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.701302
- Train Accuracy: 0.49375
- Validation Loss: 0.69441336
- Validation Accuracy: 0.51171875
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 0.0003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7017118 | 0.50769234 | 0.6944223 | 0.503125 | 0 |
| 0.701302 | 0.49375 | 0.69441336 | 0.51171875 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
|
apoorvumang/kgt5-base-wikikg90mv2
|
apoorvumang
| 2022-03-23T15:02:38Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T13:16:50Z |
---
license: mit
widget:
- text: "Apoorv Umang Saxena| family name"
example_title: "Family name prediction"
- text: "Apoorv Saxena| country"
example_title: "Country prediction"
- text: "World War 2| followed by"
example_title: "followed by"
---
This is a t5-base model (initialized from pretrained weights) fine-tuned on the WikiKG90Mv2 dataset. Please see https://github.com/apoorvumang/kgt5/ for more details on the method.
This model was trained on the tail entity prediction task, i.e. given a subject entity and a relation, predict the object entity. Input should be provided in the form of "\<entity text\>| \<relation text\>".
We used the raw text titles and descriptions to get entity and relation textual representations. These raw texts were obtained from the OGB dataset itself (dataset/wikikg90m-v2/mapping/entity.csv and relation.csv). The entity representation was set to the title, and the description was used to disambiguate if two entities had the same title. If no disambiguation was possible, we used the Wikidata ID (e.g. Q123456).
We trained the model on WikiKG90Mv2 for approximately 1.5 epochs on 4x1080Ti GPUs. The training time for one epoch was approximately 5.5 days.
To evaluate the model, we sample 300 times from the decoder for each input (s,r) pair. We then remove predictions that do not map back to a valid entity and rank the remaining predictions by their log probabilities; filtering is performed subsequently. **We achieve 0.239 validation MRR** (the full leaderboard is here: https://ogb.stanford.edu/docs/lsc/leaderboards/#wikikg90mv2)
You can try the following code in an IPython notebook to evaluate the pre-trained model. The full procedure of mapping entities to ids, filtering, etc. is not included here for the sake of simplicity, but can be provided on request if needed. Please contact Apoorv (apoorvumang@gmail.com) for clarifications/details.
---------
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2")
model = AutoModelForSeq2SeqLM.from_pretrained("apoorvumang/kgt5-base-wikikg90mv2")
```
```python
import torch
def getScores(ids, scores, pad_token_id):
"""get sequence scores from model.generate output"""
scores = torch.stack(scores, dim=1)
log_probs = torch.log_softmax(scores, dim=2)
# remove start token
ids = ids[:,1:]
# gather needed probs
x = ids.unsqueeze(-1).expand(log_probs.shape)
needed_logits = torch.gather(log_probs, 2, x)
final_logits = needed_logits[:, :, 0]
padded_mask = (ids == pad_token_id)
final_logits[padded_mask] = 0
final_scores = final_logits.sum(dim=-1)
return final_scores.cpu().detach().numpy()
def topkSample(input, model, tokenizer,
num_samples=5,
num_beams=1,
max_output_length=30):
tokenized = tokenizer(input, return_tensors="pt")
out = model.generate(**tokenized,
do_sample=True,
num_return_sequences = num_samples,
num_beams = num_beams,
eos_token_id = tokenizer.eos_token_id,
pad_token_id = tokenizer.pad_token_id,
output_scores = True,
return_dict_in_generate=True,
max_length=max_output_length,)
out_tokens = out.sequences
out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
out_scores = getScores(out_tokens, out.scores, tokenizer.pad_token_id)
pair_list = [(x[0], x[1]) for x in zip(out_str, out_scores)]
sorted_pair_list = sorted(pair_list, key=lambda x:x[1], reverse=True)
return sorted_pair_list
def greedyPredict(input, model, tokenizer):
input_ids = tokenizer([input], return_tensors="pt").input_ids
out_tokens = model.generate(input_ids)
out_str = tokenizer.batch_decode(out_tokens, skip_special_tokens=True)
return out_str[0]
```
```python
# an example from validation set that the model predicts correctly
# you can try your own examples here. what's your noble title?
input = "Sophie Valdemarsdottir| noble title"
out = topkSample(input, model, tokenizer, num_samples=5)
out
```
You can further load the list of entity aliases, keep only those predictions that are valid entities, and then create a reverse mapping from alias -> integer id to get the final predictions in the required format.
However, loading these aliases in memory as a dictionary requires a lot of RAM, and you need to download the aliases file (made available here: https://storage.googleapis.com/kgt5-wikikg90mv2/ent_alias_list.pickle) (relation file: https://storage.googleapis.com/kgt5-wikikg90mv2/rel_alias_list.pickle)
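A rough sketch of this filtering step, reusing `topkSample` from above (assumption: the pickle file holds a list of alias strings ordered by integer entity id; the exact file layout is not documented here):
```python
import pickle

# Assumption: ent_alias_list.pickle contains a list of alias strings whose
# position in the list corresponds to the integer entity id.
with open("ent_alias_list.pickle", "rb") as f:
    ent_alias_list = pickle.load(f)

valid_entities = set(ent_alias_list)
alias_to_id = {alias: idx for idx, alias in enumerate(ent_alias_list)}

preds = topkSample("Sophie Valdemarsdottir| noble title", model, tokenizer,
                   num_samples=300)
# keep only predictions that map back to a known entity (already sorted
# by log probability, best first), then convert aliases to integer ids
filtered = [(alias, score) for alias, score in preds if alias in valid_entities]
pred_ids = [alias_to_id[alias] for alias, _ in filtered]
```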
The submitted validation/test results were obtained by sampling 300 times for each input, then applying the above procedure, followed by filtering to known entities. The final MRR can vary slightly because of this sampling (we found that although beam search gives deterministic output, the results are inferior to sampling a large number of times).
```
# download valid.txt. you can also try same url with test.txt. however test does not contain the correct tails
!wget https://storage.googleapis.com/kgt5-wikikg90mv2/valid.txt
```
```python
fname = 'valid.txt'
valid_lines = []
f = open(fname)
for line in f:
valid_lines.append(line.rstrip())
f.close()
print(valid_lines[0])
```
```python
from tqdm.auto import tqdm
# try unfiltered hits@k. this is approximation since model can sample same seq multiple times
# you should run this on gpu if you want to evaluate on all points with 300 samples each
k = 1
count_at_k = 0
max_predictions = k
max_points = 1000
for line in tqdm(valid_lines[:max_points]):
input, target = line.split('\t')
model_output = topkSample(input, model, tokenizer, num_samples=max_predictions)
prediction_strings = [x[0] for x in model_output]
if target in prediction_strings:
count_at_k += 1
print('Hits at {0} unfiltered: {1}'.format(k, count_at_k/max_points))
```
|
Zarkit/classificationEsp1
|
Zarkit
| 2022-03-23T12:58:27Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T17:07:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: classificationEsp1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# classificationEsp1
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3864, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Gare/opus-mt-en-ro-finetuned-en-to-ro
|
Gare
| 2022-03-23T12:51:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T07:47:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.0527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2878
- Bleu: 28.0527
- Gen Len: 34.079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7445 | 1.0 | 38145 | 1.2878 | 28.0527 | 34.079 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jcollado/english-tweet-tokenizer
|
jcollado
| 2022-03-23T12:41:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-23T12:10:39Z |
# Text preprocessing
This tokenizer has been trained with tweets that have been preprocessed as follows:
1) User mentions (@user_name) have been replaced with the word *user*.
2) URLs have been replaced with the word *url*.
3) WIP.
If you are going to use this tokenizer, we recommend preprocessing your own dataset in the same manner.
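A minimal sketch of that preprocessing, assuming simple regular expressions (the exact patterns used during training are not published):
```python
import re

def preprocess_tweet(text: str) -> str:
    text = re.sub(r"@\w+", "user", text)         # 1) user mentions -> "user"
    text = re.sub(r"https?://\S+", "url", text)  # 2) URLs -> "url"
    return text

print(preprocess_tweet("@alice check https://example.com"))  # user check url
```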
|
willcai/wav2vec2_common_voice_accents_us
|
willcai
| 2022-03-23T11:03:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-22T18:14:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_us
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_us
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.549 | 1.28 | 400 | 0.8521 |
| 0.4066 | 2.56 | 800 | 0.2407 |
| 0.2262 | 3.83 | 1200 | 0.2070 |
| 0.1828 | 5.11 | 1600 | 0.2134 |
| 0.1565 | 6.39 | 2000 | 0.2060 |
| 0.1448 | 7.67 | 2400 | 0.2100 |
| 0.1333 | 8.95 | 2800 | 0.2036 |
| 0.121 | 10.22 | 3200 | 0.2192 |
| 0.1146 | 11.5 | 3600 | 0.2154 |
| 0.1108 | 12.78 | 4000 | 0.2223 |
| 0.1017 | 14.06 | 4400 | 0.2331 |
| 0.094 | 15.34 | 4800 | 0.2257 |
| 0.0896 | 16.61 | 5200 | 0.2229 |
| 0.0825 | 17.89 | 5600 | 0.2229 |
| 0.0777 | 19.17 | 6000 | 0.2417 |
| 0.0719 | 20.45 | 6400 | 0.2433 |
| 0.0659 | 21.73 | 6800 | 0.2447 |
| 0.0651 | 23.0 | 7200 | 0.2446 |
| 0.0587 | 24.28 | 7600 | 0.2542 |
| 0.056 | 25.56 | 8000 | 0.2587 |
| 0.0521 | 26.84 | 8400 | 0.2640 |
| 0.0494 | 28.12 | 8800 | 0.2753 |
| 0.0465 | 29.39 | 9200 | 0.2722 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Newt007/multi-class-attacks
|
Newt007
| 2022-03-23T10:30:59Z | 0 | 0 | null |
[
"license:afl-3.0",
"region:us"
] | null | 2022-03-23T10:28:31Z |
---
license: afl-3.0
---
Requirements:
- Python 3.7

Libraries:
- keras==2.0.2
- tensorflow==2.4.1
|
Daniele/italian-spellchecker
|
Daniele
| 2022-03-23T10:19:19Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"seq2seq",
"it",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-21T14:33:20Z |
---
language:
- it
tags:
- seq2seq
license: mit
---
# Italian Contextual Spellchecker
The model is a fine-tuned version of [IT5](https://huggingface.co/models?search=it5)[1], specifically trained to perform spellchecking as a sequence-to-sequence task.
### USAGE
The input sequence should have the structure <b>seq: <i>your text</i>.</b> Omitting the seq token at the beginning or the final punctuation mark may degrade performance.
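A minimal usage sketch, assuming the model loads with the standard transformers seq2seq classes:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Daniele/italian-spellchecker")
model = AutoModelForSeq2SeqLM.from_pretrained("Daniele/italian-spellchecker")

# Input follows the "seq: <your text>." structure; "esenpio" and "erore"
# are deliberate misspellings.
text = "seq: Questo è un esenpio di testo con qualche erore."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```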
|
Alvenir/bert-punct-restoration-da
|
Alvenir
| 2022-03-23T09:05:15Z | 17,347 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"punctuation restoration",
"da",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-22T17:33:25Z |
---
language: da
tags:
- bert
- punctuation restoration
license: apache-2.0
datasets:
- custom
---
# Bert Punctuation Restoration Danish
This model performs punctuation restoration for Danish. The method used is token classification, similar to how NER models
are trained.
## Model description
TODO
### How to use
The model requires some additional inference code, hence we created an awesome little pip package for inference.
The inference code is based on the `TokenClassificationPipeline` pipeline from huggingface.
First, install the little package by running
```
pip install punctfix
```
Then restoration is as simple as the following snippet:
```python
>>> from punctfix import PunctFixer
>>> fixer = PunctFixer(language="da")
>>> example_text = "mit navn det er rasmus og jeg kommer fra firmaet alvenir det er mig som har trænet denne lækre model"
>>> print(fixer.punctuate(example_text))
'Mit navn det er Rasmus og jeg kommer fra firmaet Alvenir. Det er mig som har trænet denne lækre model.'
>>> example_text = "en dag bliver vi sku glade for at vi nu kan sætte punktummer og kommaer i en sætning det fungerer da meget godt ikke"
>>> print(fixer.punctuate(example_text))
'En dag bliver vi sku glade for, at vi nu kan sætte punktummer og kommaer i en sætning. Det fungerer da meget godt, ikke?'
```
## Training data
To Do
## Training procedure
To Do
### Preprocessing
TODO
## Evaluation results
TODO
|
bigmorning/my-gpt-model-3
|
bigmorning
| 2022-03-23T08:22:22Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-23T05:52:35Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-gpt-model-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-gpt-model-3
This model is a fine-tuned version of [bigmorning/my-gpt-model](https://huggingface.co/bigmorning/my-gpt-model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.1163
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 5.1163 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jkhan447/sentiment-model-sample-group-emotion
|
jkhan447
| 2022-03-23T08:19:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-18T06:53:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-group-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-group-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4604
- Accuracy: 0.7004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cammy/led-base-16384-100-MDS
|
cammy
| 2022-03-23T06:55:50Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T05:32:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-100-MDS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-100-MDS
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1425
- Rouge1: 16.7324
- Rouge2: 5.8501
- Rougel: 13.908
- Rougelsum: 13.8469
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.6187 | 15.1426 | 4.2468 | 13.4488 | 13.38 | 20.0 |
| No log | 2.0 | 50 | 3.9873 | 13.4341 | 3.3283 | 10.2739 | 10.8229 | 20.0 |
| No log | 3.0 | 75 | 4.0264 | 18.1891 | 5.3395 | 15.0797 | 15.3586 | 20.0 |
| No log | 4.0 | 100 | 4.0929 | 17.0091 | 5.5336 | 14.4381 | 14.5149 | 19.5 |
| No log | 5.0 | 125 | 4.1425 | 16.7324 | 5.8501 | 13.908 | 13.8469 | 20.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mimicheng/codeparrot-ds-sample
|
mimicheng
| 2022-03-23T05:30:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-22T22:13:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5057 | 0.93 | 5000 | 1.6003 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Axon/resnet18-v1
|
Axon
| 2022-03-22T23:30:27Z | 0 | 1 | null |
[
"Axon",
"Elixir",
"dataset:ImageNet",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- Axon
- Elixir
datasets:
- ImageNet
---
# ResNet
This ResNet18 model was translated from the ONNX ResNetv1 model found
at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using [AxonOnnx](https://github.com/elixir-nx/axon_onnx).
The following description is copied from the relevant description at the ONNX repository.
## Use cases
These ResNet models perform image classification: they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on the ImageNet dataset, which contains images from 1000 classes. ResNet models provide very high accuracy with affordable model sizes. They are ideal for cases when high classification accuracy is required.
ImageNet-trained models are often used as the base layers for a transfer-learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches.
## Description
Deeper neural networks are more difficult to train. The residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset the residual nets were evaluated with a depth of up to 152 layers, 8× deeper than VGG nets but still with lower complexity.
## Model
ResNet models consist of residual blocks and were introduced to counter the drop in accuracy observed when stacking more layers, caused by the network failing to learn the initial layers.
ResNet v1 uses post-activation for the residual blocks.
### Input
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224.
Inference was done using a JPEG image.
### Preprocessing
The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing.
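A sketch of these steps in Python (numpy and Pillow assumed; an Axon pipeline would mirror the same arithmetic):
```python
import numpy as np
from PIL import Image

img = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0            # scale to [0, 1]
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = (x - mean) / std                                     # channel-wise normalization
x = x.transpose(2, 0, 1)[np.newaxis, ...]                # -> (1, 3, 224, 224)
```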
### Output
The model outputs image scores for each of the 1000 classes of ImageNet.
### Postprocessing
The post-processing involves calculating the softmax probability scores for each class. You can also sort them to report the most probable classes. Check [imagenet_postprocess.py](../imagenet_postprocess.py) for code.
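For example, a minimal post-processing sketch (the random vector stands in for the model's raw (1000,) logit output):
```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.random.randn(1000).astype(np.float32)  # stand-in for model output
probs = softmax(logits)
top5 = np.argsort(probs)[::-1][:5]  # indices of the five most probable classes
```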
## Dataset
Dataset used for train and validation: [ImageNet (ILSVRC2012)](http://www.image-net.org/challenges/LSVRC/2012/). Check [imagenet_prep](../imagenet_prep.md) for guidelines on preparing the dataset.
## References
* **ResNetv1**
[Deep residual learning for image recognition](https://arxiv.org/abs/1512.03385)
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
* **ONNX source model**
[onnx/models vision/classification/resnet resnet18-v1-7.onnx](https://github.com/onnx/models/tree/main/vision/classification/resnet/README)
|
bigmorning/my-gpt-model
|
bigmorning
| 2022-03-22T20:32:08Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-22T14:15:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-gpt-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-gpt-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.3002
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 5.3002 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
blckwdw61/sysformver1
|
blckwdw61
| 2022-03-22T19:46:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-22T18:35:28Z |
# CES BERT sysform model
Fine-tuned cased BERT model.
|
elihoole/distilgpt2-ttds
|
elihoole
| 2022-03-22T19:41:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-22T12:52:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-ttds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-ttds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 4.5807 |
| No log | 2.0 | 80 | 4.4023 |
| No log | 3.0 | 120 | 4.3666 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.7.1
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anthonny/dehatebert-mono-spanish-finetuned-sentiments_reviews_politicos
|
anthonny
| 2022-03-22T17:57:11Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T15:44:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: robertuito-sentiment-analysis-hate-finetuned-sentiments_reviews_politicos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robertuito-sentiment-analysis-hate-finetuned-sentiments_reviews_politicos
This model is a fine-tuned version of [Hate-speech-CNERG/dehatebert-mono-spanish](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2559
- Accuracy: 0.9368
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.29 | 1.0 | 3595 | 0.2559 | 0.9368 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/garyshort
|
huggingtweets
| 2022-03-22T17:44:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/garyshort/1647971079915/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1326680694370734082/wjLz-oO4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gary Short</div>
<div style="text-align: center; font-size: 14px;">@garyshort</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gary Short.
| Data | Gary Short |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 94 |
| Short tweets | 321 |
| Tweets kept | 2833 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2vtmlhlj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @garyshort's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2pfbf1ys) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2pfbf1ys/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/garyshort')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Dahn/wav2vec2-large-xls-r-300m-turkish-colab
|
Dahn
| 2022-03-22T17:29:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-22T12:52:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3965
- Wer: 0.3807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.974 | 3.67 | 400 | 0.7102 | 0.7318 |
| 0.4216 | 7.34 | 800 | 0.4273 | 0.4941 |
| 0.1891 | 11.01 | 1200 | 0.4548 | 0.4864 |
| 0.1267 | 14.68 | 1600 | 0.4208 | 0.4082 |
| 0.0958 | 18.35 | 2000 | 0.4236 | 0.4033 |
| 0.0799 | 22.02 | 2400 | 0.4052 | 0.3829 |
| 0.0624 | 25.69 | 2800 | 0.4088 | 0.3875 |
| 0.0491 | 29.36 | 3200 | 0.3965 | 0.3807 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
msamogh/autonlp-cai-out-of-scope-649919116
|
msamogh
| 2022-03-22T15:27:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-19T21:40:42Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 2.438401649319185
---
# What do the class labels mean?
- 0: out of scope
- 1: in scope
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919116
- CO2 Emissions (in grams): 2.438401649319185
## Validation Metrics
- Loss: 0.5314930081367493
- Accuracy: 0.7526881720430108
- Precision: 0.8490566037735849
- Recall: 0.75
- AUC: 0.8515151515151514
- F1: 0.7964601769911505
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919116
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
esiebomajeremiah/autonlp-email-classification-657119381
|
esiebomajeremiah
| 2022-03-22T13:57:29Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:esiebomajeremiah/autonlp-data-email-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T13:54:29Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- esiebomajeremiah/autonlp-data-email-classification
co2_eq_emissions: 3.516233232503715
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 657119381
- CO2 Emissions (in grams): 3.516233232503715
## Validation Metrics
- Loss: 0.00037395773688331246
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/esiebomajeremiah/autonlp-email-classification-657119381
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("esiebomajeremiah/autonlp-email-classification-657119381", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("esiebomajeremiah/autonlp-email-classification-657119381", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
edwardjross/xlm-roberta-base-finetuned-panx-en
|
edwardjross
| 2022-03-22T13:33:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-22T13:30:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6918378678511938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3792
- F1: 0.6918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0639 | 1.0 | 74 | 0.5075 | 0.5539 |
| 0.491 | 2.0 | 148 | 0.4118 | 0.6510 |
| 0.355 | 3.0 | 222 | 0.3792 | 0.6918 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
edwardjross/xlm-roberta-base-finetuned-panx-de-fr
|
edwardjross
| 2022-03-22T13:22:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-22T13:12:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- F1: 0.8606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2819 | 1.0 | 1073 | 0.1800 | 0.8231 |
| 0.1484 | 2.0 | 2146 | 0.1655 | 0.8488 |
| 0.0928 | 3.0 | 3219 | 0.1686 | 0.8606 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
edwardjross/xlm-roberta-base-finetuned-panx-de
|
edwardjross
| 2022-03-22T13:06:25Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-22T12:33:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8644809364168419
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1360
- F1: 0.8645
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2528 | 1.0 | 787 | 0.1657 | 0.8244 |
| 0.1298 | 2.0 | 1574 | 0.1369 | 0.8555 |
| 0.0787 | 3.0 | 2361 | 0.1360 | 0.8645 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
willcai/wav2vec2_common_voice_accents_indian
|
willcai
| 2022-03-22T10:58:05Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-21T23:09:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_indian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_indian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5186 | 1.28 | 400 | 0.6937 |
| 0.3485 | 2.56 | 800 | 0.2323 |
| 0.2229 | 3.83 | 1200 | 0.2195 |
| 0.1877 | 5.11 | 1600 | 0.2147 |
| 0.1618 | 6.39 | 2000 | 0.2058 |
| 0.1434 | 7.67 | 2400 | 0.2077 |
| 0.132 | 8.95 | 2800 | 0.1995 |
| 0.1223 | 10.22 | 3200 | 0.2146 |
| 0.1153 | 11.5 | 3600 | 0.2117 |
| 0.1061 | 12.78 | 4000 | 0.2071 |
| 0.1003 | 14.06 | 4400 | 0.2219 |
| 0.0949 | 15.34 | 4800 | 0.2204 |
| 0.0889 | 16.61 | 5200 | 0.2162 |
| 0.0824 | 17.89 | 5600 | 0.2243 |
| 0.0784 | 19.17 | 6000 | 0.2323 |
| 0.0702 | 20.45 | 6400 | 0.2325 |
| 0.0665 | 21.73 | 6800 | 0.2334 |
| 0.0626 | 23.0 | 7200 | 0.2411 |
| 0.058 | 24.28 | 7600 | 0.2473 |
| 0.054 | 25.56 | 8000 | 0.2591 |
| 0.0506 | 26.84 | 8400 | 0.2577 |
| 0.0484 | 28.12 | 8800 | 0.2633 |
| 0.0453 | 29.39 | 9200 | 0.2692 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
saattrupdan/voxpopuli-wav2vec2-large-cv8-da
|
saattrupdan
| 2022-03-22T09:58:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:common_voice_8_0",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- da
license: cc-by-nc-4.0
tasks:
- automatic-speech-recognition
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: voxpopuli-wav2vec2-large-cv8-da
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_8_0
args: da
name: Danish Common Voice 8.0
metrics:
- type: wer
value: 40.54
- task:
type: automatic-speech-recognition
dataset:
type: Alvenir/alvenir_asr_da_eval
name: Alvenir ASR test dataset
metrics:
- type: wer
value: 40.66
---
# VoxPopuli-Wav2vec2-large-CV8-da
## Model description
This model is a fine-tuned version of the Swedish acoustic model [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) on the Danish part of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), containing ~6 crowdsourced hours of read-aloud Danish speech.
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 48.04 | 40.54 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 48.43 | 40.66 |
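A minimal transcription sketch, assuming the standard transformers ASR pipeline (this decodes without the 5-gram language model):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="saattrupdan/voxpopuli-wav2vec2-large-cv8-da",
)
# expects 16 kHz mono audio
print(asr("sample.wav")["text"])
```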
|
celine98/canine-s-finetuned-sst2
|
celine98
| 2022-03-22T09:47:45Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-21T22:35:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: canine-s-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8577981651376146
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-sst2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5259
- Accuracy: 0.8578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3524 | 1.0 | 4210 | 0.4762 | 0.8257 |
| 0.2398 | 2.0 | 8420 | 0.4169 | 0.8567 |
| 0.1797 | 3.0 | 12630 | 0.5259 | 0.8578 |
| 0.152 | 4.0 | 16840 | 0.5996 | 0.8532 |
| 0.1026 | 5.0 | 21050 | 0.6676 | 0.8578 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Yaxin/xlm-roberta-base-conll2003-ner
|
Yaxin
| 2022-03-22T08:11:52Z | 81 | 3 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-22T07:36:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-conll2003-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9459188783174762
- name: Recall
type: recall
value: 0.9537192864355436
- name: F1
type: f1
value: 0.94980306712478
- name: Accuracy
type: accuracy
value: 0.9911218410498034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-conll2003-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
- Precision: 0.9459
- Recall: 0.9537
- F1: 0.9498
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
aaraki/wav2vec2-base-demo-colab
|
aaraki
| 2022-03-22T07:43:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-22T04:44:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lazyturtl/WEC-types
|
lazyturtl
| 2022-03-22T04:54:04Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-22T04:53:55Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: WEC-types
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7830188870429993
---
# WEC-types
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Attenuators

#### Oscillating water column

#### Overtopping Devices

#### Point Absorber

|
mimicheng/codeparrot-ds
|
mimicheng
| 2022-03-22T03:45:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-21T19:59:48Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7397
- eval_runtime: 603.8598
- eval_samples_per_second: 154.281
- eval_steps_per_second: 4.822
- epoch: 0.08
- step: 5000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
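As a hedged usage sketch (assuming the repo id above), the checkpoint can be loaded with the text-generation pipeline; the prompt is illustrative:
```python
# Illustrative code-completion sketch for this GPT-2-based model.
from transformers import pipeline

generator = pipeline("text-generation", model="mimicheng/codeparrot-ds")
print(generator("def fibonacci(n):", max_length=64)[0]["generated_text"])
```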
|
BigSalmon/InformalToFormalLincoln29
|
BigSalmon
| 2022-03-22T03:35:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-22T03:29:31Z |
```
original: chrome extensions [MASK] accomplish everyday tasks.
infill: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
original: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
original:
```
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
|
StivenLancheros
| 2022-03-21T22:07:55Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-21T20:11:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2276
- Precision: 0.8078
- Recall: 0.8258
- F1: 0.8167
- Accuracy: 0.9629
## Model description
This model performs Named Entity Recognition for six entity tags (Sequence, Cell, Protein, Gene, Taxon, and Chemical) from the CRAFT (Colorado Richly Annotated Full Text) corpus in English. Entity tags have been normalized, replacing the original three-letter codes with full names, e.g. B-Protein, I-Chemical. The model is trained on augmented data created via entity replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontology for each entity class. Both datasets (original and augmented) were concatenated.
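A hedged inference sketch with the token-classification pipeline (the sentence is illustrative, not taken from CRAFT):
```python
# Minimal NER sketch; aggregation_strategy="simple" merges subword pieces into entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN",
    aggregation_strategy="simple",
)
print(ner("The BRCA1 protein is expressed in Homo sapiens cells."))
```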
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0842 | 1.0 | 2719 | 0.1765 | 0.7606 | 0.7785 | 0.7695 | 0.9542 |
| 0.0392 | 2.0 | 5438 | 0.1971 | 0.7990 | 0.7958 | 0.7974 | 0.9596 |
| 0.0138 | 3.0 | 8157 | 0.2094 | 0.8013 | 0.8196 | 0.8103 | 0.9620 |
| 0.0082 | 4.0 | 10876 | 0.2276 | 0.8078 | 0.8258 | 0.8167 | 0.9629 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/elonmusk-garyvee
|
huggingtweets
| 2022-03-21T19:57:10Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-21T19:55:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-garyvee/1647892564866/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Gary Vaynerchuk</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-garyvee</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Gary Vaynerchuk.
| Data | Elon Musk | Gary Vaynerchuk |
| --- | --- | --- |
| Tweets downloaded | 2200 | 3247 |
| Retweets | 102 | 712 |
| Short tweets | 671 | 842 |
| Tweets kept | 1427 | 1693 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/abt9l46e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-garyvee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-garyvee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rurupang/bert-base-finetuned-sts
|
rurupang
| 2022-03-21T19:23:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-21T15:10:45Z |
---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.8722017849942011
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4274
- Pearsonr: 0.8722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 365 | 0.5106 | 0.8429 |
| 0.1092 | 2.0 | 730 | 0.5466 | 0.8497 |
| 0.0958 | 3.0 | 1095 | 0.4123 | 0.8680 |
| 0.0958 | 4.0 | 1460 | 0.4336 | 0.8719 |
| 0.0661 | 5.0 | 1825 | 0.4274 | 0.8722 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
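A hedged usage sketch: assuming the head is a single-logit regression (as is typical for KLUE STS), a sentence pair can be scored as follows; the Korean pair is illustrative:
```python
# Hedged STS scoring sketch; assumes a single-logit regression head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "rurupang/bert-base-finetuned-sts"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # similarity score on the KLUE STS scale
print(score)
```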
|
Datadave09/DA-RoBERTa
|
Datadave09
| 2022-03-21T18:59:26Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-21T17:42:40Z |
---
license: apache-2.0
---
# Model description
This model corresponds to the paper "A Domain-adaptive Pre-training Approach for Language Bias Detection in News" (Krieger et al., 2022): https://github.com/Media-Bias-Group/A-Domain-adaptive-Pre-training-Approach-for-Language-BiasDetection-in-News
The model can be used for sequence classification of biased and non-biased language in news and media. It is initialized with *roberta-base* weights and fine-tuned on the *Wiki Neutrality Corpus* (Pryzant et al., 2020). More details on the training setup and experiments can be found in our paper.
# How to use
You can use the model with the PyTorch framework
```
#imports
!pip install transformers
!pip install openpyxl
import torch
import torch.nn as nn
import numpy as np
from transformers import RobertaTokenizer, RobertaModel

#define model class including binary classification layer
class RobertaClass(torch.nn.Module):
    def __init__(self):
        super(RobertaClass, self).__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.vocab_transform = torch.nn.Linear(768, 768)
        self.dropout = torch.nn.Dropout(0.2)
        self.classifier1 = torch.nn.Linear(768, 2)

    def forward(self, input_ids, attention_mask):
        output_1 = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        hidden_state = output_1[0]           # last hidden states: (batch, seq_len, 768)
        pooler = hidden_state[:, 0]          # representation of the <s> token
        pooler = self.vocab_transform(pooler)
        pooler = self.dropout(pooler)
        output = self.classifier1(pooler)    # logits: (non-biased, biased)
        return output

#load model parameters
weight_dict = torch.load('DA-Roberta.bin')

#initialize model with fine-tuned parameters
model = RobertaClass()
model.load_state_dict(weight_dict)
model.eval()  # disable dropout for inference

#exemplary bias classification with an instance extracted from the BABE dataset (Spinde et al., 2021)
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
inputs = tokenizer("A cop shoots a Black man, and a police union flexes its muscle", return_tensors="pt")
outputs = model(**inputs)
if int(torch.argmax(outputs)) == 1:
    print("Biased")
else:
    print("Non-biased")
```
# Cite as
```
@InProceedings{Krieger2022,
author={Krieger, David and Spinde, Timo and Ruas, Terry and Kulshrestha, Juhi and Gipp, Bela},
booktitle={2022 ACM/IEEE Joint Conference on Digital Libraries (JCDL)},
title={A Domain-adaptive Pre-training Approach for Language Bias Detection in News},
year={2022},
address = "Cologne,Germany"
}
```
|
Yanjie/message-preamble
|
Yanjie
| 2022-03-21T18:33:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This is the concierge preamble model, fine-tuned from the DistilBERT uncased model.
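A hedged usage sketch with the text-classification pipeline (the label names depend on the fine-tuning setup and are not documented here; the input message is illustrative):
```python
# Minimal classification sketch for this DistilBERT-based model.
from transformers import pipeline

classifier = pipeline("text-classification", model="Yanjie/message-preamble")
print(classifier("Hi there, hope you are doing well!"))
```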
|
Yanjie/message-intent
|
Yanjie
| 2022-03-21T18:08:08Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This is the concierge intent model, fine-tuned from the DistilBERT uncased model.
|
ianMconversica/autonlp-test-654919306
|
ianMconversica
| 2022-03-21T17:29:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:McIan91/autonlp-data-test",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-21T17:28:50Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- McIan91/autonlp-data-test
co2_eq_emissions: 0.7013851565380207
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 654919306
- CO2 Emissions (in grams): 0.7013851565380207
## Validation Metrics
- Loss: 2.5570242404937744
- Rouge1: 72.7273
- Rouge2: 44.4444
- RougeL: 72.7273
- RougeLsum: 72.7273
- Gen Len: 17.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/McIan91/autonlp-test-654919306
```
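Alternatively, a hedged local sketch with the `transformers` summarization pipeline (the repo id below is the model's current location, while the cURL above uses the original namespace; the input text is illustrative):
```python
# Local summarization sketch for this T5-based AutoNLP model.
from transformers import pipeline

summarizer = pipeline("summarization", model="ianMconversica/autonlp-test-654919306")
print(summarizer("I love AutoNLP because it automates model training end to end.")[0]["summary_text"])
```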
|
espnet/marathi_openslr64
|
espnet
| 2022-03-21T16:23:56Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:mr_openslr64",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-21T16:17:30Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- mr_openslr64
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/marathi_openslr64`
This model was trained by Sujay Suresh Kumar using the mr_openslr64 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 91325a1e58ca0b13494b94bf79b186b095fe0b58
pip install -e .
cd egs2/mr_openslr64/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/marathi_openslr64
```
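A hedged Python alternative (assumes `espnet_model_zoo` and `soundfile` are installed and that the tag resolves through the model zoo; the WAV filename is a placeholder):
```python
# Minimal ESPnet2 inference sketch; expects 16 kHz mono audio.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("espnet/marathi_openslr64")
speech, rate = soundfile.read("sample_16k.wav")  # placeholder recording
text, *_ = speech2text(speech)[0]  # best hypothesis
print(text)
```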
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 21 16:06:03 UTC 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.11.0+cu102`
- Git hash: `91325a1e58ca0b13494b94bf79b186b095fe0b58`
- Commit date: `Mon Mar 21 00:40:52 2022 +0000`
## asr_train_asr_conformer_xlsr_raw_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_asr_model_valid.acc.ave/marathi_test|299|3625|72.9|22.5|4.7|1.7|28.9|88.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_asr_model_valid.acc.ave/marathi_test|299|20557|91.4|3.1|5.5|1.9|10.5|88.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_asr_model_valid.acc.ave/marathi_test|299|13562|86.5|6.3|7.1|1.4|14.9|88.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_xlsr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_xlsr_raw_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 3
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_bpe150_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/marathi_train_sp/wav.scp
- speech
- sound
- - dump/raw/marathi_train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/marathi_dev/wav.scp
- speech
- sound
- - dump/raw/marathi_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- ▁
- ा
- ी
- े
- त
- र
- ं
- न
- क
- ्
- व
- ि
- ल
- ▁म
- स
- ो
- श
- द
- च
- म
- ▁अ
- ▁आ
- ण
- ु
- ला
- ह
- ▁आहे
- य
- ▁स
- ग
- ▁ह
- ्या
- चा
- ▁प
- ड
- ▁क
- प
- ट
- ▁ब
- ज
- र्
- ्र
- ▁?
- ▁ज
- ब
- ून
- वा
- ▁एक
- ▁या
- ळ
- ात
- ख
- ध
- ▁ति
- ठ
- ल्या
- ले
- ू
- ▁तुम्हाला
- ां
- ार
- घ
- ची
- ▁अस
- थ
- ▁का
- ने
- णि
- ॅ
- ▁त
- ▁परवा
- ▁ते
- ली
- ▁गेल
- ळा
- ष
- ▁कर
- .
- च्या
- ▁न
- वर
- ▁त्या
- ▁प्र
- ▁करू
- ▁ग
- ्ट
- ई
- झ
- ▁फ
- ाय
- क्ष
- ▁काय
- पूर
- ▁होती
- मध
- ▁तिथ
- ▁काही
- ए
- ▁वि
- ▁दोन
- ▁महिन्या
- व्हा
- तील
- जार
- ▁नाही
- ँ
- ▁पुत
- ॉ
- ▁झाला
- ▁दिसल
- ▁साल
- ▁रस्त्यावर
- स्त
- जवळ
- न्म
- मध्य
- ऊ
- ▁इथे
- ▁तुमच
- ▁शकते
- मान
- ▁उद्
- फ
- ै
- ढ
- ','
- इ
- ौ
-
- ृ
- ओ
- ः
- ॲ
- आ
- '-'
- ञ
- औ
- '!'
- ऑ
- ऱ
- ऐ
- छ
- उ
- '?'
- भ
- अ
- ऋ
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 4
linear_units: 1024
num_blocks: 3
dropout_rate: 0.3
positional_dropout_rate: 0.3
attention_dropout_rate: 0.3
input_layer: conv2d
normalize_before: true
macaron_style: false
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 17
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 1024
num_blocks: 3
dropout_rate: 0.3
positional_dropout_rate: 0.3
self_attention_dropout_rate: 0.3
src_attention_dropout_rate: 0.3
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/rupertboneham-rupertskids-survivorcbs
|
huggingtweets
| 2022-03-21T13:31:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-21T13:26:08Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rupertboneham-rupertskids-survivorcbs/1647869465531/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2879716355/bd3a0d75f2ec004c61cf470e66895eda_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/984777181963448321/GZEqLnVr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488244197467381765/3F2BzfCJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rupert Boneham & Rupert Boneham & SURVIVOR</div>
<div style="text-align: center; font-size: 14px;">@rupertboneham-rupertskids-survivorcbs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rupert Boneham & Rupert Boneham & SURVIVOR.
| Data | Rupert Boneham | Rupert Boneham | SURVIVOR |
| --- | --- | --- | --- |
| Tweets downloaded | 3139 | 352 | 3222 |
| Retweets | 710 | 151 | 551 |
| Short tweets | 142 | 17 | 540 |
| Tweets kept | 2287 | 184 | 2131 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2m3rl64a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rupertboneham-rupertskids-survivorcbs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o5vktei) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o5vktei/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rupertboneham-rupertskids-survivorcbs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Dahn/wav2vec2-base-timit-demo-colab
|
Dahn
| 2022-03-21T13:04:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-21T11:09:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4796
- Wer: 0.3434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4323 | 4.0 | 500 | 1.3259 | 0.9859 |
| 0.5966 | 8.0 | 1000 | 0.4682 | 0.4442 |
| 0.2187 | 12.0 | 1500 | 0.4490 | 0.3875 |
| 0.1274 | 16.0 | 2000 | 0.4595 | 0.3727 |
| 0.0859 | 20.0 | 2500 | 0.4819 | 0.3683 |
| 0.0602 | 24.0 | 3000 | 0.4524 | 0.3514 |
| 0.0449 | 28.0 | 3500 | 0.4796 | 0.3434 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
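A hedged transcription sketch with the ASR pipeline (wav2vec 2.0 expects 16 kHz audio; the filename is a placeholder):
```python
# Minimal ASR sketch for this fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Dahn/wav2vec2-base-timit-demo-colab")
print(asr("sample_16k.wav")["text"])
```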
|
beston91/gpt2-xl_ft_logits_1k_2
|
beston91
| 2022-03-21T11:27:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-20T22:16:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_1k_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_1k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 6.0743 |
| No log | 1.91 | 10 | 6.1649 |
| No log | 2.91 | 15 | 6.3068 |
| No log | 3.91 | 20 | 6.4793 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59307861328125
|
beston91/gpt2-xl_ft_logits_5k_2
|
beston91
| 2022-03-21T10:16:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-20T23:02:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_5k_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_5k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.1106 |
| No log | 1.99 | 54 | 6.1400 |
| No log | 2.99 | 81 | 6.1875 |
| No log | 3.99 | 108 | 6.2407 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59415626525879
|
Ameer05/test
|
Ameer05
| 2022-03-21T09:35:03Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-21T08:16:45Z |
---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Ameer05/tokenizer-repo](https://huggingface.co/Ameer05/tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6109
- Rouge1: 54.9442
- Rouge2: 45.3299
- Rougel: 50.5219
- Rougelsum: 53.6475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 2.3705 | 53.62 | 44.3835 | 49.6135 | 52.693 |
| No log | 1.91 | 10 | 1.9035 | 47.478 | 37.0934 | 39.7935 | 45.1881 |
| No log | 2.91 | 15 | 1.7990 | 54.2488 | 45.0782 | 49.8421 | 52.7564 |
| No log | 3.91 | 20 | 1.7125 | 55.7903 | 46.7554 | 52.2733 | 54.9389 |
| 2.4456 | 4.91 | 25 | 1.6421 | 52.2279 | 43.4391 | 49.6955 | 51.2915 |
| 2.4456 | 5.91 | 30 | 1.6102 | 55.8598 | 47.3293 | 53.1337 | 54.8596 |
| 2.4456 | 6.91 | 35 | 1.6164 | 53.7902 | 44.6622 | 49.5045 | 52.2304 |
| 2.4456 | 7.91 | 40 | 1.6015 | 51.5597 | 42.0333 | 47.9639 | 50.1154 |
| 1.239 | 8.91 | 45 | 1.6067 | 53.0301 | 43.7214 | 49.0227 | 51.8109 |
| 1.239 | 9.91 | 50 | 1.6109 | 54.9442 | 45.3299 | 50.5219 | 53.6475 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
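A hedged usage sketch loading the checkpoint directly (the input string is a placeholder for the report text to summarize):
```python
# Direct seq2seq sketch for this BART-based summarizer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "Ameer05/test"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Replace this with the report text to summarize.", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```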
|
nickmuchi/segformer-b4-finetuned-segments-sidewalk
|
nickmuchi
| 2022-03-21T07:32:43Z | 66 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"dataset:segments/sidewalk-semantic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-20T06:54:20Z |
---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
widget:
- src: https://drive.google.com/uc?id=1-ae6Vtvs-fO1j0D2kxEDX4rKxRipda2j
example_title: Sidewalk with traffic
- src: https://drive.google.com/uc?id=1-dwxxF6LzbEvATr_mwvrAjot-DdBLAM4
example_title: Sidewalk with buildings
datasets:
- segments/sidewalk-semantic
model-index:
- name: segformer-b4-finetuned-segments-sidewalk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b4-finetuned-segments-sidewalk
This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6463
- Mean Accuracy: 0.5168
- Mean Iou: 0.4317
- Overall Accuracy: 0.8895
- Per Category Accuracy: [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0]
- Per Category Iou: [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0]
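A hedged inference sketch (not part of the autogenerated card): SegFormer logits come out at 1/4 of the input resolution and are upsampled before the per-pixel argmax; the image path is a placeholder.
```python
# Semantic segmentation sketch; produces an (H, W) map of class indices.
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

name = "nickmuchi/segformer-b4-finetuned-segments-sidewalk"
feature_extractor = SegformerFeatureExtractor.from_pretrained(name)
model = SegformerForSemanticSegmentation.from_pretrained(name)

image = Image.open("sidewalk.jpg")  # placeholder image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
seg_map = upsampled.argmax(dim=1)[0]  # (H, W) class indices
```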
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy | Per Category Accuracy | Per Category Iou |
|:-------------:|:-----:|:-----:|:---------------:|:-------------:|:--------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 1.0086 | 0.25 | 100 | 0.9195 | 0.2302 | 0.1742 | 0.7405 | [nan, 0.754391784765388, 0.8738098328493714, 0.0, 0.6095047025690915, 0.04406067496837279, nan, 0.11344860810198232, 0.03344878303363856, 0.0, 0.9451322667227594, 0.0, 0.0, 0.0, 0.0, 8.118464635968046e-06, 0.0, 0.0, 0.8406900175689528, 0.0, 0.33313290995723815, 0.007980320315659196, 0.0, nan, 0.0, 0.01001465431517245, 0.0, 0.0, 0.9094842682836028, 0.9104621468677264, 0.9500069670140131, 0.0, 0.0, 0.030522857924993155, 0.0] | [nan, 0.5181348731869903, 0.7666613623083653, 0.0, 0.3145052392920833, 0.040279298027504136, nan, 0.09896279300890763, 0.0332534621335044, 0.0, 0.707185048053476, 0.0, 0.0, 0.0, 0.0, 8.11839872703508e-06, 0.0, 0.0, 0.6129636976206597, 0.0, 0.21304181051016494, 0.007979819175153202, 0.0, nan, 0.0, 0.009972716399085856, 0.0, 0.0, 0.8032595523715207, 0.5644424403160349, 0.8548000615746258, 0.0, 0.0, 0.02810796628175876, 0.0] |
| 0.6465 | 0.5 | 200 | 0.7250 | 0.2963 | 0.2416 | 0.7963 | [nan, 0.8965158332325365, 0.9203420775747997, 0.0005677570093457944, 0.42947876549598557, 0.20108992228390948, nan, 0.6149826174335852, 0.6106893770460692, 0.0, 0.9320756176369465, 0.0, 0.0, 0.0, 0.0, 0.23413652010131844, 0.0, 0.0, 0.9437607244807804, 0.0, 0.2033741348512844, 0.2597617238717267, 0.0, nan, 0.0, 0.21746480347516617, 0.0, 0.0, 0.8793454644762622, 0.8380851985041863, 0.9445753860505853, 0.0, 0.0, 0.35629926758549024, 0.0] | [nan, 0.6645359168510458, 0.8064416600263559, 0.000566105647428005, 0.4116417722563792, 0.17504073239500048, nan, 0.34611894249410324, 0.4768988514264542, 0.0, 0.7872815412923856, 0.0, 0.0, 0.0, 0.0, 0.22760454893418883, 0.0, 0.0, 0.6497218142931416, 0.0, 0.16433182458127107, 0.24025960226620707, 0.0, nan, 0.0, 0.1865917623179034, 0.0, 0.0, 0.8237045305017561, 0.6485287252686867, 0.8916263487480074, 0.0, 0.0, 0.23161660227979464, 0.0] |
| 0.6777 | 1.0 | 400 | 0.6645 | 0.3343 | 0.2755 | 0.8205 | [nan, 0.8955600256602996, 0.9528284776336102, 0.20619042056074766, 0.4578573681184769, 0.34171859852352976, nan, 0.5150824142204389, 0.8000759121317076, 0.0, 0.9308408861203066, 0.0, 0.0, 0.0, 0.0, 0.8202247191011236, 0.0, 0.0, 0.931415684238172, 0.0, 0.22729327499111263, 0.2807173404242283, 0.0, nan, 0.0, 0.3332993143873973, 0.0, 0.0, 0.904612735522824, 0.9085503237620377, 0.9531456202767545, 0.0, 0.0, 0.2395403274915222, 0.0] | [nan, 0.7091852218081763, 0.8215012473174504, 0.20316384883142716, 0.449169741519482, 0.2820828827399737, nan, 0.4034439348068946, 0.5801054036574794, 0.0, 0.8406284073872154, 0.0, 0.0, 0.0, 0.0, 0.5491287380561565, 0.0, 0.0, 0.6833033543785748, 0.0, 0.196701947180513, 0.26816266986235426, 0.0, nan, 0.0, 0.2624543573765898, 0.0, 0.0, 0.8319417451247856, 0.6328739755697549, 0.9148380247362377, 0.0, 0.0, 0.18610354253000033, 0.0] |
| 0.4931 | 1.25 | 500 | 0.6513 | 0.3693 | 0.2930 | 0.8232 | [nan, 0.8195930838546497, 0.9565826472101743, 0.3660338785046729, 0.502483997738174, 0.5101274819814215, nan, 0.6120499018406388, 0.8168524932390757, 0.0, 0.9680832750475287, 0.0, 0.0, 0.0, 0.0, 0.7678687406637656, 0.0, 0.0, 0.9132467503439181, 0.07463699730127982, 0.3080053777834345, 0.3700341269744017, 0.0, nan, 0.0, 0.3144554351808238, 0.0, 0.0, 0.8719933435243034, 0.9280312013943278, 0.9461371807749148, 0.0, 0.3623930581804142, 0.40862556355693114, 0.0] | [nan, 0.7255301419742964, 0.8322765227346863, 0.3328323011716717, 0.4866977152337555, 0.31646114214929966, nan, 0.4116248877039441, 0.584768070212383, 0.0, 0.7940437031847611, 0.0, 0.0, 0.0, 0.0, 0.5384221282312557, 0.0, 0.0, 0.7148576049798162, 0.06922710729587371, 0.23689839512021127, 0.330131038978254, 0.0, nan, 0.0, 0.25964434649208096, 0.0, 0.0, 0.8276496500163791, 0.5924934568973941, 0.9145898275185997, 0.0, 0.10460157785142388, 0.3046522912622977, 0.0] |
| 0.1718 | 2.0 | 800 | 0.5338 | 0.3766 | 0.3117 | 0.8521 | [nan, 0.9149980619048741, 0.9439616375983239, 0.49970093457943926, 0.7343188057936092, 0.4654595153245685, nan, 0.4401632944315461, 0.7951368790624852, 0.0, 0.9516775700030986, 0.0, 0.0, 0.0, 0.0, 0.7842599207637851, 0.0, 0.0, 0.9120325078402151, 0.0, 0.5436783980174178, 0.289193941696178, 0.0, nan, 0.0, 0.4040691893023499, 0.04438191043850125, 0.0, 0.9289921718405059, 0.9105179916825697, 0.9579859465374478, 0.0, 0.00014225040134934668, 0.5310102962619485, 0.0] | [nan, 0.7682867926029272, 0.863978713337328, 0.3619354489331745, 0.619807980106986, 0.4001297195410576, nan, 0.37693255173950874, 0.6055069405805374, 0.0, 0.8443884543167844, 0.0, 0.0, 0.0, 0.0, 0.5757144134211389, 0.0, 0.0, 0.7512958252799772, 0.0, 0.35684944134400076, 0.2822025918120264, 0.0, nan, 0.0, 0.3086991377431782, 0.04423000485801351, 0.0, 0.8578322873273115, 0.6920597473565505, 0.9258143343645202, 0.0, 0.00013209541062801931, 0.3399454223242722, 0.0] |
| 1.7925 | 2.25 | 900 | 0.5745 | 0.3877 | 0.3157 | 0.8463 | [nan, 0.9373443718928436, 0.8936817705653165, 0.5237184579439252, 0.785620810686892, 0.5932309765570626, nan, 0.5731998228133042, 0.7751909664563268, 0.0, 0.9330254836699918, 0.0, 0.0, 0.0, 0.0, 0.8874780801454829, 0.0, 0.0, 0.9253989025665076, 0.0, 0.49743326413606553, 0.3720606075459213, 0.0, nan, 0.0, 0.362670748940179, 0.2263189382021227, 0.0, 0.9355852115710428, 0.9121195658169062, 0.9653801272784691, 0.0, 0.09587677050945966, 0.21074794549629322, 0.0] | [nan, 0.7666762008063966, 0.8459820722288737, 0.35589376130270695, 0.6602856629180212, 0.391087786259542, nan, 0.4283483218139711, 0.618615992154992, 0.0, 0.8563419873974479, 0.0, 0.0, 0.0, 0.0, 0.4695442264821982, 0.0, 0.0, 0.7387838557909564, 0.0, 0.3568544684209477, 0.3548962568907604, 0.0, nan, 0.0, 0.28509334019028026, 0.21794051124482566, 0.0, 0.8588025306782998, 0.6960344960020876, 0.927551192360457, 0.0, 0.09183812508516147, 0.18221393560509547, 0.0] |
| 0.4287 | 2.5 | 1000 | 0.5140 | 0.4156 | 0.3337 | 0.8596 | [nan, 0.9114284539509796, 0.9599424299786812, 0.3729602803738318, 0.6955020648206622, 0.6337076451002155, nan, 0.648796319756489, 0.9076149357119134, 0.0, 0.9333320442069727, 0.0, 0.0, 0.0, 0.0, 0.837638825745275, 0.0, 0.0, 0.8487128760410935, 0.14962168247818672, 0.7450834097721757, 0.4416333770387344, 0.0, nan, 0.005162707675408485, 0.4304364892447794, 0.29855310097272386, 0.0, 0.9243997842101966, 0.9100753698167738, 0.9780073694330464, 0.0, 0.3377837387469772, 0.3283183517042185, 0.0] | [nan, 0.8056652041667661, 0.868478873207236, 0.36872340720413566, 0.648560287656455, 0.4227995307199668, nan, 0.5211383920382058, 0.5417303836612635, 0.0, 0.8614512323591124, 0.0, 0.0, 0.0, 0.0, 0.4902451772308277, 0.0, 0.0, 0.7414797203702529, 0.1034994187677877, 0.37103542329614997, 0.38941938864817555, 0.0, nan, 0.004775330844065127, 0.3339817219387496, 0.27392303157209946, 0.0, 0.8695462814099766, 0.7123344518279238, 0.9249476057387171, 0.0, 0.15441354067963511, 0.2686663032210652, 0.0] |
| 0.2477 | 2.75 | 1100 | 0.5852 | 0.3976 | 0.3245 | 0.8501 | [nan, 0.9240898770490549, 0.9130342916084687, 0.5360268691588785, 0.6767027987344469, 0.5151102302165186, nan, 0.6523417772790812, 0.8782321962328604, 0.0, 0.9459085723287141, 0.01212233473285585, 0.0, 0.0, 0.0, 0.8298613366240176, 0.0, 0.0, 0.8996769125664682, 0.0046441166244474245, 0.58637589184745, 0.4359797566385237, 0.0, nan, 0.0, 0.4451038886272047, 0.26994748620682013, 0.0, 0.9522730369995648, 0.9058973503358962, 0.9744264856283144, 0.024141075054913176, 0.024040317828039587, 0.315675681715336, 0.0] | [nan, 0.7635041179698989, 0.8504428879888529, 0.32134395517814934, 0.5814428391874907, 0.4398125968608028, nan, 0.5183108660060791, 0.5876442483214019, 0.0, 0.8637126471579993, 0.010904378413403684, 0.0, 0.0, 0.0, 0.5582717546245474, 0.0, 0.0, 0.7543635882159604, 0.004548919124920941, 0.3707771520336274, 0.37139606254827867, 0.0, nan, 0.0, 0.32640450731902027, 0.25674365674787153, 0.0, 0.8589069009951039, 0.7216899081490464, 0.9303705560523882, 0.023933704665274814, 0.02273469779955799, 0.24717820737291407, 0.0] |
| 0.2092 | 3.5 | 1400 | 0.5305 | 0.4215 | 0.3450 | 0.8615 | [nan, 0.8854690236777607, 0.9752597083363964, 0.4837301401869159, 0.7543174059151941, 0.32120495047431574, nan, 0.6121067808383275, 0.8640129050623903, 0.006110443680351299, 0.9472197081638014, 0.22567300568041493, 0.0, 0.0, 0.0, 0.849337533285705, 0.0, 0.0, 0.9323370763681338, 0.09924833192602527, 0.4992824257958052, 0.5897763059541461, 0.0, nan, 0.005025401620211451, 0.5194038833935207, 0.26516141898030177, 0.0, 0.9098213390526053, 0.9140251839431679, 0.9696367307434691, 0.0, 0.46129773009002417, 0.39953043905763785, 0.0] | [nan, 0.8279523588823188, 0.8503094621684615, 0.4166789099025304, 0.6531647345358885, 0.2970569371138754, nan, 0.4891076127233826, 0.6267720763107083, 0.0060749588138385505, 0.8628731375345856, 0.1638621555382868, 0.0, 0.0, 0.0, 0.5868382377688277, 0.0, 0.0, 0.766351782387915, 0.08906272053962098, 0.3548571571167739, 0.42844759670807536, 0.0, nan, 0.004661470273574813, 0.3559905085937402, 0.24649831094998764, 0.0, 0.8706735405566627, 0.7172875061476175, 0.937101627261161, 0.0, 0.18277266944717308, 0.30403604315996224, 0.0] |
| 0.1763 | 3.75 | 1500 | 0.5284 | 0.4184 | 0.3549 | 0.8725 | [nan, 0.9155522786024052, 0.9647682266779387, 0.44949532710280377, 0.7917047766525447, 0.5148885009996292, nan, 0.6544609508444807, 0.8639037813730607, 0.006400430838062886, 0.9591118988406824, 0.21581460442907713, 0.0, 0.0, 0.0, 0.8629440800155874, 0.0, 0.0, 0.9189088001847752, 0.0, 0.553022223587637, 0.46456492702831864, 0.0, nan, 0.09048469037484554, 0.4453708065107029, 0.3956482240588509, 0.0, 0.9463804808607508, 0.8827003794689641, 0.9646183286805874, 0.0, 0.10191225182385336, 0.42574316887992536, 0.0] | [nan, 0.8411073731152799, 0.8690976727110442, 0.4122661523625844, 0.6761261173524866, 0.4325420396336731, nan, 0.5235010874548043, 0.6267662599177323, 0.006377182482354398, 0.8589461626478264, 0.21441570391575504, 0.0, 0.0, 0.0, 0.5785872529434498, 0.0, 0.0, 0.7644870697544361, 0.0, 0.3931242258826368, 0.4137160566746283, 0.0, nan, 0.07477420233286435, 0.3486446014515762, 0.35308773803167826, 0.0, 0.8775350307334798, 0.7615382190401359, 0.9362335277343948, 0.0, 0.08161239401780339, 0.3123361865981938, 0.0] |
| 0.227 | 4.0 | 1600 | 0.5923 | 0.4426 | 0.3538 | 0.8544 | [nan, 0.9577374173182539, 0.9166854278467985, 0.1959217289719626, 0.7810987315371373, 0.5809225413617377, nan, 0.5835888579214346, 0.8662428239312995, 0.024607481668668958, 0.960621119945819, 0.44992590763151397, 0.0, 0.0, 0.0, 0.890757939858414, 0.0, 0.0, 0.8824976680624833, 0.23107998476795974, 0.6677916708726317, 0.5485129952087443, 0.0, nan, 0.13447755045997528, 0.4840215627780395, 0.4094524827723738, 0.0, 0.9258667409261705, 0.8784809934585728, 0.9680485743444954, 0.0, 0.5403279887825397, 0.2843078375615234, 0.0] | [nan, 0.732742632898181, 0.85248637631468, 0.1937195271972472, 0.6916132972252533, 0.4613544304478555, nan, 0.5019837033874182, 0.6339381818434339, 0.024391746227286727, 0.8507334888775837, 0.3399262956570416, 0.0, 0.0, 0.0, 0.5118086361876507, 0.0, 0.0, 0.7596215991272331, 0.14059847786558677, 0.3924964359231432, 0.4511581321221818, 0.0, nan, 0.11381225741975969, 0.3543174804464886, 0.36413975210357263, 0.0, 0.8783724167054704, 0.7445500851078998, 0.9377100490542223, 0.0, 0.1494074611014649, 0.24185599444907813, 0.0] | |
| 0.3219 | 4.75 | 1900 | 0.5306 | 0.4360 | 0.3684 | 0.8771 | [nan, 0.9383015101174155, 0.9581139041020363, 0.4607803738317757, 0.811509517207101, 0.6291153866526402, nan, 0.6505845609717001, 0.814323670351568, 0.021541903144289325, 0.9406027168809682, 0.41314727916357946, 0.0, 0.0, 0.0, 0.8354955510813795, 0.0, 0.0, 0.9418887586641801, 0.05121773539297008, 0.6343575406735104, 0.518250578994449, 0.0, nan, 0.027131676506933957, 0.4585466059559324, 0.39812988854667525, 0.0, 0.9202410996786, 0.895342680330491, 0.9736189575948254, 0.00016059513448547392, 0.336889593367067, 0.32415208076113006, 0.0] | [nan, 0.8286943759948178, 0.8911330146359255, 0.44085585238189445, 0.7563455702043241, 0.44281982228819555, nan, 0.5389345827619121, 0.6390151642075557, 0.02125355077350663, 0.8721853143259732, 0.34406869718732325, 0.0, 0.0, 0.0, 0.6106328062420269, 0.0, 0.0, 0.7642481786905918, 0.04822404265103627, 0.40217085841005906, 0.4365575304022451, 0.0, nan, 0.02300777793302594, 0.35943746679548483, 0.36207556675062974, 0.0, 0.8758467465629671, 0.7286601531442717, 0.9422882468777368, 0.00016028416831905857, 0.18664925297515172, 0.274341743647937, 0.0] | |
| 0.3758 | 5.25 | 2100 | 0.5413 | 0.4400 | 0.3618 | 0.8749 | [nan, 0.9446099997724584, 0.9535776804748952, 0.5333586448598131, 0.7118822151738956, 0.5725146926401914, nan, 0.637704053404208, 0.8958248327560848, 0.02011268072413936, 0.9449676672959805, 0.4536305260558163, 0.0, 0.0, 0.0, 0.8527716438267194, 0.0, 0.0, 0.9263943868758329, 0.13527541846719315, 0.6231382204452325, 0.5343291629394538, 0.0, nan, 0.07845667993958534, 0.48360548490082167, 0.39496133478097095, 0.0, 0.9342636737434504, 0.9081380373512183, 0.9754223113378334, 0.0, 0.0686053364221992, 0.4949887428280921, 0.0] | [nan, 0.8421459412186475, 0.884886678991681, 0.3243137842681656, 0.6975183850797184, 0.4470212561315764, nan, 0.5491953906967838, 0.5880944000946866, 0.01971493543409405, 0.8720965863289499, 0.2829941580535405, 0.0, 0.0, 0.0, 0.5648458841496203, 0.0, 0.0, 0.7876641278543601, 0.11773309221380866, 0.4507472099997672, 0.4306682617343027, 0.0, nan, 0.053795025325274436, 0.35687388479928317, 0.3506028598965402, 0.0, 0.8763044901374653, 0.7342806685419377, 0.9417441335611155, 0.0, 0.05263732322996086, 0.3527909231538019, 0.0] |
| 0.1962 | 6.0 | 2400 | 0.5252 | 0.4591 | 0.3755 | 0.8678 | [nan, 0.8788767058796604, 0.9301585587737999, 0.5368457943925233, 0.8328600223823257, 0.6594750437607246, nan, 0.7274099889861577, 0.8314845566257058, 0.20671941671154564, 0.9452567774639331, 0.5536552235119783, 0.0, 0.0, 0.0, 0.8969685653049295, 0.0, 0.0, 0.9273548947094251, 0.04859351976026093, 0.6165535079211122, 0.5024186037962429, 0.0, nan, 0.07840175751750653, 0.49256293504998166, 0.4105160532671556, 0.0, 0.928572042963352, 0.9119196275909236, 0.976082967184019, 0.09759262712918065, 0.23430673250828102, 0.4679128700481014, 0.0] | [nan, 0.8020983983063393, 0.8683865888896747, 0.4544978013913642, 0.6680523786513721, 0.4517445785165809, nan, 0.5857034011566181, 0.6746845091894639, 0.18334129404416358, 0.8638403093611754, 0.3497406295097313, 0.0, 0.0, 0.0, 0.5136113874503752, 0.0, 0.0, 0.7818072530904586, 0.04626054062573883, 0.40338464571865573, 0.41853055526845995, 0.0, nan, 0.05885020509966401, 0.3764221220090192, 0.37385233165849424, 0.0, 0.8760216287329546, 0.7184759765101966, 0.9447723343539753, 0.07888984275215143, 0.17396158662623154, 0.3506487661563549, 0.0] |
| 0.2721 | 6.25 | 2500 | 0.5120 | 0.4726 | 0.3905 | 0.8834 | [nan, 0.9352277032235452, 0.9553332100455781, 0.5201098130841122, 0.8315588432600179, 0.6507746356557826, nan, 0.7171028251625792, 0.8676946434502064, 0.12399022329011143, 0.9414992885437384, 0.5631225817074175, 0.0, 0.0, 0.0, 0.8815434824965902, 0.0, 0.0, 0.9265160801760165, 0.12371893574396928, 0.6983379489227609, 0.496123187961817, 0.0, nan, 0.1353837704242757, 0.5335426806929398, 0.5267111298220735, 0.0, 0.9267000099723489, 0.9157963608485102, 0.9708294620227798, 0.0039371710389987154, 0.44802779979272084, 0.43061615557802646, 0.0] | [nan, 0.847290915944923, 0.8918843187400161, 0.4215259288995603, 0.7694117638497967, 0.498788432969163, nan, 0.5567520477680967, 0.6726198795136411, 0.11618337797445752, 0.8753637372987935, 0.42321077786886513, 0.0, 0.0, 0.0, 0.581673157378788, 0.0, 0.0, 0.7933263418076343, 0.10532064834390416, 0.437053368284101, 0.4288208971032145, 0.0, nan, 0.09955372468245795, 0.3973712316699539, 0.442531089433316, 0.0, 0.880946087123613, 0.7345359613309864, 0.9452321649786941, 0.003849095209395844, 0.23329171252010497, 0.3386007935784502, 0.0] |
| 0.2409 | 6.5 | 2600 | 0.5224 | 0.4636 | 0.3840 | 0.8786 | [nan, 0.8731382676849351, 0.9738163801183563, 0.5331343457943926, 0.8196854363098576, 0.6540081867354192, nan, 0.6300072908533401, 0.8875978554822792, 0.13449190107295247, 0.955765201040042, 0.6083600889108421, 0.0, 0.03281733746130031, 0.0, 0.8703400012989544, 0.0, 0.0, 0.9262836625295774, 0.08389211741916257, 0.6663345782989761, 0.5452994228436286, 0.0, nan, 0.13288480021968968, 0.47811535039514313, 0.4147924929649243, 0.0, 0.9382028859601423, 0.8756597961457425, 0.965266610679491, 0.010467176426706453, 0.4342701538336483, 0.3917412023665201, 0.0] | [nan, 0.8209592404927408, 0.8860938595226477, 0.41218836114746504, 0.7196016259460952, 0.4954368536125842, nan, 0.545313357840212, 0.6491223200313668, 0.12371625097650668, 0.8633659080664855, 0.4708871648638746, 0.0, 0.03281733746130031, 0.0, 0.5802203868677137, 0.0, 0.0, 0.7907500494259085, 0.06952381605757291, 0.447113968783744, 0.44327869995554786, 0.0, nan, 0.08728984775236309, 0.38119151688382136, 0.37855655092920265, 0.0, 0.8832564638909316, 0.7526222693644393, 0.9416404778849121, 0.009589327157183334, 0.18190330268981955, 0.32252322488728213, 0.0] | |
| 0.1524 | 10.5 | 4200 | 0.5353 | 0.5128 | 0.4237 | 0.8872 | [nan, 0.9268517790355991, 0.9602839791773874, 0.537267523364486, 0.8456677302072528, 0.6567083558655384, nan, 0.7076703913792123, 0.8633391848934858, 0.3143875056961763, 0.9515964493686976, 0.6206264921379765, 0.0, 0.7490196078431373, 0.08954470929499306, 0.8721747743066831, 0.0, 0.005131830440133009, 0.9147190737070242, 0.11450520703985165, 0.6915674424660561, 0.5259122991900205, 0.0019833510251969382, nan, 0.2044761773994233, 0.5593918459203433, 0.4851432496510159, 0.0, 0.9463960710558084, 0.8834918590669917, 0.9670624325154579, 0.012832069294210286, 0.5599179011969355, 0.44183701402816805, 0.0] | [nan, 0.8497898154944094, 0.8911284588944798, 0.4558941463477496, 0.7715538102169041, 0.5041805687956784, nan, 0.5916295134976238, 0.6664176289411136, 0.25352865518566153, 0.8836310493548173, 0.5013133395398324, 0.0, 0.6053882725832013, 0.05452311472892029, 0.5946321429362145, 0.0, 0.005111887747118043, 0.802846410488875, 0.09434940383618455, 0.47282749487636766, 0.44441582446257716, 0.001977936260307555, nan, 0.14078808047194072, 0.4107132907440319, 0.42875046507529324, 0.0, 0.8865359213150946, 0.7513094837462199, 0.9478585417349973, 0.011508324602586469, 0.19474424489161243, 0.34180230893483227, 0.0] |
| 0.052 | 10.75 | 4300 | 0.5611 | 0.5030 | 0.4222 | 0.8855 | [nan, 0.932148839850802, 0.9568949634271852, 0.5225233644859814, 0.8511642191077112, 0.6031687568751455, nan, 0.7201923889006668, 0.8793424111590834, 0.1743029951530718, 0.9511564170902311, 0.5728369144644768, 0.018116900290928325, 0.7155830753353973, 0.08790515827973262, 0.8945492628434111, 0.0, 0.0, 0.9018928482213427, 0.19409261742744086, 0.6978142148450815, 0.5187192887865012, 0.004106374657802112, nan, 0.18591239873678428, 0.5679096666143298, 0.48372515565797347, 0.0, 0.9465148790940053, 0.8887757437702006, 0.9729464658947179, 0.03061668531642422, 0.3269727082444268, 0.4968253657882534, 0.0] | [nan, 0.8544673632153686, 0.8915093314898118, 0.4824501321862451, 0.7281104549174552, 0.4796578889108752, nan, 0.5955885392390377, 0.6806501724220245, 0.15806082007550856, 0.8869557339277052, 0.5018390970394144, 0.017487873372478938, 0.5719234576047509, 0.08299595141700405, 0.5743453150410742, 0.0, 0.0, 0.7988127196821454, 0.14769412965284384, 0.4636640495670947, 0.44194705232908676, 0.004079706927175844, nan, 0.14373978216098007, 0.4138202592132837, 0.4263783910470499, 0.0, 0.8825003483580057, 0.7459231292221788, 0.9497549296351595, 0.022555788364877087, 0.19864442770898405, 0.36609089056617755, 0.0] |
| 0.0897 | 11.0 | 4400 | 0.5797 | 0.4966 | 0.4137 | 0.8864 | [nan, 0.9266090680496935, 0.9675701132103213, 0.5286179906542056, 0.8135055236213754, 0.6141498963415911, nan, 0.7310209435363914, 0.8153911847037054, 0.24547412900285845, 0.9446611067589995, 0.6598542850086441, 0.0, 0.5599071207430341, 0.13658721150208097, 0.8912937585243879, 0.0, 0.004870002356452753, 0.9252981123672058, 0.10847033891289591, 0.6586394910124014, 0.4795176884335903, 0.01181630258673669, nan, 0.18618701084717837, 0.5559088292248914, 0.4992355587068755, 0.0, 0.9406880436912528, 0.9118086274033954, 0.9573602602596679, 0.003960483235940155, 0.3327033672702148, 0.4804871031358067, 0.0] | [nan, 0.8565575968459415, 0.8928102104157912, 0.43275555700074025, 0.7654702047573079, 0.47074416606474334, nan, 0.6054622841435586, 0.6863363711152467, 0.21403286978508218, 0.8828456438079144, 0.4322928605137194, 0.0, 0.4530688935281837, 0.09709521247982786, 0.5749041704195555, 0.0, 0.004865289040020926, 0.7951008940737603, 0.09395592969976839, 0.4548604901862724, 0.41665801557197046, 0.011736958934517204, nan, 0.1216732767438939, 0.41094472698150475, 0.430227229329769, 0.0, 0.8867287999971621, 0.7466484878252573, 0.9415279772911855, 0.0036285882442284325, 0.19204917359734425, 0.36246293958863207, 0.0] |
| 0.0936 | 11.25 | 4500 | 0.5731 | 0.5011 | 0.4193 | 0.8864 | [nan, 0.9324196276009762, 0.9569564158641476, 0.5246004672897197, 0.8364710008894733, 0.6578250088383729, nan, 0.7038215792022807, 0.8665369834416663, 0.21309913418120055, 0.9410960435297098, 0.49318761834197744, 0.028167151547209734, 0.5808565531475748, 0.11010215664018161, 0.8849288822497889, 0.0, 0.0565548660749352, 0.9216694582309478, 0.11269226311693903, 0.6871508134702065, 0.5262584704743466, 0.01969383764456115, nan, 0.2076616778799945, 0.571397916993772, 0.476856262879174, 0.0, 0.9377623285515337, 0.907275545210859, 0.973954665451519, 0.050830950308757096, 0.38818102379646, 0.4678081196891568, 0.0] | [nan, 0.858380886499719, 0.8914561596816896, 0.45129869803574746, 0.786844102694609, 0.48464472942061587, nan, 0.6094618696875397, 0.6854209198991233, 0.18657623184200503, 0.8857526637100221, 0.394797106941035, 0.023946037099494097, 0.49684424239749303, 0.062077792789589706, 0.5615273263032089, 0.0, 0.055464256368118324, 0.7962485307269822, 0.09311408578835408, 0.4733745462314789, 0.44196131097098196, 0.019312422955759485, nan, 0.14722087024238295, 0.4185961804636968, 0.4181839379748557, 0.0, 0.8886792481667263, 0.7473472827679579, 0.9501856968302422, 0.031198480139267574, 0.2030701847638892, 0.3556589318498682, 0.0] |
| 0.033 | 14.25 | 5700 | 0.5935 | 0.5181 | 0.4292 | 0.8880 | [nan, 0.9232290780535377, 0.9550432923803572, 0.5331775700934579, 0.8469649770868216, 0.6796985960845084, nan, 0.7591958688611619, 0.8564643924657209, 0.21028211607771655, 0.9524029393967549, 0.6051700008232486, 0.0, 0.6860681114551084, 0.21654685332324378, 0.8960592972657011, 0.0, 0.03558243657214673, 0.9155229117646998, 0.140697693670425, 0.711005584058588, 0.5227324249145294, 0.037180848092072186, nan, 0.2080186736235068, 0.5726225990474695, 0.5346435930956549, 0.0, 0.9410130186192625, 0.9154633602859255, 0.9760592954761752, 0.01645064030834266, 0.4608913003718832, 0.4701447510293469, 0.0] | [nan, 0.8573293198744064, 0.8916240779976521, 0.48186665258934697, 0.7676170029872194, 0.4823511054134466, nan, 0.6260715377125842, 0.6901341142647419, 0.1894206549118388, 0.8862935130575381, 0.49201833941300493, 0.0, 0.5435813573180703, 0.1092586700604518, 0.5822497006272321, 0.0, 0.035439538946984116, 0.8016860332567224, 0.11209233305853257, 0.4701563285996208, 0.45173968006036097, 0.03573442156415282, nan, 0.1250185671139278, 0.43006031638093856, 0.44816121842496287, 0.0, 0.8878007481353359, 0.7386750898148962, 0.9519721480330992, 0.013876810802543318, 0.25855582662623405, 0.3720678838361397, 0.0] |
| 0.0548 | 14.5 | 5800 | 0.5902 | 0.5151 | 0.4174 | 0.8882 | [nan, 0.9249082282350853, 0.9577153821767257, 0.5438259345794393, 0.8625692959476665, 0.6265525664540941, nan, 0.7491911978889274, 0.8432461925321441, 0.249306102158333, 0.951930364538209, 0.6013830575450728, 0.0, 0.7704850361197111, 0.20002522386177324, 0.8704780151977658, 0.0, 0.0013615060351373288, 0.9208633435979287, 0.11193893938641368, 0.6970564096712325, 0.4979168453686571, 0.03908039555282418, nan, 0.18904297679527668, 0.5623985973726906, 0.5131506060136048, 0.0, 0.9399214361687687, 0.9123994793332818, 0.9756660223299524, 0.04515831571967342, 0.4303481070535878, 0.49404040291178064, 0.0] | [0.0, 0.8607762479438139, 0.8922939816555095, 0.45337232891467816, 0.7416336434657338, 0.4957900790517687, nan, 0.6227225352163122, 0.6905205002583658, 0.2142437565638406, 0.8883435707029895, 0.4944664432937354, 0.0, 0.5822804554671658, 0.1227364185110664, 0.6143083859952676, 0.0, 0.0013572770933389015, 0.7986526753983755, 0.09318127002721979, 0.47663610300281495, 0.44101175423554057, 0.037423427761281866, nan, 0.14246983588236511, 0.42780903014161104, 0.4432599000899573, 0.0, 0.8868797486244817, 0.7354235169834137, 0.9525392249964284, 0.03855126495647117, 0.2526545610728006, 0.37165059315614124, 0.0] |
| 0.1047 | 14.75 | 5900 | 0.5997 | 0.5159 | 0.4159 | 0.8881 | [nan, 0.9210892560336101, 0.9617335675034919, 0.5317464953271028, 0.8683264925417152, 0.6381114337134347, nan, 0.7416693813461018, 0.862755610380984, 0.2719665271966527, 0.9489817238040484, 0.570408331275212, 0.0005289605924358636, 0.6938596491228071, 0.22575356287047546, 0.8948821198934858, 0.0, 0.011022962322938758, 0.9258684979714679, 0.17593834335005545, 0.6548460763101033, 0.4725421838812847, 0.04097994301357618, nan, 0.22218865851984074, 0.5752629926205056, 0.5366821032106535, 0.0, 0.936931478673554, 0.9021336855923136, 0.9725860103434604, 0.020141738157403954, 0.43632262391026033, 0.4934216774582814, 0.0] | [0.0, 0.8607109591035689, 0.8928295853674818, 0.4670190706507743, 0.7523185639791471, 0.4845338501499847, nan, 0.6282224979925543, 0.6928170564904808, 0.23142272983643541, 0.8873278318309525, 0.46953884728763595, 0.0005215803885773895, 0.5542412002308136, 0.10845198424719782, 0.5869154300379641, 0.0, 0.010907018316536697, 0.793456051943224, 0.12649239962384984, 0.4589822701689517, 0.42143872921678477, 0.03893105461493551, nan, 0.13440869146302972, 0.4245448084603441, 0.46174816509389, 0.0, 0.8878226827336242, 0.7447736277446672, 0.951929183073613, 0.018382891806658124, 0.25878028202964926, 0.37484668044597425, 0.0] |
| 0.1363 | 15.0 | 6000 | 0.6052 | 0.5193 | 0.4155 | 0.8887 | [nan, 0.9281772418265013, 0.9663767872895684, 0.5342161214953272, 0.8447924129735698, 0.6015187219527939, nan, 0.7291077408868643, 0.8812164919106135, 0.23211400637971746, 0.9479408328730995, 0.633386844488351, 0.0030415234065062154, 0.789422084623323, 0.21314163198385672, 0.8954179385594596, 0.0, 0.0066242505171104655, 0.9164480291997693, 0.1360949684597427, 0.6964961019847766, 0.4960711090960334, 0.03860550868763618, nan, 0.19802279280516272, 0.5609541005914063, 0.5661075535662848, 0.0, 0.9376398917610389, 0.9059173441584945, 0.9782134208899593, 0.041454266650089104, 0.43892377410636263, 0.49969692229478707, 0.0] | [0.0, 0.8633930449091305, 0.8952460293484353, 0.42706756384454103, 0.7593774610091322, 0.47377891058119026, nan, 0.6217821374684249, 0.6898326802726141, 0.20124995510218743, 0.8868864734587292, 0.4952526552944963, 0.0028388052332757345, 0.6066698390038862, 0.10356026717323365, 0.5863739068024136, 0.0, 0.00656256484747873, 0.7990222508044155, 0.11130896362146828, 0.4768559231889487, 0.4358850122678166, 0.03689958080794596, nan, 0.14020726799012267, 0.42208907144066693, 0.46374312526092243, 0.0, 0.889531203939725, 0.7432560391610733, 0.952160090573041, 0.03558025789239662, 0.21245893254116582, 0.3712419453581397, 0.0] |
| 0.0804 | 15.25 | 6100 | 0.6205 | 0.5110 | 0.4268 | 0.8877 | [nan, 0.9338093608996594, 0.9656453309931633, 0.5360116822429907, 0.8032054069910557, 0.6059132718486427, nan, 0.7301936126609202, 0.8766143189258433, 0.22587928248891834, 0.9574923159422327, 0.619350456902939, 0.0011901613329806928, 0.7703818369453045, 0.07655442048177576, 0.8504335260115607, 0.0, 0.020239310868483754, 0.9198111518664089, 0.12485306048113379, 0.7319227623900414, 0.495000428884777, 0.03547684228169171, nan, 0.1875600713991487, 0.5538912440466844, 0.5455451906671689, 0.0, 0.9362906678973961, 0.9101525873385327, 0.9729007364591106, 0.02293143105806291, 0.4597532971610884, 0.48345782331547454, 0.0] | [nan, 0.856464729269542, 0.8942823604125036, 0.4347924144963024, 0.7282825257603309, 0.4836585626064097, nan, 0.6163747573889081, 0.6892970262677814, 0.20072891932188414, 0.888225522138808, 0.5066929332727181, 0.0011893749174045195, 0.6024777046931117, 0.05147557666214383, 0.6220782459974346, 0.0, 0.020031615227137266, 0.7981944383082095, 0.09975989363883506, 0.476298280003313, 0.4345003764655265, 0.03419217618393775, nan, 0.1330243066375818, 0.42041703246719714, 0.45861972618049734, 0.0, 0.8892991369897043, 0.7440154875361404, 0.9524152608652374, 0.021443727473549588, 0.22949422815524131, 0.36944182958821886, 0.0] |
| 0.0627 | 15.5 | 6200 | 0.6244 | 0.5088 | 0.4226 | 0.8864 | [nan, 0.9363099227676078, 0.9557843398515034, 0.5258376168224299, 0.8250218829308421, 0.6537759869721766, nan, 0.7370216777925434, 0.8573990605873701, 0.24421061352997225, 0.944441326435564, 0.6453651107269285, 0.0, 0.574406604747162, 0.202547610039097, 0.9001834773007729, 0.0, 0.08682219254837274, 0.9295308868150898, 0.08372655176410206, 0.6741101275248591, 0.4846229490117269, 0.03799094921503995, nan, 0.18766991624330634, 0.5747971947453813, 0.5357957944650019, 0.0, 0.9393777953152539, 0.9065412893119918, 0.9711350422513085, 0.01408833768494343, 0.423479444817005, 0.43092900998340755, 0.0] | [nan, 0.8597774723874926, 0.8905873458192073, 0.4468008441348313, 0.7358981742624778, 0.4808541172889169, nan, 0.6284059730270303, 0.6908370828825592, 0.2063894967177243, 0.8877064612239235, 0.5085303752716421, 0.0, 0.4786515887689728, 0.07696731524968849, 0.5910784632525015, 0.0, 0.08625308882819613, 0.7927730663764808, 0.07191564097641445, 0.4573643410852713, 0.43199170940310977, 0.036449399656946824, nan, 0.12474672799956191, 0.42888997799442735, 0.45055805027110624, 0.0, 0.8884059722861457, 0.7421115189770542, 0.9513756980737487, 0.012830765528906378, 0.21910649885920366, 0.3464300992446894, 0.0] |
| 0.0906 | 15.75 | 6300 | 0.6277 | 0.5077 | 0.4232 | 0.8874 | [nan, 0.9291486180310576, 0.9587963707454238, 0.5362032710280373, 0.8561640657502444, 0.6342631999714216, nan, 0.7070024940578683, 0.8671632585282536, 0.2429056713202701, 0.9448969225566771, 0.5583271589692929, 0.0010579211848717272, 0.6710010319917441, 0.23294236347584815, 0.9067513151912711, 0.0, 0.020684418610740187, 0.9250756288677204, 0.07677279425156046, 0.6503387447644879, 0.5319197495312902, 0.03860550868763618, nan, 0.18569270904846905, 0.5416470403517035, 0.5072344951363807, 0.0, 0.9414354322663816, 0.9037269864207472, 0.9731874869200364, 0.013277591280202247, 0.39988619967892053, 0.4915501377118052, 0.0] | [nan, 0.8573471144295101, 0.892101583588469, 0.4449642809016976, 0.7400242676373722, 0.48442379031764893, nan, 0.6140014998720169, 0.6924650683478314, 0.21178574008524165, 0.8871035802257583, 0.4782118177972077, 0.00099601593625498, 0.5315565729234794, 0.08438028233359221, 0.5871221081515825, 0.0, 0.020441960358122443, 0.7966462351239197, 0.06850549580427845, 0.4652701824381677, 0.4532145005879428, 0.03686906413403052, nan, 0.1488673139158576, 0.4142177021859072, 0.4423489401170992, 0.0, 0.888882064716084, 0.7468477974750474, 0.9515378343546987, 0.012387656809223801, 0.2237051521076804, 0.3671609871108074, 0.0] |
| 0.0798 | 16.0 | 6400 | 0.6190 | 0.5286 | 0.4172 | 0.8869 | [nan, 0.926680657145317, 0.9583277241233551, 0.5414509345794393, 0.8395448350384849, 0.6163055970613488, nan, 0.729106879083869, 0.8763296484319401, 0.26653962467376446, 0.94462856417892, 0.6354449658351856, 0.0, 0.7736326109391125, 0.21591625677891285, 0.8849045268558811, 0.34363411619283063, 0.10316026497002069, 0.9218656576332847, 0.10944717627775294, 0.7009902670312324, 0.5122599776979916, 0.038968657466897594, nan, 0.1919538651654538, 0.5525226356832574, 0.538875717356141, 0.0, 0.9457572762531493, 0.901183634297817, 0.9780756945897774, 0.023115338389489825, 0.3853969802271942, 0.4585034944719744, 0.0] | [0.0, 0.8564334135192141, 0.8938306198574103, 0.41026489890361634, 0.7353951913707414, 0.47809949912634986, nan, 0.6215698951590981, 0.6951678039270297, 0.23431724238396126, 0.8861469346690092, 0.5033256170323759, 0.0, 0.5823655078656049, 0.06725329981143935, 0.60684460181721, 0.013995167136528394, 0.10232968859569384, 0.80017144909153, 0.09089721553798556, 0.48491411153457703, 0.44620918590626235, 0.03736540418921091, nan, 0.14435885256397019, 0.42539846918525115, 0.4624629192971781, 0.0, 0.8873440144497453, 0.7475156108906514, 0.9524719380738451, 0.01972869725160058, 0.22189851053623036, 0.35861227450389216, 0.0] |
| 0.0901 | 16.25 | 6500 | 0.5917 | 0.5200 | 0.4299 | 0.8896 | [nan, 0.9258199912150333, 0.9603701848856869, 0.5186892523364486, 0.8721793039773063, 0.647948819969426, nan, 0.7465402918754385, 0.8815201404374436, 0.21442478975931065, 0.9491194402298921, 0.6424219972009549, 0.00039672044432689763, 0.7311661506707946, 0.1943498549627948, 0.8921543157758005, 0.15327564894932014, 0.07967428586390177, 0.9293905669893677, 0.12015927416016821, 0.6698895330720515, 0.5201315450880439, 0.040560925191351474, nan, 0.17654812577234655, 0.5835060449050087, 0.5231215794021847, 0.0, 0.9400508616673928, 0.8957790972168599, 0.9722137189382809, 0.011464420406979153, 0.38557987360035767, 0.46186248931546336, 0.0] | [nan, 0.866351138156412, 0.8939541036386832, 0.46360912979965524, 0.7507890322152613, 0.48660598648618647, nan, 0.6225598103833513, 0.6911588008377322, 0.19347001326929186, 0.887840691207522, 0.5082802755206722, 0.00036527456471447707, 0.5638678869876641, 0.0832837918175431, 0.6045529063562446, 0.006450606044842116, 0.07925304719241588, 0.7975401296695107, 0.09911841629051973, 0.4713279486495917, 0.45141671341630396, 0.03856573705179283, nan, 0.12819285757013818, 0.4279405668488608, 0.45535903716704923, 0.0, 0.8891564381205536, 0.7534260714863522, 0.9520390401591446, 0.010587073054631307, 0.21693992819738858, 0.3621346900827125, 0.0] |
| 0.0653 | 16.5 | 6600 | 0.6069 | 0.5188 | 0.4270 | 0.8875 | [nan, 0.9290124922971863, 0.9589720557965155, 0.5377873831775701, 0.8408719669628694, 0.6464453726960179, nan, 0.7621001449552638, 0.8857807088295299, 0.2068851236588094, 0.9480908117204224, 0.6177862846793447, 0.0, 0.7590299277605779, 0.18791777021061926, 0.9075956355134117, 0.0, 0.058230565810488834, 0.9227427600247443, 0.14023410983625556, 0.6694696680432973, 0.503836987023172, 0.03972288954690206, nan, 0.19629273650968007, 0.5403046004082274, 0.5528350801001529, 0.0, 0.9376581699207615, 0.901014031526811, 0.9752275577414824, 0.015813440258609972, 0.5130362332093723, 0.44827147941026946, 0.0] | [nan, 0.8616804147441266, 0.8938918495590652, 0.4436595217282778, 0.7588707802865634, 0.4758728817247983, nan, 0.628730181301102, 0.688001179245283, 0.18745190773792766, 0.8877420745200684, 0.49290617097441625, 0.0, 0.5890833366705378, 0.07141145458902469, 0.5823605098793022, 0.0, 0.05773773981671383, 0.7947286013642479, 0.11004573329175761, 0.45664170004530313, 0.44804481905654414, 0.037985842126352344, nan, 0.1362925675933341, 0.4181863845162963, 0.46249953657361065, 0.0, 0.888743313770925, 0.7487091113564399, 0.952506386954324, 0.013629087889199198, 0.23068137169799252, 0.34552559761867596, 0.0] |
| 0.0946 | 16.75 | 6700 | 0.6065 | 0.5143 | 0.4299 | 0.8883 | [nan, 0.9366806425081413, 0.9542471674446813, 0.5289754672897197, 0.8420186089455377, 0.6348452391657562, nan, 0.7554582292706217, 0.8872989514636808, 0.24603338994987364, 0.95065695923075, 0.5426442743064132, 0.0, 0.6714138286893705, 0.17089166351368396, 0.8694632071182697, 0.0, 0.019113450108658656, 0.9217120922782911, 0.13903375883706684, 0.6740194249750934, 0.5118203708015244, 0.03178948544611431, nan, 0.20950157901963476, 0.5704453865075627, 0.5623407413972658, 0.0, 0.9411122045154043, 0.9100815747962009, 0.9743145830094165, 0.0857785237680799, 0.4308967871730781, 0.48645508025274165, 0.0] | [nan, 0.8651947384722789, 0.8930717543250574, 0.4526545293143849, 0.7524401466986995, 0.4887861010723328, nan, 0.6214073859834178, 0.6850152009083916, 0.21553648224427951, 0.8870252213407757, 0.45774305555555556, 0.0, 0.5674414547991802, 0.07292395457725634, 0.6296601151175575, 0.0, 0.018957592126106943, 0.7990749594007368, 0.11146433406780111, 0.4733450112755498, 0.44892412444043184, 0.03086520206129645, nan, 0.14343460931037075, 0.423674789416196, 0.4623610858079796, 0.0, 0.8878002154581935, 0.7401265142858424, 0.9527410923966566, 0.060905676756307404, 0.2440383021821195, 0.37124052036090577, 0.0] |
| 0.0849 | 17.0 | 6800 | 0.6239 | 0.5140 | 0.4277 | 0.8874 | [nan, 0.9305970330977147, 0.9554562297838712, 0.5320046728971962, 0.8489963736857462, 0.6542095907740937, nan, 0.7229605001215142, 0.8664610713099588, 0.28969717055387545, 0.9528962660454964, 0.4980859471474438, 0.0, 0.7176470588235294, 0.20759238239374447, 0.8862034811976359, 0.0, 0.031864477783887096, 0.9191836449171626, 0.12003509991887283, 0.6955934653201726, 0.5165258494982048, 0.04092407397061288, nan, 0.19217355485376905, 0.5895090804417229, 0.503489840686003, 0.0, 0.9408365537389992, 0.904218558679801, 0.9778653391859837, 0.011972108251481619, 0.48105021439167633, 0.4599672061542931, 0.0] | [nan, 0.8636437394553574, 0.8929500733790351, 0.4345244853931126, 0.7599993804727837, 0.46696218452852767, nan, 0.6206510046358703, 0.6983976442693793, 0.2497009515987931, 0.8874926753329814, 0.43156730923551545, 0.0, 0.5706314364255529, 0.11078207026517702, 0.6145475017593244, 0.0, 0.03131271548397056, 0.8003820861050736, 0.10237293400828867, 0.4670301606353909, 0.4459244664251144, 0.038865601952565394, nan, 0.13528195016335132, 0.4290314962729347, 0.43912572952498746, 0.0, 0.8877216097613865, 0.738180307717246, 0.9528556585267144, 0.010467599586006663, 0.24685847767824554, 0.3594826033565289, 0.0] |
| 0.0623 | 17.25 | 6900 | 0.6172 | 0.5119 | 0.4289 | 0.8887 | [nan, 0.9328785695913208, 0.9578098581195325, 0.5317383177570093, 0.8561058685577084, 0.6304827168234579, nan, 0.7396010541574238, 0.8636618114532428, 0.2868801524503915, 0.9518605630620964, 0.4947929529925084, 0.0009256810367627612, 0.7112487100103199, 0.18766553159288688, 0.8812836916282393, 0.0, 0.01743775037310502, 0.9291997485832975, 0.11260120200665574, 0.6826961479212292, 0.49109604568235565, 0.042125258394323704, nan, 0.18536317451599615, 0.5637959909980635, 0.5345549622210897, 0.0, 0.9375897612200349, 0.9104269853176398, 0.9785152351649676, 0.016857308632765553, 0.471885224247597, 0.4792468588859031, 0.0] | [nan, 0.8649230898296971, 0.8934913832615394, 0.4476893494179728, 0.7525214888224941, 0.47904609433387446, nan, 0.6239313691633799, 0.6925921698436251, 0.24592492631130367, 0.887597908356459, 0.43200359389038634, 0.000914435009797518, 0.5808680994521702, 0.10441372535260683, 0.6200052546206393, 0.0, 0.01701975415910659, 0.7967171468468032, 0.09773096322694678, 0.46324810420871126, 0.4373241271317872, 0.03999681722939819, nan, 0.13242564545240523, 0.42549338304851775, 0.45084188297733174, 0.0, 0.888754441570771, 0.7411121674604253, 0.9532170914369867, 0.015176070871411481, 0.2681904277926638, 0.37097400203468917, 0.0] |
| 0.087 | 17.5 | 7000 | 0.5958 | 0.5165 | 0.4323 | 0.8903 | [nan, 0.9358029442279695, 0.9581817889436154, 0.5173516355140186, 0.8565989717971686, 0.667348278703771, nan, 0.7453587599689061, 0.8783982540209707, 0.2597456398359501, 0.9499820544177967, 0.5674240553223018, 0.0, 0.7777605779153767, 0.14150586454786226, 0.8944761966616873, 0.0, 0.04935459377372817, 0.9190064859631538, 0.13516780079140384, 0.6902990697136872, 0.5223050718688348, 0.039750824068383706, nan, 0.1931621584511877, 0.5658763803841524, 0.501960958099754, 0.0, 0.9402762475045608, 0.9019702878007346, 0.9759436269037568, 0.012736230262339924, 0.4254506289499888, 0.5057514930417828, 0.0] | [nan, 0.8672982432946728, 0.8947683772895187, 0.45221659685446863, 0.7622893195763734, 0.4902560352855047, nan, 0.6223052874324095, 0.6932109212359029, 0.22966612333107453, 0.8909383965244376, 0.46376665320952765, 0.0, 0.5938460326215428, 0.08434187777193114, 0.602773750581284, 0.0, 0.048440150074523305, 0.8000458716174862, 0.11235893201211121, 0.479082966550413, 0.45730325325150806, 0.03797907547774101, nan, 0.13441877352901832, 0.42968388297967464, 0.43185024209844064, 0.0, 0.8885136898541194, 0.7448990572757507, 0.9530770665482792, 0.011476439106252173, 0.27282086031874275, 0.3826734258440253, 0.0] |
| 0.0493 | 17.75 | 7100 | 0.6044 | 0.5187 | 0.4325 | 0.8897 | [nan, 0.9240685866116948, 0.9622943353488201, 0.5353317757009346, 0.853514520592762, 0.6373741840672775, nan, 0.7478235165354141, 0.8836883806993405, 0.21751108165209826, 0.9509281473980792, 0.5420474191158311, 0.0, 0.7930340557275541, 0.22083490982469417, 0.8908310060401377, 0.0, 0.0858534286387558, 0.9207060529378274, 0.1411447209390884, 0.681761326480902, 0.5542661781464825, 0.03930387172467736, nan, 0.1931621584511877, 0.5752080389386088, 0.49312002836187985, 0.0, 0.9390712329452002, 0.9078367511279274, 0.9729394719810368, 0.022296821252434828, 0.4083602593021602, 0.5050154471862657, 0.0] | [nan, 0.8665364871726114, 0.892965816013915, 0.4547348114599635, 0.7642413653965189, 0.4857421136997843, nan, 0.6253954022706847, 0.6870444418213474, 0.19578268327242895, 0.8874360309454634, 0.462182366980205, 0.0, 0.6077345881608605, 0.08939146416173167, 0.6003337345442609, 0.0, 0.0839241381075478, 0.8010272384750775, 0.11626241894020498, 0.4793339806464354, 0.46760060321222136, 0.03759519038076152, nan, 0.13732648718299134, 0.4276941756073643, 0.42612058896739236, 0.0, 0.8882284916106664, 0.7388891943971531, 0.9525770980335972, 0.01913195000088903, 0.25993428881875097, 0.3840528604415517, 0.0] |
| 0.0609 | 18.0 | 7200 | 0.6040 | 0.5216 | 0.4331 | 0.8892 | [nan, 0.9227158454479248, 0.9619075870212453, 0.5316542056074767, 0.8629644863429278, 0.6514016366079864, nan, 0.7428586694795917, 0.8715519286425962, 0.2045030862918928, 0.9466966687245525, 0.5841977442990038, 0.005950806664903465, 0.7702786377708978, 0.22789759112120064, 0.8969036175878418, 0.0, 0.10873720315241013, 0.9154051507310187, 0.16112021722213943, 0.6850397847716271, 0.5074181749114659, 0.04494664506397005, nan, 0.19590827955512838, 0.5833045480713874, 0.5258912942323458, 0.0, 0.940934664449275, 0.8882331527914135, 0.9774381724580755, 0.014391396245182146, 0.43477819098132453, 0.5255548975681157, 0.0] | [nan, 0.8627327541149343, 0.8943888286230383, 0.44826842363954605, 0.7637335274754071, 0.48244240753868006, nan, 0.625331534198079, 0.6944541055496749, 0.18654700047236655, 0.8893611006867107, 0.4845014167207183, 0.005280450598451068, 0.5995903120857935, 0.10169968482665466, 0.5777541863213714, 0.0, 0.10625831542319107, 0.8006913747953047, 0.12712606139777924, 0.4783386384345389, 0.44333322627096416, 0.042293134265587215, nan, 0.148674558186062, 0.4270657907089471, 0.4375414792419438, 0.0, 0.8881646826265218, 0.746841100561318, 0.9521439225045568, 0.01294715575036877, 0.24666520631333802, 0.38409386690619945, 0.0] |
| 0.0594 | 18.25 | 7300 | 0.6184 | 0.5184 | 0.4328 | 0.8884 | [nan, 0.9404973526006469, 0.9537239028155554, 0.5275303738317757, 0.8254461719223712, 0.6778219046293364, nan, 0.7472383523016173, 0.8659581534373962, 0.2943783918140768, 0.9543757743601257, 0.5650160533465053, 0.0, 0.7537667698658411, 0.19283642325640055, 0.8840439696044684, 0.0, 0.053517660304244236, 0.9223867864255677, 0.14299077799301313, 0.6933990487935829, 0.5170742093202789, 0.040644728755796417, nan, 0.19868186187010847, 0.5769927251792537, 0.5184906162061554, 0.005237711522965351, 0.936523983230326, 0.8965774712364731, 0.9780089834131267, 0.013717932777984998, 0.4056981446483367, 0.5054707620798113, 0.0] | [nan, 0.8646951423015076, 0.8916557550473645, 0.4456280068092665, 0.7798208455321158, 0.4668012972723517, nan, 0.6275296552822227, 0.693191442493572, 0.24416726797924612, 0.8882015249296725, 0.4734908589168679, 0.0, 0.6010533245556287, 0.10449699289229086, 0.6037870806764625, 0.0, 0.0522041170761608, 0.8024731726060429, 0.12131790023739622, 0.47577199080928667, 0.44858497899759875, 0.038707102952913006, nan, 0.1414826837710464, 0.42720162129381883, 0.43218883327484625, 0.005164878823996822, 0.8886286814206171, 0.7396195316490108, 0.952706951959097, 0.011655776057680246, 0.24503522596165647, 0.3835704565398948, 0.0] |
| 0.0616 | 18.5 | 7400 | 0.6177 | 0.5082 | 0.4272 | 0.8887 | [nan, 0.9388723599691342, 0.9564944313754319, 0.5251226635514019, 0.8417103211148066, 0.6482573931295971, nan, 0.7321895483979944, 0.8855861839920293, 0.2417250093210158, 0.9506753528629689, 0.5459990121017535, 0.0, 0.656656346749226, 0.11275066212637155, 0.8765912190686498, 0.0, 0.07320713219699945, 0.9230813488667519, 0.11395056209539893, 0.703570900866502, 0.5234722511549255, 0.043466115425442764, nan, 0.1751201427982974, 0.5677919087245512, 0.4888879041013937, 0.00040290088638195, 0.9391572478144832, 0.8977247029883181, 0.9766107386702634, 0.018289713622611795, 0.4217114755430917, 0.4846827041793997, 0.0] | [nan, 0.8641564182971058, 0.8921133993393542, 0.4501424016407233, 0.7647378890792713, 0.4769587373086239, nan, 0.6209624017506187, 0.6859163987138264, 0.20884410959394406, 0.8903311694707657, 0.45434149683164926, 0.0, 0.5354933726067747, 0.07164035579774021, 0.6122940826221327, 0.0, 0.06951938138690669, 0.8003213370838211, 0.09716584900998836, 0.4828652554046836, 0.45382137270368395, 0.04121417598135297, nan, 0.13381035314854062, 0.43221966358833797, 0.42342013855571975, 0.00040160642570281126, 0.8881950211846364, 0.7398417591158966, 0.9530845970447974, 0.014810386777414213, 0.2365547272188405, 0.37402163767775426, 0.0] |
| 0.0611 | 18.75 | 7500 | 0.6099 | 0.5177 | 0.4324 | 0.8902 | [nan, 0.9345079533755389, 0.9638643589649342, 0.5356553738317757, 0.8422997643013702, 0.6257334001805861, nan, 0.7471220088972541, 0.8814537173221996, 0.2763370479307345, 0.9466207360377004, 0.6049436074750967, 0.0, 0.7059855521155831, 0.14970361962416445, 0.8782149119958433, 0.0, 0.0958028958186055, 0.9234898906602255, 0.14089637245649764, 0.6854742792438918, 0.5173606430820885, 0.04232080004469523, nan, 0.19343677056158176, 0.5813811692050034, 0.5071015488245331, 0.00040290088638195, 0.9400356746670351, 0.8951641148114238, 0.9764509546423178, 0.03372756848605413, 0.4723729399093662, 0.4701335776577261, 0.0] | [nan, 0.8647971283970989, 0.8977857991553266, 0.4345779290016539, 0.7684148484664771, 0.4855945598832977, nan, 0.6259089780170273, 0.686933822387541, 0.2366516479228013, 0.8888089337936385, 0.48289741736216074, 0.0, 0.5985650538104821, 0.061681563084597796, 0.6094675222969052, 0.0, 0.09345866005976859, 0.7993214394154491, 0.11438556403104944, 0.4762232900770807, 0.45242021144786737, 0.04009209272785011, nan, 0.14212501513256123, 0.43339055459103054, 0.4277836968915307, 0.00040032025620496394, 0.8873505568836287, 0.7422385564869821, 0.9528040989243474, 0.029041136219678652, 0.23652292476444373, 0.3661642120469451, 0.0] |
| 0.0526 | 19.0 | 7600 | 0.6228 | 0.5108 | 0.4297 | 0.8909 | [nan, 0.9405315503656566, 0.9623814025398809, 0.5330642523364486, 0.8317861268903274, 0.6622725273804787, nan, 0.7263120519701678, 0.8674004839398396, 0.27552922656282364, 0.9455175897361646, 0.5819338108174859, 0.0, 0.6111971104231166, 0.16710808424769832, 0.8864145612781711, 0.0, 0.0827900400596968, 0.930233313789279, 0.11843739134753886, 0.6995346374019279, 0.5042107294717365, 0.042153192915805354, nan, 0.18371550185363175, 0.5630920605013869, 0.5005871795439941, 0.0056406124093473006, 0.9407823912509976, 0.8985265242187241, 0.9751204970628252, 0.012990074184591156, 0.42681216850576115, 0.4687243361620586, 0.0] | [nan, 0.8642299686902748, 0.8983701844671692, 0.4505770666371748, 0.7744797343632894, 0.49247659714013137, nan, 0.623426329007179, 0.696151825084343, 0.23867367627796818, 0.8898312419634539, 0.48430193720774883, 0.0, 0.5244863620262132, 0.07708866651151966, 0.5993412927130506, 0.0, 0.08080962968642183, 0.7977044198782267, 0.10166926045153175, 0.47672785170429793, 0.4451483954200063, 0.04006265597621197, nan, 0.1264172335600907, 0.43160647951283304, 0.42598284151975113, 0.00554016620498615, 0.8878311660408268, 0.74270285241124, 0.9536917187049466, 0.011887351052557973, 0.24007269734586106, 0.3689853153957455, 0.0] |
| 0.054 | 19.25 | 7700 | 0.6199 | 0.5112 | 0.4157 | 0.8897 | [nan, 0.9383711032345364, 0.9577791893332354, 0.532998831775701, 0.8352225138198671, 0.6740592830016223, nan, 0.7513879337239024, 0.8669212886084358, 0.21351340154935997, 0.9451751851979368, 0.5077796986910348, 0.0, 0.7028895768833849, 0.18400807163576743, 0.8914236539585634, 0.0, 0.1072709658838007, 0.9291372462420467, 0.11183132171062435, 0.6577470949582549, 0.5160479493180732, 0.04262807978099335, nan, 0.1900590416037347, 0.5664154498351389, 0.5106689415257805, 0.0012087026591458502, 0.9410463493811095, 0.8949234994980861, 0.9775344732695309, 0.011246839902192383, 0.42160986811355644, 0.47790186427705494, 0.0] | [0.0, 0.8647432445871411, 0.896112476860621, 0.45036567465468447, 0.76789556797279, 0.4910576591298745, nan, 0.6249728507663073, 0.6958387758910245, 0.19385049365303245, 0.8887827463711233, 0.4413911550021468, 0.0, 0.5792159197210647, 0.08409221902017291, 0.5936591009850886, 0.0, 0.10176353700943865, 0.7979000623472865, 0.09749989173896098, 0.46787846117983983, 0.45133395403669296, 0.04032236755185625, nan, 0.1322593590552084, 0.4340972401884397, 0.4265909006774516, 0.0011904761904761906, 0.8880726081330668, 0.743872268803543, 0.953516990645358, 0.009541850530053972, 0.23069652626428858, 0.3703797514940341, 0.0] |
| 0.0671 | 19.5 | 7800 | 0.6217 | 0.5094 | 0.4146 | 0.8892 | [nan, 0.9331891438463118, 0.9574927175990591, 0.5350619158878505, 0.834028291700058, 0.6744756411977813, nan, 0.7431025597272566, 0.8738719931679082, 0.2327354074319566, 0.9446516741270925, 0.5379723388490986, 0.0, 0.669969040247678, 0.18249463992937318, 0.8913668247061116, 0.0, 0.09954703741523316, 0.9238793920053711, 0.0888259739399659, 0.6886532573187448, 0.5368212898403323, 0.03941560981060394, nan, 0.18061238500617877, 0.5652404877793479, 0.5268662338525626, 0.0060435132957292505, 0.9420171078199074, 0.9042006331836784, 0.9732816357580515, 0.009485473911061379, 0.3114064500396269, 0.49469125180868956, 0.0] | [0.0, 0.8617017485872825, 0.8957626230741332, 0.4508312580591182, 0.7683050299189929, 0.4878950714613818, nan, 0.624948812708509, 0.6911476098809349, 0.20973251451290761, 0.8882723484572987, 0.46124933827421916, 0.0, 0.5501928047798635, 0.07156988821841923, 0.5965012359764214, 0.0, 0.09680704791974334, 0.7988314631673791, 0.07901907356948229, 0.4711932405689982, 0.46080549284533756, 0.03769502030348365, nan, 0.13494050061551088, 0.43071416464770335, 0.43780380026513477, 0.005912495072920773, 0.8877312783085815, 0.7390862578001592, 0.9533931934816451, 0.008087813065948142, 0.20454363437358178, 0.3783462459982845, 0.0] |
| 0.0512 | 19.75 | 7900 | 0.6300 | 0.5080 | 0.4263 | 0.8887 | [nan, 0.9391756156362827, 0.957153465687716, 0.531875, 0.8363349452907067, 0.6442373192444947, nan, 0.7406369413577534, 0.8858234094036154, 0.26463399478023114, 0.9530349257345309, 0.5036634559973656, 0.0, 0.6101651186790505, 0.1925841846386682, 0.8746996168084692, 0.0, 0.0674207315476658, 0.9178750280173988, 0.11324690806139175, 0.6909895794473874, 0.5175153479480927, 0.042963294038773116, nan, 0.2016476726623644, 0.5813497671010625, 0.5020052735370366, 0.008058017727639, 0.9412167663408764, 0.897734355178538, 0.9747767193057303, 0.01633407932363546, 0.3496514865166941, 0.49998742995692663, 0.0] | [nan, 0.8625082043880324, 0.8957494129402008, 0.43782876705742063, 0.7496431303023787, 0.48514174134060595, nan, 0.6274006504670441, 0.6871961161760971, 0.2302687309626372, 0.8882991958037961, 0.4373045513839996, 0.0, 0.5170981283890153, 0.08045310853530031, 0.6189258899694966, 0.0, 0.06474078543772313, 0.7999986290910134, 0.09763826734899257, 0.47261393142851427, 0.4453505921742053, 0.040873817370043586, nan, 0.1437999373335422, 0.43193558986563074, 0.42771380026430056, 0.007840062720501764, 0.887320160440498, 0.7455157136812743, 0.9534156947680599, 0.013436060460141392, 0.21404224616226705, 0.3788044726196485, 0.0] |
| 0.0535 | 20.0 | 8000 | 0.6326 | 0.5129 | 0.4292 | 0.8889 | [nan, 0.9375849538350132, 0.9591767441005661, 0.5300221962616822, 0.8259597228240738, 0.6596635135950806, nan, 0.7492101575548236, 0.8658110736822129, 0.2693152160404325, 0.9484445354169388, 0.5863176092862435, 0.0, 0.6744066047471621, 0.20784462101147685, 0.883142820029876, 0.0, 0.07781530646977194, 0.9271092315337143, 0.10147518998658918, 0.678314629589805, 0.497267391277709, 0.043242639253589586, nan, 0.18442949334065634, 0.576354215732454, 0.5145022268507234, 0.007252215954875101, 0.939646591781763, 0.9018448093278766, 0.9767371671098836, 0.012725869285921506, 0.41707817675628445, 0.45857891473041446, 0.0] | [nan, 0.8619435562270654, 0.8965635233177199, 0.4407369269775891, 0.7663725441548623, 0.48239880840583743, nan, 0.6305089171096815, 0.6940516487277982, 0.23291892085557667, 0.8902205646366161, 0.48581173260572985, 0.0, 0.5452649144764289, 0.09688988182726792, 0.6044686963431372, 0.0, 0.07672845562038519, 0.7962772336784573, 0.08572747363415112, 0.4690486788330029, 0.43758222088032955, 0.04117568825641708, nan, 0.13543326140878018, 0.4322105242501251, 0.4339781328847771, 0.007067137809187279, 0.8877484539815808, 0.7395098273111396, 0.9530623665306688, 0.010661406489721605, 0.2371072088724584, 0.3613527133617203, 0.0] |
| 0.0467 | 20.25 | 8100 | 0.6268 | 0.5170 | 0.4303 | 0.8886 | [nan, 0.9395265086570245, 0.956900821509961, 0.5300023364485982, 0.8314043061203785, 0.6477819071422676, nan, 0.7464739330448017, 0.8916828770697918, 0.24499772152947513, 0.9451416993546665, 0.549950605087676, 0.0, 0.687203302373581, 0.1523521251103544, 0.8917889848671819, 0.0, 0.08004084518105412, 0.915062008738324, 0.1551515753572079, 0.6881485415176292, 0.526278382981852, 0.04472316889211688, nan, 0.18451187697377455, 0.5879677605066206, 0.549156898805699, 0.007655116841257051, 0.940224100990058, 0.9054685173132715, 0.9762965505479732, 0.02776741680135936, 0.449734804608913, 0.49033782689095345, 0.0] | [nan, 0.8644696780108341, 0.8944980656632955, 0.440104340976533, 0.7641389998117053, 0.4770745740308388, nan, 0.6297284505666034, 0.6844286473848664, 0.21773065311832707, 0.8890008282328474, 0.46004855121119775, 0.0, 0.5750680081177943, 0.06133536430566133, 0.6000371448704572, 0.0, 0.07885979620791951, 0.8006806868947128, 0.1252363801594355, 0.4706566275608475, 0.45444853884552, 0.04241284306453322, nan, 0.13328969033307544, 0.4323046138453842, 0.45063456852976475, 0.007448059584476676, 0.888463849852071, 0.7450400534159003, 0.9535229169698916, 0.021638336996913712, 0.23653075402126864, 0.371412309599829, 0.0] |
| 0.0566 | 20.5 | 8200 | 0.6333 | 0.5121 | 0.4287 | 0.8890 | [nan, 0.9382327153916955, 0.9575874232706021, 0.5340771028037383, 0.8342787755625269, 0.6541523107263972, nan, 0.7406429739787204, 0.8870285144944726, 0.2079415054476159, 0.9479172512933317, 0.5500535111550177, 0.0, 0.7218266253869969, 0.17152226005801488, 0.8854728193803988, 0.0, 0.06920116251669153, 0.9246219694901651, 0.12077186708389212, 0.6759797704055135, 0.5097310892447952, 0.045561204536566285, nan, 0.1750377591651792, 0.5736405505835558, 0.5156101127827879, 0.00684931506849315, 0.9398823262828916, 0.9029458484550981, 0.9765633952545758, 0.017017903767251024, 0.4133390233493873, 0.48943837047548283, 0.0] | [nan, 0.8643736263008805, 0.8951902105356352, 0.44089650982245326, 0.7609522214327652, 0.4848458703216258, nan, 0.6265179780801705, 0.6811413623628766, 0.1878590542487696, 0.887796763348636, 0.46558542236468475, 0.0, 0.5934331650617232, 0.06971498872257535, 0.6047629609093429, 0.0, 0.06810626948746361, 0.7983954196511591, 0.10178182731484066, 0.4720678124715856, 0.44954610542241913, 0.0431413003227001, nan, 0.12741374485267662, 0.432512153928718, 0.4367328553732968, 0.006685017695635077, 0.8879940574069723, 0.7494547941207608, 0.9536808104413358, 0.013580974233357105, 0.23932508912918143, 0.374424364423531, 0.0] |
| 0.0445 | 20.75 | 8300 | 0.6446 | 0.5134 | 0.4274 | 0.8856 | [nan, 0.9405399334753671, 0.9458917035764169, 0.5273960280373832, 0.8282526135651365, 0.6846166732980127, nan, 0.7372879749180856, 0.8847701285761731, 0.2182567629147852, 0.9486374327394391, 0.565180703054252, 0.0, 0.6657378740970072, 0.14856854584436877, 0.8831509384945119, 0.0, 0.06705417223051345, 0.9206841150299712, 0.12586301097700292, 0.6806553405515008, 0.5199094440427905, 0.04444382367730041, nan, 0.17805849237951393, 0.5833280996493432, 0.5248720391748466, 0.007252215954875101, 0.9356924613611799, 0.9010464353082633, 0.9759161892423923, 0.023617845745783083, 0.4449998983925705, 0.5172488924395381, 0.0] | [nan, 0.8666434932726657, 0.8860462410088557, 0.4516813574923211, 0.7742782740775649, 0.4555874524449895, nan, 0.6267926037830955, 0.6896407624091181, 0.1957204153277486, 0.8882182070612508, 0.46149838666308146, 0.0, 0.5469962267350659, 0.06421718273004798, 0.6011771207515888, 0.0, 0.06543011164763292, 0.79986647852113, 0.10526898843730527, 0.4713830230218466, 0.45188595346756627, 0.04203767801939388, nan, 0.1276553855846278, 0.42972506139948413, 0.441923808813104, 0.007075471698113208, 0.8884781477624152, 0.7456781431206605, 0.9535186762124032, 0.016432559463950374, 0.2430653450400151, 0.37996353686275436, 0.0] |
| 0.0523 | 21.0 | 8400 | 0.6334 | 0.5087 | 0.4256 | 0.8903 | [nan, 0.933221079502352, 0.9637948085900169, 0.5297546728971962, 0.8356436570172051, 0.6448230539257773, nan, 0.7465713167832686, 0.8749679745694359, 0.2327354074319566, 0.9465962111947419, 0.5354408495924919, 0.0, 0.6270897832817337, 0.14024467145920042, 0.8939972072481652, 0.009888751545117428, 0.05998481397114654, 0.9259419692666467, 0.10259275815824766, 0.6911110038285254, 0.5109028637249255, 0.044248282026928876, nan, 0.19286008512975422, 0.5704035170356414, 0.5006314949812767, 0.0, 0.9387582194599503, 0.9072224581646499, 0.9775237134023292, 0.011000766712254964, 0.4426019630555386, 0.48799979887931083, 0.0] | [nan, 0.8627899844290204, 0.898045292380419, 0.4429741700156492, 0.7733528050732301, 0.48122023215814036, nan, 0.6285033134107889, 0.6922586045743415, 0.2067303269489062, 0.888126363728484, 0.4555339601828019, 0.0, 0.512374046123361, 0.062230678829257376, 0.5926462119703566, 0.00044943820224719103, 0.05796624750145485, 0.8002256522783529, 0.08795100349163994, 0.4798915494731881, 0.45172247073689, 0.0420103434557751, nan, 0.13598869181318254, 0.4315342675118884, 0.4297071129707113, 0.0, 0.8889534278458562, 0.7430008362351238, 0.9537407288817968, 0.009678051537276564, 0.23964350552896518, 0.3711983987778357, 0.0] |
| 0.0715 | 21.25 | 8500 | 0.6366 | 0.5151 | 0.4287 | 0.8894 | [nan, 0.9370145031789949, 0.9615540919282511, 0.5349906542056074, 0.8234293246215806, 0.6427307923986297, nan, 0.7520265297434068, 0.877506286473407, 0.2407929077426571, 0.9458038701145451, 0.5871614390384458, 0.0, 0.6843137254901961, 0.1972505990667171, 0.8854890563096707, 0.054388133498145856, 0.06252454638284502, 0.9220868993644009, 0.11473699895693637, 0.6793299129694406, 0.505244648130675, 0.04341024638247947, nan, 0.19102018399011397, 0.5753257968283875, 0.5107132569630631, 0.0, 0.9400241164189752, 0.9050651936505135, 0.9789779094546415, 0.014533859670935389, 0.41945579060740923, 0.49523735034665384, 0.0] | [nan, 0.8636190041686136, 0.8961979040679402, 0.44008160621637177, 0.7735135302856915, 0.47552992149378714, nan, 0.6295369121222396, 0.6946632262523146, 0.2137970353477765, 0.8882677382290695, 0.4793581450054608, 0.0, 0.555406650473239, 0.08438545376065609, 0.5980720618958058, 0.002378506946321423, 0.06108823002737203, 0.7997681127577295, 0.0970839783417272, 0.47365876347968716, 0.44734126160727244, 0.041260653691952316, nan, 0.13688871396241267, 0.4310366799265186, 0.42952982613070945, 0.0, 0.8887487055026462, 0.7433844306901257, 0.9533070831491001, 0.012093141544284045, 0.23472485984284203, 0.3736148179836323, 0.0] |
| 0.0856 | 21.5 | 8600 | 0.6332 | 0.5104 | 0.4282 | 0.8891 | [nan, 0.9354302285089335, 0.9598914301992207, 0.5326285046728972, 0.8348257505275104, 0.6418013774311685, nan, 0.7519851631996333, 0.8757413294112065, 0.2316790256431501, 0.9473149777460632, 0.5441672841030707, 0.0, 0.6676986584107327, 0.19119687224114013, 0.8908797168279535, 0.0, 0.05576938182389443, 0.9230974918555517, 0.1150019040050332, 0.6832652332737915, 0.5057945396840957, 0.04410860941952064, nan, 0.19250308938624194, 0.5698984665305908, 0.50395515277747, 0.0040290088638195, 0.9408126308534799, 0.8986623443239606, 0.9766785258336341, 0.01867306975009325, 0.40035359385478264, 0.4951898635172656, 0.0] | [nan, 0.8652175117062043, 0.8949487144681932, 0.4437434730009742, 0.7611759319446382, 0.47865894832193984, nan, 0.6331643341293494, 0.6931150372692965, 0.2068423485899214, 0.8889820786499946, 0.4611976486594917, 0.0, 0.5675936485656636, 0.08603859250851305, 0.595085736597217, 0.0, 0.05421502748930971, 0.799696203512091, 0.09667497111998775, 0.4707822447654798, 0.4485026865801383, 0.041887733446519526, nan, 0.13581323258742614, 0.4329091328339933, 0.42695701145109816, 0.003957261574990107, 0.8887286680634571, 0.7476012702986532, 0.953293396822863, 0.014771330218834523, 0.23667139184546263, 0.3740649694565481, 0.0] | |
| 0.0426 | 22.25 | 8900 | 0.6388 | 0.5153 | 0.4321 | 0.8907 | [nan, 0.9365843032790866, 0.9619280328787767, 0.5323341121495327, 0.832118008177492, 0.6589330390083284, nan, 0.7530012289310712, 0.8876025999905109, 0.2356145656406645, 0.9495151391383951, 0.5967728657281633, 0.0, 0.6851909184726522, 0.16698196493883213, 0.8856433071377541, 0.0, 0.046160291152829054, 0.9249913955800083, 0.14087981589099158, 0.6780864102710397, 0.5070796622838727, 0.043214704732107936, nan, 0.19390361114925167, 0.577557963050191, 0.5263122908865303, 0.009266720386784852, 0.9401577082628303, 0.9045005405226523, 0.9759350190099954, 0.014261884039951924, 0.44343514397772765, 0.48190053464583205, 0.0] | [nan, 0.8638275353000382, 0.8975929370440341, 0.44847327680807825, 0.7680456934961463, 0.4896127563059361, nan, 0.6344922288860472, 0.6906430201049919, 0.21071058091286307, 0.8908914064913077, 0.4893922260291313, 0.0, 0.5741773684438103, 0.0915502696722445, 0.6133303348044865, 0.0, 0.045543787135107205, 0.799706519605589, 0.11493135050077327, 0.47303106132662764, 0.44896719237169413, 0.04119511090991399, nan, 0.13769769301273427, 0.43323479414732197, 0.4435750434181777, 0.008966861598440545, 0.8892865533176849, 0.7464162172003368, 0.9537521470921787, 0.012501163611760084, 0.24370386088743454, 0.37164396457569027, 0.0] |
| 0.0544 | 22.5 | 9000 | 0.6275 | 0.5126 | 0.4297 | 0.8902 | [nan, 0.9362912936349177, 0.962198079008307, 0.5305654205607476, 0.829452734049054, 0.6501778145136554, nan, 0.7606583485441561, 0.8785880343502396, 0.2379137495339492, 0.9477460490242178, 0.5748332921709064, 0.0, 0.6779153766769865, 0.15399167612561482, 0.8968792621939339, 0.0, 0.062053255832220565, 0.9268894385323623, 0.11712114438980778, 0.6830882170073133, 0.515366328868847, 0.046119894966199226, nan, 0.1939585335713305, 0.5666535824566913, 0.5097161596242051, 0.0064464141821112, 0.9399919952412273, 0.8983810519232679, 0.9745475341343337, 0.015694289029798168, 0.43490011989676686, 0.47604289457365206, 0.0] | [nan, 0.8648796447130465, 0.8972780355218145, 0.44448663694053075, 0.7723828909831303, 0.4856595115662902, nan, 0.6367705951823552, 0.693571040656192, 0.2097133467226584, 0.8885713515050402, 0.47493538294109644, 0.0, 0.5753448653382964, 0.07485745815707191, 0.589861603519713, 0.0, 0.060925449871465295, 0.7986432258569581, 0.09907840555757864, 0.4719490094091225, 0.45171147174755927, 0.04363338442835245, nan, 0.13716960245479792, 0.4304074481173985, 0.4370060790273556, 0.00631163708086785, 0.8878797422918536, 0.748175287257327, 0.9535688641919678, 0.013234083170064194, 0.2360317635381052, 0.36728912241605793, 0.0] |
| 0.0701 | 22.75 | 9100 | 0.6508 | 0.5132 | 0.4302 | 0.8902 | [nan, 0.9420095059141509, 0.9626173339520694, 0.5384521028037383, 0.8237863722622742, 0.6345902505663333, nan, 0.7493342571861443, 0.8728092233240025, 0.24462488089813164, 0.9462424874982255, 0.5649748909195687, 0.0, 0.6890092879256966, 0.18148568545844368, 0.8978859518087939, 0.0, 0.06417406331003063, 0.926905788482557, 0.10334608188877299, 0.6837845785184178, 0.5068636881640055, 0.044555561763226996, nan, 0.19329946450638474, 0.5856309206050139, 0.5353969555294587, 0.008058017727639, 0.9389002783925003, 0.9000722535382172, 0.9752872750044519, 0.01801255750341912, 0.4159604950313967, 0.4749814242696805, 0.0] | [nan, 0.8667971887550201, 0.8964523921395798, 0.43883250929953793, 0.7789739251684871, 0.4822597903246794, nan, 0.6338344499902683, 0.6949882507612449, 0.21506355392067597, 0.8897027195058894, 0.47454492022058187, 0.0, 0.5744214058332616, 0.09034404821697639, 0.5890266504761296, 0.0, 0.06334315397736083, 0.7983683031468644, 0.08797806890816708, 0.47160166966502776, 0.4468892814313033, 0.04230993686667728, nan, 0.13598253612549263, 0.43447527412791603, 0.442910823939144, 0.007836990595611285, 0.8890303591865106, 0.7479650947941834, 0.9538041433738902, 0.014260666277030976, 0.23761100470137558, 0.3677322595225377, 0.0] |
| 0.0588 | 23.0 | 9200 | 0.6510 | 0.5156 | 0.4306 | 0.8898 | [nan, 0.9386450845503147, 0.9615407102293612, 0.5321039719626168, 0.8252994992682097, 0.646236577683447, nan, 0.7500099107344458, 0.8891493096740523, 0.2356145656406645, 0.948320024675765, 0.5611467852144563, 0.0, 0.7061919504643963, 0.15790137470046664, 0.8929012145223095, 0.0, 0.06268164323305318, 0.9247904360655894, 0.12226195797943674, 0.6746470281016981, 0.5158947761834156, 0.04522599027878652, nan, 0.1926953178635178, 0.5791620871931753, 0.5486694289955906, 0.014504431909750202, 0.9393220200484532, 0.9030809791181759, 0.9764800062837624, 0.014337001118985454, 0.46371598691296306, 0.476005184444432, 0.0] | [nan, 0.8636880663267268, 0.8963496684957871, 0.4393286431075093, 0.7694031519559503, 0.48618816019454364, nan, 0.6323091767222339, 0.6843731284418411, 0.20910695246148756, 0.8901931512501616, 0.4713865836791148, 0.0, 0.594294150853272, 0.07763859605605854, 0.5971841386537511, 0.0, 0.061455525606469004, 0.799169285452784, 0.10285033809898536, 0.4708681854568623, 0.4517361674617981, 0.04280237937871778, nan, 0.1379100253532753, 0.432983014903532, 0.45285296269202635, 0.013830195927775643, 0.8892098290384068, 0.7459428984706676, 0.9536680185853351, 0.012051498108992573, 0.23353802067342136, 0.36591936147117593, 0.0] |
| 0.067 | 23.25 | 9300 | 0.6275 | 0.5128 | 0.4311 | 0.8905 | [nan, 0.9372797021893622, 0.9638153118797325, 0.5312441588785046, 0.8278251787794161, 0.6422768634184979, nan, 0.7515353020360958, 0.8786212459078616, 0.24139359542648825, 0.9490656742280216, 0.5420885815427677, 0.0, 0.7038183694530443, 0.17707150964812712, 0.8822822627784633, 0.0, 0.06734218312256172, 0.9252767953435341, 0.10501829500488419, 0.6879495810858851, 0.5059293320425944, 0.04416447846248394, nan, 0.19404091720444872, 0.5719029674988224, 0.5293478983403869, 0.008058017727639, 0.9393905631474131, 0.9031768115782158, 0.9770540451989742, 0.01500269385386879, 0.4205734723322969, 0.4884174036436365, 0.0] | [nan, 0.8641485198316792, 0.897149130251509, 0.4431534355853929, 0.7712457425720085, 0.4882715323914724, nan, 0.6318488634618116, 0.69528994349434, 0.21461061083181407, 0.890398769558611, 0.46117346313448776, 0.0, 0.5855585129217824, 0.08629909644108427, 0.608788204714529, 0.0, 0.0658912742737101, 0.7992632312490636, 0.09043857647998176, 0.47160302909046053, 0.44752081120336445, 0.04198645598194131, nan, 0.13798894682367646, 0.43383933729163815, 0.44664223751121745, 0.007836990595611285, 0.8889539638268134, 0.7463182889742939, 0.9538402391601662, 0.01284986599932556, 0.2406063988095238, 0.3716953276213374, 0.0] |
| 0.0513 | 23.5 | 9400 | 0.6472 | 0.5144 | 0.4306 | 0.8897 | [nan, 0.938401309042541, 0.9600648179629494, 0.5333469626168225, 0.832045261686822, 0.6450022850427629, nan, 0.7455948939896135, 0.883593490534706, 0.23551099879862464, 0.9506135691239773, 0.5523380258500041, 0.0, 0.6968524251805985, 0.18312523647370413, 0.8904413197376112, 0.0, 0.06160814808996413, 0.9256348385566595, 0.12978691700193712, 0.6801915871922148, 0.5208407367015084, 0.04416447846248394, nan, 0.1951942880681038, 0.5735463442717329, 0.5357736367463606, 0.010072522159548751, 0.9380115028759878, 0.9056712133078884, 0.9770508172388136, 0.017681006258029756, 0.4195573980369445, 0.4783152790270228, 0.0] | [nan, 0.8645788687513425, 0.8959992534632647, 0.44551363683824813, 0.7647562903055005, 0.48403962995403316, nan, 0.6342904860496079, 0.6900071507171095, 0.2094308344078099, 0.8896775711392028, 0.4683431642874594, 0.0, 0.5778034484233945, 0.08829968377523717, 0.5990191205946445, 0.0, 0.060376680693831467, 0.7987594181280973, 0.10780592458123607, 0.47080665968645763, 0.45253694794349175, 0.04196862307876085, nan, 0.13750677087363616, 0.4326699094290159, 0.44833404409174343, 0.009754194303550527, 0.8891644113783483, 0.7456061236432407, 0.9539508207140677, 0.014409173235161254, 0.23587072008774035, 0.3678274990977986, 0.0] |
| 0.0514 | 23.75 | 9500 | 0.6439 | 0.5126 | 0.4298 | 0.8893 | [nan, 0.9377822895762951, 0.9605358193045652, 0.5385, 0.8340916008081545, 0.6271635536295225, nan, 0.7452691324573968, 0.884822318166722, 0.22701851775135673, 0.9488086350085531, 0.537766526714415, 0.0, 0.6666150670794634, 0.20002522386177324, 0.8838085341300254, 0.0, 0.05781164087660042, 0.9238019884436897, 0.11829666054073742, 0.6694155391023081, 0.5142496967171933, 0.043549918989887706, nan, 0.19379376630509407, 0.5833176322813628, 0.5375905696749462, 0.014101531023368252, 0.9389680151020606, 0.9049790133806934, 0.9761012589582619, 0.02082556260101952, 0.414029953870227, 0.5005852053386369, 0.0] | [nan, 0.863411965165267, 0.894931428278196, 0.4402552004737254, 0.7611011560258087, 0.4837046157587918, nan, 0.6314089786667951, 0.6898753375504013, 0.2022476056909819, 0.8895664124405706, 0.4596777031068576, 0.0, 0.5673444293179922, 0.08523215821152193, 0.6083079089415631, 0.0, 0.056674965989886805, 0.7993862287218525, 0.09987768652804473, 0.4710007534678047, 0.450200875376809, 0.041379127295891285, nan, 0.1393342283999368, 0.4316562226473846, 0.44881423656073105, 0.013539651837524178, 0.8892954904899649, 0.7457058534465373, 0.9537927510495554, 0.016624966398544282, 0.24126375122858124, 0.37717282181124784, 0.0] |
| 0.0396 | 24.0 | 9600 | 0.6535 | 0.5114 | 0.4293 | 0.8894 | [nan, 0.9355970923117436, 0.9613217787436595, 0.5374941588785047, 0.8288621111896686, 0.642493049404965, nan, 0.7527694039253403, 0.878070882952982, 0.22343510501677782, 0.9446323372316829, 0.5478719025273731, 0.0, 0.6478844169246646, 0.1983856728465128, 0.8865769305708905, 0.0, 0.07386170240620009, 0.92611209153323, 0.1052169737909568, 0.6754384809956214, 0.5089943264670923, 0.04279568690988323, nan, 0.19272277907455718, 0.5795022766525357, 0.533735126631362, 0.008058017727639, 0.9392768622420797, 0.9018779025514876, 0.9758392561919, 0.014779932860872808, 0.4110833384137048, 0.4900487159002665, 0.0] | [nan, 0.8639528354166897, 0.8950065886128323, 0.44207385913246505, 0.7660355663095111, 0.48472638815638147, nan, 0.632634318964356, 0.6931134697057083, 0.20094633110411506, 0.8905903659512103, 0.4648726053472574, 0.0, 0.5535911115030201, 0.08658556723729839, 0.604755865918694, 0.0, 0.0724857392466211, 0.7980282230680995, 0.09017126154632008, 0.4707250951496855, 0.44738482499754295, 0.04074793201585233, nan, 0.13850404578646142, 0.43285457950063133, 0.4469182529964006, 0.007840062720501764, 0.8885988668670501, 0.746866946124605, 0.9537924535842215, 0.012023161337086795, 0.24114295250810605, 0.37191019096397804, 0.0] |
| 0.0572 | 24.25 | 9700 | 0.6468 | 0.5169 | 0.4312 | 0.8893 | [nan, 0.9401996856733055, 0.9583929096522826, 0.5344988317757009, 0.8275082400146594, 0.6494017622545427, nan, 0.7543103076809053, 0.8711154338852778, 0.24802187331703882, 0.9453213909924968, 0.5670947559068082, 0.0, 0.7040763673890609, 0.20204313280363223, 0.8891017730726765, 0.0, 0.06668761291336109, 0.9255172844843733, 0.1113677378764549, 0.6754443327730256, 0.5202249807001851, 0.044248282026928876, nan, 0.19305231360703007, 0.5827890301983566, 0.55261350291374, 0.014101531023368252, 0.9394324953961886, 0.9048990380903004, 0.9755035483352065, 0.0154197231547101, 0.45343331504399603, 0.47399118420979125, 0.0] | [nan, 0.863689319961114, 0.895499199129711, 0.4429491151299229, 0.765606502579043, 0.48571154804691785, nan, 0.6324972973597951, 0.6956526681114833, 0.21654760828284655, 0.8900625950293436, 0.47545424740738185, 0.0, 0.5803666368933691, 0.08725014977397745, 0.5992339680455242, 0.0, 0.06544361365913821, 0.7982999807741021, 0.09452243441114062, 0.4717078672807595, 0.4521680319629779, 0.04200588718873478, nan, 0.13927135130851676, 0.4339583670272156, 0.4507663389242337, 0.01348747591522158, 0.8884945203133995, 0.7465496843182982, 0.9537005332798949, 0.012399112712579277, 0.24028127759471044, 0.3662329926099869, 0.0] |
| 0.1 | 24.5 | 9800 | 0.6434 | 0.5135 | 0.4300 | 0.8895 | [nan, 0.9377224102212196, 0.9606645248290818, 0.5361588785046729, 0.8331230894215592, 0.6375564947567199, nan, 0.7494747310743753, 0.8814869288798216, 0.23789303616554125, 0.9491298161249899, 0.5208281880299662, 0.0, 0.7291537667698659, 0.1923319460209358, 0.8872670000649477, 0.0, 0.058754221977849345, 0.9251466166261608, 0.10029967383565953, 0.684280516653427, 0.5108906098741529, 0.04338231186099782, nan, 0.1931896196622271, 0.581302663945151, 0.5429748953047794, 0.014101531023368252, 0.939044218900316, 0.9053540699149504, 0.9762874046608516, 0.016517986655062374, 0.4174033205307972, 0.4717006430275368, 0.0] | [nan, 0.8641608155359141, 0.8958643122776131, 0.4417664033758718, 0.7644541831979321, 0.4846296892790795, nan, 0.6335999382179972, 0.6905137105945841, 0.21054850773630565, 0.8890883354259757, 0.44958072768618534, 0.0, 0.6023700925018117, 0.08546290069491146, 0.6030192343768966, 0.0, 0.057282891713891865, 0.7981027891830667, 0.08634672672073433, 0.470738722708764, 0.44815859378883993, 0.04122753457750405, nan, 0.1376066035521477, 0.4340720968586592, 0.4532255678035067, 0.01352918438345574, 0.888563607775072, 0.7458284701692807, 0.9538944088343424, 0.01350879014029907, 0.2349899322716456, 0.3667384437299315, 0.0] |
| 0.0547 | 24.75 | 9900 | 0.6482 | 0.5155 | 0.4313 | 0.8898 | [nan, 0.9397340904212859, 0.9603330836947732, 0.5307733644859813, 0.8309005858255233, 0.6429241895489165, nan, 0.7515697741559071, 0.8821369265075675, 0.23520029827250508, 0.948613379528076, 0.5628961883592657, 0.0, 0.7383384932920537, 0.19170134947660486, 0.8888176268104176, 0.0, 0.06747309716440185, 0.9241314709843229, 0.1176757893342605, 0.6804680836745651, 0.509839842170402, 0.04290742499580982, nan, 0.19313469724014828, 0.5775631967341812, 0.5366821032106535, 0.009669621273166801, 0.9403802717370998, 0.9035215326574961, 0.9734618635336802, 0.012358054623067678, 0.41701721229856326, 0.48626373626373626, 0.0] | [nan, 0.8640778611527823, 0.8958137823018933, 0.4460626314967881, 0.7641756445447411, 0.4858917928580605, nan, 0.6328187132466054, 0.6908867956078256, 0.20850548118768247, 0.8893168906380365, 0.47044860327507915, 0.0, 0.6030682345007797, 0.08536927829261444, 0.6011740028114567, 0.0, 0.06583048076431819, 0.7992350659678636, 0.09887388797306791, 0.4713607906006725, 0.44755617108819296, 0.040873892333484124, nan, 0.13801020408163264, 0.4335135793399971, 0.45185060816356987, 0.0093603744149766, 0.8886009280250379, 0.7464543006342957, 0.9536265277974683, 0.010431767147039596, 0.2352570275599578, 0.3719794479055262, 0.0] |
| 0.0627 | 25.0 | 10000 | 0.6463 | 0.5168 | 0.4317 | 0.8895 | [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0] | [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0] |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
IsaacSST/gpt2-xl-ft-d4-0.15-n-3
|
IsaacSST
| 2022-03-21T07:29:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-21T04:55:00Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d4-0.15-n-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d4-0.15-n-3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.3294 |
| No log | 2.0 | 312 | 1.3466 |
| No log | 3.0 | 468 | 1.4295 |
| 1.1304 | 4.0 | 624 | 1.4877 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
IsaacSST/gpt2-xl-ft-d4-0.3
|
IsaacSST
| 2022-03-21T04:24:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-21T01:38:11Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d4-0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d4-0.3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2334 |
| No log | 2.0 | 312 | 1.2392 |
| No log | 3.0 | 468 | 1.2944 |
| 1.1868 | 4.0 | 624 | 1.3401 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl_ft_mult_10k
|
beston91
| 2022-03-20T22:27:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-18T15:46:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_10k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 54 | 1.3358 |
| No log | 1.99 | 108 | 0.7486 |
| No log | 2.99 | 162 | 0.6997 |
| No log | 3.99 | 216 | 0.6916 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 25.89222526550293
### Dataset Size
Size: 5000
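A perplexity score like this is typically the exponentiated mean cross-entropy over held-out text; a minimal sketch of the computation (the evaluation passage below is a placeholder, not the card's actual eval set):
```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beston91/gpt2-xl_ft_mult_10k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

enc = tokenizer("Replace this with a held-out evaluation passage.", return_tensors="pt")
with torch.no_grad():
    # labels=input_ids makes the model return the mean cross-entropy loss
    loss = model(**enc, labels=enc["input_ids"]).loss

print(math.exp(loss.item()))  # perplexity = exp(mean cross-entropy)
```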
|
jcai1/similarity6
|
jcai1
| 2022-03-20T21:38:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-20T21:32:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: similarity6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# similarity6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 393 | 0.2287 | 0.9341 | 0.9112 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KoboldAI/GPT-Neo-2.7B-Shinen
|
KoboldAI
| 2022-03-20T18:49:18Z | 669 | 22 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: en
license: mit
---
# GPT-Neo 2.7B - Shinen
## Model Description
GPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7B-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2>, <theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
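Because the training stories carry theme tags, a prompt can mirror the same tag format to steer generation; a minimal sketch reusing the `generator` above (the theme names here are illustrative, not from the card):
```py
>>> generator("[Theme: romance, drama]\nShe was staring at me", do_sample=True, min_length=50)
```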
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.
As with all language models, it is hard to predict in advance how GPT-Neo-Shinen will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
```
|
KoboldAI/GPT-J-6B-Shinen
|
KoboldAI
| 2022-03-20T18:48:45Z | 1,746 | 24 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"en",
"arxiv:2101.00027",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: en
license: mit
---
# GPT-J 6B - Shinen
## Model Description
GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7B-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2>, <theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model uses the following model as base:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
|
beston91/gpt2-xl_ft_mult_5k
|
beston91
| 2022-03-20T17:31:57Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-19T08:50:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_5k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.3035 |
| No log | 1.99 | 54 | 1.2709 |
| No log | 2.99 | 81 | 0.7482 |
| No log | 3.99 | 108 | 0.6758 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 21.267963409423828
### Dataset Size
Size: 5000
|
cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga
|
cammy
| 2022-03-20T14:36:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-20T13:26:27Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-1000-lit-evalMA-ga
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-1000-lit-evalMA-ga
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6852
- Rouge1: 25.789
- Rouge2: 11.0694
- Rougel: 20.7716
- Rougelsum: 22.4851
- Gen Len: 46.32
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 250 | 1.7061 | 25.8286 | 10.8156 | 20.9502 | 22.6588 | 44.36 |
| 1.4533 | 2.0 | 500 | 1.6876 | 26.0862 | 11.5197 | 21.1282 | 23.0963 | 45.65 |
| 1.4533 | 3.0 | 750 | 1.6852 | 25.789 | 11.0694 | 20.7716 | 22.4851 | 46.32 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
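A minimal usage sketch for summarization with this checkpoint (the article text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga"
)
article = "Replace this with the article or passage to summarize."
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```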
|
KoboldAI/GPT-J-6B-Janeway
|
KoboldAI
| 2022-03-20T12:59:44Z | 4,477 | 13 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"en",
"arxiv:2101.00027",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: en
license: mit
---
# GPT-J 6B - Janeway
## Model Description
GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.
## Training data
The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
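Since parts of the training data were prepended with genre tags, prompts can reuse the same format to nudge the output toward a genre; a minimal sketch reusing the `generator` above (the genre names are illustrative):
```py
>>> generator("[Genre: science fiction,adventure]\nWelcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
```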
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model uses the following model as base:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
|
KoboldAI/GPT-Neo-2.7B-Janeway
|
KoboldAI
| 2022-03-20T12:57:50Z | 124 | 6 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: en
license: mit
---
# GPT-Neo 2.7B - Janeway
## Model Description
GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model.
## Training data
The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case GPT-Neo may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
```
|
mitiku/AmharicWICPostag10Tags
|
mitiku
| 2022-03-20T10:11:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-06T20:46:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: AmharicWICPostag10Tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicWICPostag10Tags
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mitiku/AmharicCacoPostag
|
mitiku
| 2022-03-20T10:11:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-06T20:34:40Z |
---
tags:
- generated_from_trainer
model-index:
- name: AmharicCacoPostag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicCacoPostag
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mitiku/AmharicWICPostag
|
mitiku
| 2022-03-20T10:10:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-06T20:42:53Z |
---
tags:
- generated_from_trainer
model-index:
- name: AmharicWICPostag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicWICPostag
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mrp/simcse-model-wangchanberta
|
mrp
| 2022-03-20T09:00:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"feature-extraction",
"arxiv:2104.08821",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-20T08:34:14Z |
# mrp/simcse-model-wangchanberta
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) with mBERT as the baseline model, training the model on Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('mrp/simcse-model-wangchanberta')
embeddings = model.encode(sentences)
print(embeddings)
```
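For semantic search or clustering, the resulting embeddings can be compared with cosine similarity; a short follow-up to the snippet above using the `util` helper shipped with sentence-transformers:
```python
from sentence_transformers import util

# Cosine similarity between the two sentence embeddings computed above
score = util.cos_sim(embeddings[0], embeddings[1])
print(score)
```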
|
espnet/ftshijt_espnet2_asr_dsing_hubert_conformer
|
espnet
| 2022-03-20T04:46:53Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:dsing",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-20T04:45:28Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- dsing
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_dsing_hubert_conformer`
This model was trained by jiatong using dsing recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/dsing/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_dsing_hubert_conformer
```
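Inference can also be run from Python with the `espnet_model_zoo` integration; a minimal sketch, assuming a 16 kHz mono recording (`audio.wav` is a placeholder path):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and builds the model (requires: pip install espnet espnet_model_zoo)
speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_espnet2_asr_dsing_hubert_conformer"
)

speech, rate = soundfile.read("audio.wav")  # placeholder: 16 kHz mono WAV
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```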
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 23:02:37 EDT 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `c1ed71c6899e54c0b3dad82687886b1183cd0885`
- Commit date: `Wed Mar 16 23:34:49 2022 -0400`
## asr_train_asr_conformer7_hubert_ll60k_large_raw_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|4018|83.6|9.4|7.0|6.4|22.8|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|4632|81.4|12.3|6.3|4.5|23.1|52.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|18692|88.5|3.1|8.4|5.9|17.4|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|21787|87.9|4.3|7.8|4.5|16.6|52.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|6097|82.2|7.1|10.7|5.7|23.5|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|7736|81.7|9.2|9.1|4.0|22.3|52.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer7_hubert_ll60k_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_ll60k_large_raw_bpe500_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train30_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train30_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁I
- ''''
- ▁YOU
- S
- T
- ▁THE
- M
- ▁ME
- ▁A
- ▁AND
- ▁TO
- E
- A
- ING
- D
- ▁MY
- ▁
- O
- ▁IT
- I
- N
- RE
- Y
- ▁BE
- ▁IN
- ▁ON
- ▁LOVE
- U
- ▁WE
- LL
- H
- ▁YOUR
- ▁S
- IN
- ▁OF
- ▁DO
- ▁THAT
- ▁ALL
- L
- ▁DON
- ▁OH
- ▁LIKE
- ▁KNOW
- ▁FOR
- ▁CAN
- ▁JUST
- P
- ▁BUT
- ED
- K
- ▁WHEN
- ▁SO
- R
- ▁GO
- ▁WHAT
- ▁C
- ▁WITH
- W
- ▁F
- C
- ▁NO
- ER
- ▁ONE
- ▁LET
- VE
- ES
- ▁NOW
- ▁BABY
- G
- ▁GOT
- ▁COME
- CAUSE
- LE
- B
- ▁B
- AR
- ▁UP
- ▁'
- ▁W
- ▁SEE
- ▁TIME
- ▁ARE
- ▁G
- ▁LOOK
- ▁THIS
- F
- ▁IS
- ▁NEVER
- ▁M
- ▁P
- AN
- ▁WAS
- ▁WAY
- ▁IF
- OR
- ▁SAY
- V
- ▁R
- ▁T
- ▁DOWN
- RA
- ▁THERE
- ▁HEART
- ▁NOT
- RO
- ▁WILL
- ▁OUT
- CE
- ▁WANT
- ▁YEAH
- ▁HAVE
- ▁GIVE
- ▁TOO
- ▁GONNA
- ▁HOW
- ▁NEED
- ▁GET
- ▁TAKE
- ▁EVERY
- ▁FEEL
- ▁HE
- EN
- ▁FROM
- ▁HA
- ▁K
- ▁SHE
- 'ON'
- ▁DI
- RI
- ▁ONLY
- NE
- ▁WHO
- ▁AWAY
- ▁E
- ▁D
- ▁LIFE
- ▁MAKE
- IC
- ▁BACK
- ▁WHERE
- ▁MADE
- ▁DAY
- ▁HERE
- ▁LO
- ▁HER
- ▁AS
- ▁GOOD
- ▁WANNA
- ▁OOH
- ▁TELL
- LY
- TH
- ▁WON
- ▁LIGHT
- ▁KEEP
- ▁MA
- ▁LA
- ▁SH
- ▁WORLD
- ▁MORE
- ▁LI
- AL
- ▁COULD
- ▁GIRL
- ▁NOTHING
- ▁EVER
- ▁THINK
- IE
- ▁BY
- ▁AT
- ▁TONIGHT
- ▁THEY
- ▁CALL
- ▁HO
- ▁WOULD
- IL
- ▁OUR
- ▁FALL
- ▁NIGHT
- ▁THAN
- ▁DE
- ▁SOME
- ▁WAIT
- ▁RIGHT
- ▁RE
- ▁HALLELUJAH
- ▁TH
- NG
- ▁CO
- ▁WERE
- ▁TALK
- ET
- ▁BO
- ▁HOLD
- UR
- ▁BEEN
- ▁US
- ▁PA
- VER
- ▁EYES
- ▁DREAM
- ▁SONG
- ▁SHOULD
- ▁STILL
- ▁OVER
- TA
- ▁ANYMORE
- IGHT
- ▁STAY
- ▁BETTER
- LESS
- ▁THROUGH
- ▁LITTLE
- X
- ▁GONE
- ▁AIN
- ▁DA
- ▁HOLDING
- ▁HURT
- ▁TRY
- ▁FIND
- Z
- DE
- ▁LAST
- ▁SAID
- ▁ALWAYS
- ▁BODY
- ▁MIND
- ▁CRY
- ▁EVEN
- ▁RUN
- ▁HOPE
- ▁WITHOUT
- ▁MISS
- ▁ABOUT
- ▁HAND
- ▁J
- ▁AGAIN
- ▁THOUGH
- ▁NAH
- ▁LIVE
- ▁BA
- ▁OLD
- ▁HEAD
- ▁FIRE
- ▁MAN
- ▁SOMETHING
- ▁WHY
- THER
- ▁HOME
- ▁OR
- ▁INSIDE
- ▁NEW
- ▁HEY
- TION
- ▁EVERYTHING
- ▁HAD
- ▁SOMETIMES
- ▁HARD
- ▁TOUCH
- ▁HEAR
- ▁AM
- ▁MUCH
- ▁LONG
- ▁STAR
- GETTING
- ▁WALK
- ▁PEOPLE
- ▁BEFORE
- ▁CLOSE
- ▁TWO
- ▁FAR
- ▁SHOW
- ▁STAND
- ▁LOSE
- ▁HELP
- ▁NAME
- ▁BOY
- ▁TRUE
- ▁PLAY
- ▁DARK
- ▁THINGS
- ▁NA
- ▁TEAR
- ▁END
- ▁NOBODY
- ▁SEA
- ▁ROCKABYE
- ▁BELIEVE
- ▁BROKE
- ▁AROUND
- ▁START
- ▁KISS
- ▁FEELING
- ▁BREAK
- ▁SOMEONE
- ▁FRIEND
- ▁ALONE
- ▁BEAUTIFUL
- ▁CRAZY
- ▁OWN
- OSE
- ▁STOP
- ▁LOST
- ▁HIM
- ▁BAD
- ▁CHANCE
- ▁REALLY
- ▁WISH
- ▁MOVE
- ▁SKY
- ▁PLACE
- AKE
- ▁LEAVE
- ▁YA
- ▁STRONG
- ▁PUT
- ▁OPEN
- ▁WRONG
- ▁COLD
- OCK
- ▁USED
- ▁FOUND
- ▁LONELY
- ▁DANCE
- EACH
- ▁ANOTHER
- ▁SIDE
- ▁UNDER
- ▁MATTER
- ▁THESE
- ▁CARE
- ▁MINE
- ▁SHINE
- ▁AFRAID
- ▁TURN
- ▁PLEASE
- ▁SUN
- ▁DIAMOND
- ▁UNTIL
- ▁FACE
- ▁LEARN
- ▁TRUST
- ▁WONDER
- ▁BREATH
- ATE
- ▁SORRY
- ▁HU
- ▁WATCH
- ▁LATE
- ROUND
- ▁ARMS
- ▁PERFECT
- ▁MAYBE
- ▁PULL
- ▁REMEMBER
- ▁FIGHT
- ▁MYSELF
- ▁INTO
- ▁DARLING
- ▁THUNDER
- ▁FOLLOW
- ▁REASON
- ▁BURN
- ▁HIS
- ▁MUST
- ▁FREE
- ▁FLASHLIGHT
- ▁1
- ▁ENOUGH
- ▁DRINK
- ▁WORDS
- ▁HIDE
- ▁UN
- ▁FORGET
- ▁SURE
- ▁CHANGE
- ▁SMILE
- ▁PROMISE
- ▁FOREVER
- '2'
- ▁SWEET
- ▁SAME
- ▁OOOH
- ▁PART
- ▁SOMEBODY
- NESS
- ▁BRIGHT
- ▁HEAVEN
- ▁DEEP
- ▁HIGH
- ▁INSTEAD
- ▁MOMENT
- ▁ALONG
- ▁ALRIGHT
- ▁SLOW
- ▁TOMORROW
- ▁SOUL
- ▁QU
- ▁PUSH
- ▁CHANDELIER
- ▁LEFT
- SIDE
- ▁TOLD
- ▁KNEW
- READY
- ▁LOVING
- ▁SAW
- '3'
- ▁WORK
- ▁DANCING
- ▁THREE
- ▁SAVE
- ▁SHOOT
- ▁LEAD
- ▁SKI
- ▁WILD
- ▁WIND
- ▁WHILE
- ▁EDGE
- ▁HAPPY
- ▁FEAR
- STUCK
- ▁MOST
- ▁LISTEN
- ▁WOAH
- ▁FIRST
- ▁JOLENE
- ▁VOICE
- ▁COMP
- ▁MILLION
- FUL
- ▁OOOOOH
- ▁CAME
- ▁RISE
- ▁NEXT
- ▁COUNT
- ▁MOUNTAIN
- ▁ROOM
- ▁BLUE
- ▁HIT
- ▁RAISE
- J
- ▁THOUSAND
- ▁SHAP
- ▁TREAT
- ▁DRY
- ▁FINALLY
- ▁TITANIUM
- ▁CARRY
- ▁TRUTH
- ▁WATER
- ▁MORNING
- TIME
- ▁BELONG
- ▁UMA
- ▁ALIVE
- ▁ELSE
- ▁ANGEL
- ▁BRAND
- ▁APART
- ▁EVERYBODY
- ▁SOUND
- ▁GUESS
- ▁PRAY
- ▁FAITH
- ▁AFTER
- ▁THROW
- ▁TRIED
- ▁SLEEP
- ▁FOOL
- ▁DISCOVERING
- ▁FUCK
- ▁TASTE
- ▁UNDERSTAND
- ▁SHAME
- ▁POWER
- ▁WELCOME
- ▁FELT
- ▁SAFE
- ▁DESERVE
- ▁GAME
- ▁SUPERMA
- ▁SWEAR
- ▁BETWEEN
- ▁GLASS
- ▁CATCH
- ▁TOGETHER
- '0'
- '4'
- '6'
- '5'
- '1'
- '8'
- '7'
- '9'
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d2
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
beston91/gpt2-xl_ft_mult_1k
|
beston91
| 2022-03-19T23:56:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-18T23:49:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_1k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 6.7968 |
| No log | 1.91 | 10 | 6.6621 |
| No log | 2.91 | 15 | 6.4335 |
| No log | 3.91 | 20 | 6.1137 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl-ft-logits-1k
|
beston91
| 2022-03-19T22:46:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-18T12:21:42Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-logits-1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-logits-1k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 5.5302 |
| No log | 1.91 | 10 | 5.5310 |
| No log | 2.91 | 15 | 5.5323 |
| No log | 3.91 | 20 | 5.5341 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59481430053711
### Dataset Size
Size: 5000
|
Ketzu/koelectra-sts-v0.5
|
Ketzu
| 2022-03-19T22:19:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-19T13:07:38Z |
---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.5
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.87026647480689
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.5
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0213
- Pearson: 0.9958
- Spearmanr: 0.8703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|
| 0.058 | 1.0 | 6250 | 0.0428 | 0.9915 | 0.8702 |
| 0.0433 | 2.0 | 12500 | 0.0448 | 0.9911 | 0.8685 |
| 0.0362 | 3.0 | 18750 | 0.0261 | 0.9950 | 0.8705 |
| 0.0107 | 4.0 | 25000 | 0.0234 | 0.9953 | 0.8702 |
| 0.0075 | 5.0 | 31250 | 0.0213 | 0.9958 | 0.8703 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
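Given the Pearson/Spearman metrics, the checkpoint presumably scores sentence pairs with a single-logit regression head; a minimal usage sketch under that assumption (the Korean sentence pair is a placeholder):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Ketzu/koelectra-sts-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder sentence pair to score for semantic similarity
inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```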
|
msamogh/autonlp-cai-out-of-scope-649919118
|
msamogh
| 2022-03-19T21:40:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-19T21:40:15Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 0.3996916853309825
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919118
- CO2 Emissions (in grams): 0.3996916853309825
## Validation Metrics
- Loss: 0.48289698362350464
- Accuracy: 0.8064516129032258
- Precision: 0.828125
- Recall: 0.8833333333333333
- AUC: 0.8353535353535354
- F1: 0.8548387096774193
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919118
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919118", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919118", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
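The raw `outputs.logits` can be mapped to class probabilities with a softmax; a short follow-up to the snippet above (label names come from the model config):
```
import torch

probs = torch.softmax(outputs.logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(model.config.id2label[pred], probs.max().item())
```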
|
huggingtweets/planetmoney
|
huggingtweets
| 2022-03-19T20:19:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/planetmoney/1647721191942/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/473888336449269761/vIurMh9f_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NPR's Planet Money</div>
<div style="text-align: center; font-size: 14px;">@planetmoney</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NPR's Planet Money.
| Data | NPR's Planet Money |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 601 |
| Short tweets | 37 |
| Tweets kept | 2608 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7jiqlr8t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @planetmoney's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1t6h63jy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1t6h63jy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/planetmoney')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vinaykudari/distilGPT-ft-eli5
|
vinaykudari
| 2022-03-19T17:24:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-19T16:05:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilGPT-ft-eli5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilGPT-ft-eli5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 281 | 5.8277 |
| 5.7427 | 2.0 | 562 | 5.7525 |
| 5.7427 | 3.0 | 843 | 5.7016 |
| 5.5614 | 4.0 | 1124 | 5.6593 |
| 5.5614 | 5.0 | 1405 | 5.6273 |
| 5.4408 | 6.0 | 1686 | 5.6029 |
| 5.4408 | 7.0 | 1967 | 5.5855 |
| 5.3522 | 8.0 | 2248 | 5.5739 |
| 5.2948 | 9.0 | 2529 | 5.5670 |
| 5.2948 | 10.0 | 2810 | 5.5643 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|