Dataset schema (one record per model repository; each record lists the fields below in order, pipe-separated, with the model-card README and a 32-character hash at the end):

| Column | Type | Observed range / classes |
|---|---|---|
| repo_id | string | length 4–110 |
| author | string (nullable) | length 2–27 |
| model_type | string (nullable) | length 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string (nullable) | length 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | string (nullable) | length 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2–30 |
| languages | string (nullable) | length 4–1.63k |
| datasets | string (nullable) | length 2–2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | length 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | length 0–598k |
| hash | string | length 32 |
DioLiu/distilbert-base-uncased-finetuned-sst2-nostop
|
DioLiu
|
distilbert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,484 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-nostop
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0701
- Accuracy: 0.9888
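The card lists the `text-classification` pipeline, so inference with this checkpoint would look roughly like the sketch below (a minimal example assuming the standard `transformers` pipeline API; the input sentence is hypothetical and the label names come from the model config):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; the task matches the card's pipeline tag.
classifier = pipeline("text-classification",
                      model="DioLiu/distilbert-base-uncased-finetuned-sst2-nostop")

# Hypothetical input; returns a label/score pair defined by the model config.
print(classifier("A thoroughly enjoyable film with a clever script."))
```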
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.125 | 1.0 | 1116 | 0.0975 | 0.9743 |
| 0.0599 | 2.0 | 2232 | 0.0692 | 0.9840 |
| 0.0191 | 3.0 | 3348 | 0.0570 | 0.9871 |
| 0.0109 | 4.0 | 4464 | 0.0660 | 0.9882 |
| 0.0092 | 5.0 | 5580 | 0.0701 | 0.9888 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
8fb8dacecb52053bb83d86c6171d71b7
|
Ashraf-kasem/custom_gpt2_frames_text_continue
|
Ashraf-kasem
|
gpt2
| 16 | 18 |
transformers
| 0 |
text-generation
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 5,535 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Ashraf-kasem/custom_gpt2_frames_text_continue
This model is a fine-tuned version of [Ashraf-kasem/custom_gpt2_frames_text_continue](https://huggingface.co/Ashraf-kasem/custom_gpt2_frames_text_continue) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6337
- Validation Loss: 2.3028
- Epoch: 99
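The repository flags indicate TensorFlow-only weights, so a usage sketch would load the TF classes (a minimal example under that assumption; the prompt and generation settings are illustrative, not from the card):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "Ashraf-kasem/custom_gpt2_frames_text_continue"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt; sampling parameters are illustrative only.
inputs = tokenizer("The next frame shows", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```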
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'LinearWarmup', 'config': {'after_warmup_lr_sched': {'initial_learning_rate': 5e-05, 'decay_steps': 628900, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'warmup_steps': 125780, 'warmup_learning_rate': 0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0060 | 2.0768 | 0 |
| 1.0147 | 2.0771 | 1 |
| 1.0238 | 2.0821 | 2 |
| 1.0331 | 2.0851 | 3 |
| 1.0422 | 2.0870 | 4 |
| 1.0525 | 2.0945 | 5 |
| 1.0618 | 2.1005 | 6 |
| 1.0718 | 2.1014 | 7 |
| 1.0823 | 2.1056 | 8 |
| 1.0921 | 2.1099 | 9 |
| 1.1028 | 2.1106 | 10 |
| 1.1127 | 2.1127 | 11 |
| 1.1230 | 2.1183 | 12 |
| 1.1329 | 2.1207 | 13 |
| 1.1423 | 2.1270 | 14 |
| 1.1521 | 2.1234 | 15 |
| 1.1614 | 2.1283 | 16 |
| 1.1700 | 2.1236 | 17 |
| 1.1784 | 2.1320 | 18 |
| 1.1864 | 2.1359 | 19 |
| 1.1873 | 2.1272 | 20 |
| 1.1766 | 2.1250 | 21 |
| 1.1652 | 2.1260 | 22 |
| 1.1537 | 2.1224 | 23 |
| 1.1415 | 2.1278 | 24 |
| 1.1296 | 2.1254 | 25 |
| 1.1178 | 2.1213 | 26 |
| 1.1059 | 2.1301 | 27 |
| 1.0950 | 2.1253 | 28 |
| 1.0838 | 2.1264 | 29 |
| 1.0729 | 2.1273 | 30 |
| 1.0625 | 2.1355 | 31 |
| 1.0519 | 2.1345 | 32 |
| 1.0414 | 2.1364 | 33 |
| 1.0317 | 2.1324 | 34 |
| 1.0217 | 2.1410 | 35 |
| 1.0126 | 2.1428 | 36 |
| 1.0027 | 2.1427 | 37 |
| 0.9936 | 2.1494 | 38 |
| 0.9846 | 2.1502 | 39 |
| 0.9752 | 2.1490 | 40 |
| 0.9665 | 2.1501 | 41 |
| 0.9582 | 2.1552 | 42 |
| 0.9497 | 2.1533 | 43 |
| 0.9411 | 2.1621 | 44 |
| 0.9331 | 2.1618 | 45 |
| 0.9248 | 2.1655 | 46 |
| 0.9172 | 2.1755 | 47 |
| 0.9093 | 2.1759 | 48 |
| 0.9014 | 2.1751 | 49 |
| 0.8942 | 2.1813 | 50 |
| 0.8867 | 2.1831 | 51 |
| 0.8795 | 2.1856 | 52 |
| 0.8723 | 2.1909 | 53 |
| 0.8651 | 2.1950 | 54 |
| 0.8581 | 2.1955 | 55 |
| 0.8511 | 2.2007 | 56 |
| 0.8444 | 2.2002 | 57 |
| 0.8380 | 2.2078 | 58 |
| 0.8312 | 2.2077 | 59 |
| 0.8246 | 2.2161 | 60 |
| 0.8186 | 2.2103 | 61 |
| 0.8120 | 2.2180 | 62 |
| 0.8053 | 2.2202 | 63 |
| 0.7994 | 2.2232 | 64 |
| 0.7934 | 2.2290 | 65 |
| 0.7872 | 2.2301 | 66 |
| 0.7816 | 2.2327 | 67 |
| 0.7757 | 2.2369 | 68 |
| 0.7698 | 2.2408 | 69 |
| 0.7640 | 2.2439 | 70 |
| 0.7582 | 2.2451 | 71 |
| 0.7528 | 2.2505 | 72 |
| 0.7475 | 2.2524 | 73 |
| 0.7420 | 2.2520 | 74 |
| 0.7366 | 2.2561 | 75 |
| 0.7313 | 2.2616 | 76 |
| 0.7260 | 2.2628 | 77 |
| 0.7211 | 2.2654 | 78 |
| 0.7158 | 2.2701 | 79 |
| 0.7107 | 2.2704 | 80 |
| 0.7061 | 2.2743 | 81 |
| 0.7008 | 2.2749 | 82 |
| 0.6962 | 2.2769 | 83 |
| 0.6916 | 2.2813 | 84 |
| 0.6869 | 2.2838 | 85 |
| 0.6823 | 2.2853 | 86 |
| 0.6780 | 2.2867 | 87 |
| 0.6737 | 2.2883 | 88 |
| 0.6691 | 2.2921 | 89 |
| 0.6651 | 2.2931 | 90 |
| 0.6608 | 2.2946 | 91 |
| 0.6568 | 2.2957 | 92 |
| 0.6533 | 2.2984 | 93 |
| 0.6494 | 2.2981 | 94 |
| 0.6459 | 2.2994 | 95 |
| 0.6425 | 2.3006 | 96 |
| 0.6395 | 2.3019 | 97 |
| 0.6363 | 2.3026 | 98 |
| 0.6337 | 2.3028 | 99 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
6b233fc91eac79d92c8ee01ffe7fe71d
|
blackp2/finetuning-sentiment-model-3000-samples
|
blackp2
|
distilbert
| 16 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,053 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3230
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
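A rough reconstruction of this setup with the `Trainer` API is sketched below. Only the hyperparameters listed above are taken from the card; the base checkpoint, label count, and dataset handling are assumptions, and the optimizer/scheduler lines correspond to the `Trainer` defaults, so they are not set explicitly:
```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed base checkpoint and binary sentiment labels (positive/negative).
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=2,
)

# train_dataset / eval_dataset would be tokenized imdb subsets; left as placeholders here.
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=None, eval_dataset=None)
# trainer.train()
```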
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
815df848f549df427f1aa3040d2047cc
|
responsibility-framing/predict-perception-bert-cause-object
|
responsibility-framing
|
bert
| 12 | 19 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 10,465 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-object
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4120
- Rmse: 1.0345
- Rmse Cause::a Causata da un oggetto (es. una pistola): 1.0345
- Mae: 0.6181
- Mae Cause::a Causata da un oggetto (es. una pistola): 0.6181
- R2: 0.3837
- R2 Cause::a Causata da un oggetto (es. una pistola): 0.3837
- Cos: 0.9130
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.8986
- Rsa: nan
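The regression-style metrics reported above (RMSE, MAE, R2) can be reproduced from a set of predictions with standard scikit-learn calls; a small sketch with made-up numbers (the cosine/pair/rank/neighbors/RSA scores are task-specific and not covered here):
```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Hypothetical gold perception scores and model predictions.
y_true = np.array([1.0, 0.0, 2.5, 1.5])
y_pred = np.array([0.8, 0.4, 2.1, 1.6])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print(f"rmse={rmse:.4f} mae={mae:.4f} r2={r2:.4f}")
```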
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un oggetto (es. una pistola) | Mae | Mae Cause::a Causata da un oggetto (es. una pistola) | R2 | R2 Cause::a Causata da un oggetto (es. una pistola) | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------:|:------:|:----------------------------------------------------:|:-------:|:---------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0824 | 1.0 | 15 | 0.6651 | 1.3143 | 1.3143 | 1.0930 | 1.0930 | 0.0052 | 0.0052 | 0.3043 | 0.0 | 0.5 | 0.4393 | nan |
| 0.9574 | 2.0 | 30 | 0.7088 | 1.3568 | 1.3568 | 1.1945 | 1.1945 | -0.0601 | -0.0601 | 0.0435 | 0.0 | 0.5 | 0.3380 | nan |
| 0.8151 | 3.0 | 45 | 0.6300 | 1.2791 | 1.2791 | 1.0206 | 1.0206 | 0.0577 | 0.0577 | 0.3043 | 0.0 | 0.5 | 0.3613 | nan |
| 0.6401 | 4.0 | 60 | 0.4871 | 1.1247 | 1.1247 | 0.7285 | 0.7285 | 0.2715 | 0.2715 | 0.5652 | 0.0 | 0.5 | 0.6424 | nan |
| 0.448 | 5.0 | 75 | 0.5005 | 1.1401 | 1.1401 | 0.7216 | 0.7216 | 0.2514 | 0.2514 | 0.4783 | 0.0 | 0.5 | 0.6077 | nan |
| 0.2893 | 6.0 | 90 | 0.4761 | 1.1119 | 1.1119 | 0.7237 | 0.7237 | 0.2879 | 0.2879 | 0.5652 | 0.0 | 0.5 | 0.6348 | nan |
| 0.174 | 7.0 | 105 | 0.4771 | 1.1131 | 1.1131 | 0.6836 | 0.6836 | 0.2865 | 0.2865 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.1383 | 8.0 | 120 | 0.4313 | 1.0583 | 1.0583 | 0.6462 | 0.6462 | 0.3550 | 0.3550 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.1105 | 9.0 | 135 | 0.4660 | 1.1001 | 1.1001 | 0.6737 | 0.6737 | 0.3030 | 0.3030 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0903 | 10.0 | 150 | 0.4866 | 1.1241 | 1.1241 | 0.7192 | 0.7192 | 0.2723 | 0.2723 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan |
| 0.0571 | 11.0 | 165 | 0.4361 | 1.0642 | 1.0642 | 0.6130 | 0.6130 | 0.3478 | 0.3478 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0623 | 12.0 | 180 | 0.4578 | 1.0904 | 1.0904 | 0.6844 | 0.6844 | 0.3152 | 0.3152 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.0526 | 13.0 | 195 | 0.4605 | 1.0936 | 1.0936 | 0.6697 | 0.6697 | 0.3112 | 0.3112 | 0.6522 | 0.0 | 0.5 | 0.6785 | nan |
| 0.0472 | 14.0 | 210 | 0.4440 | 1.0738 | 1.0738 | 0.6589 | 0.6589 | 0.3360 | 0.3360 | 0.7391 | 0.0 | 0.5 | 0.7327 | nan |
| 0.0492 | 15.0 | 225 | 0.4593 | 1.0922 | 1.0922 | 0.6812 | 0.6812 | 0.3130 | 0.3130 | 0.7391 | 0.0 | 0.5 | 0.6833 | nan |
| 0.0389 | 16.0 | 240 | 0.4195 | 1.0437 | 1.0437 | 0.6252 | 0.6252 | 0.3726 | 0.3726 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0396 | 17.0 | 255 | 0.4087 | 1.0302 | 1.0302 | 0.6119 | 0.6119 | 0.3888 | 0.3888 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0328 | 18.0 | 270 | 0.4274 | 1.0535 | 1.0535 | 0.6457 | 0.6457 | 0.3608 | 0.3608 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0345 | 19.0 | 285 | 0.4306 | 1.0574 | 1.0574 | 0.6576 | 0.6576 | 0.3560 | 0.3560 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0328 | 20.0 | 300 | 0.4067 | 1.0277 | 1.0277 | 0.6160 | 0.6160 | 0.3918 | 0.3918 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0344 | 21.0 | 315 | 0.4056 | 1.0263 | 1.0263 | 0.5948 | 0.5948 | 0.3934 | 0.3934 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0312 | 22.0 | 330 | 0.4236 | 1.0488 | 1.0488 | 0.6277 | 0.6277 | 0.3665 | 0.3665 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0241 | 23.0 | 345 | 0.4272 | 1.0533 | 1.0533 | 0.6444 | 0.6444 | 0.3610 | 0.3610 | 0.8261 | 0.0 | 0.5 | 0.7431 | nan |
| 0.0302 | 24.0 | 360 | 0.4046 | 1.0250 | 1.0250 | 0.6030 | 0.6030 | 0.3949 | 0.3949 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0244 | 25.0 | 375 | 0.4194 | 1.0436 | 1.0436 | 0.6320 | 0.6320 | 0.3728 | 0.3728 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0259 | 26.0 | 390 | 0.4025 | 1.0224 | 1.0224 | 0.6009 | 0.6009 | 0.3980 | 0.3980 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0265 | 27.0 | 405 | 0.4103 | 1.0323 | 1.0323 | 0.6180 | 0.6180 | 0.3863 | 0.3863 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0184 | 28.0 | 420 | 0.4059 | 1.0268 | 1.0268 | 0.6046 | 0.6046 | 0.3929 | 0.3929 | 0.8261 | 0.0 | 0.5 | 0.7586 | nan |
| 0.0257 | 29.0 | 435 | 0.4088 | 1.0304 | 1.0304 | 0.6122 | 0.6122 | 0.3885 | 0.3885 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
| 0.0262 | 30.0 | 450 | 0.4120 | 1.0345 | 1.0345 | 0.6181 | 0.6181 | 0.3837 | 0.3837 | 0.9130 | 0.0 | 0.5 | 0.8986 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
8eca5c6ac575ea357caa5140e52a25f3
|
Padomin/t5-base-TEDxJP-1front-1body-0rear
|
Padomin
|
t5
| 20 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-sa-4.0
| null |
['te_dx_jp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,953 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-1front-1body-0rear
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Wer: 0.1786
- Mer: 0.1722
- Wil: 0.2608
- Wip: 0.7392
- Hits: 55434
- Substitutions: 6554
- Deletions: 2599
- Insertions: 2380
- Cer: 0.1399
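Given the `text2text-generation` pipeline tag, inference would look roughly like this (a sketch; the Japanese input sentence is a hypothetical example and the decoding settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Padomin/t5-base-TEDxJP-1front-1body-0rear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; the model was trained on the te_dx_jp (Japanese TEDx) dataset.
inputs = tokenizer("これはテストの文章です", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```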
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6606 | 1.0 | 1457 | 0.5113 | 0.2142 | 0.2017 | 0.2939 | 0.7061 | 54751 | 6976 | 2860 | 4000 | 0.1909 |
| 0.5636 | 2.0 | 2914 | 0.4669 | 0.1913 | 0.1832 | 0.2728 | 0.7272 | 55086 | 6669 | 2832 | 2852 | 0.1700 |
| 0.5115 | 3.0 | 4371 | 0.4543 | 0.1815 | 0.1747 | 0.2633 | 0.7367 | 55384 | 6559 | 2644 | 2519 | 0.1504 |
| 0.4463 | 4.0 | 5828 | 0.4512 | 0.1796 | 0.1733 | 0.2617 | 0.7383 | 55344 | 6534 | 2709 | 2358 | 0.1422 |
| 0.4001 | 5.0 | 7285 | 0.4564 | 0.1779 | 0.1718 | 0.2600 | 0.7400 | 55394 | 6509 | 2684 | 2295 | 0.1395 |
| 0.3683 | 6.0 | 8742 | 0.4600 | 0.1790 | 0.1726 | 0.2611 | 0.7389 | 55436 | 6546 | 2605 | 2413 | 0.1405 |
| 0.391 | 7.0 | 10199 | 0.4651 | 0.1781 | 0.1718 | 0.2599 | 0.7401 | 55424 | 6505 | 2658 | 2338 | 0.1391 |
| 0.337 | 8.0 | 11656 | 0.4705 | 0.1775 | 0.1714 | 0.2595 | 0.7405 | 55439 | 6511 | 2637 | 2316 | 0.1382 |
| 0.3233 | 9.0 | 13113 | 0.4757 | 0.1790 | 0.1726 | 0.2612 | 0.7388 | 55414 | 6554 | 2619 | 2386 | 0.1401 |
| 0.3204 | 10.0 | 14570 | 0.4787 | 0.1786 | 0.1722 | 0.2608 | 0.7392 | 55434 | 6554 | 2599 | 2380 | 0.1399 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
8233b052843ef54bac4df28c90e9a5e3
|
ahmad1289/distilbert-base-uncased-finetuned-emotion
|
ahmad1289
|
distilbert
| 14 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,338 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1468
- Accuracy: 0.9345
- F1: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1695 | 1.0 | 250 | 0.1757 | 0.93 | 0.9298 |
| 0.107 | 2.0 | 500 | 0.1468 | 0.9345 | 0.9346 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 2.9.0
- Tokenizers 0.10.3
|
bfda29c265e82e92512e5830d06ccd29
|
gngpostalsrvc/BERiT
|
gngpostalsrvc
|
roberta
| 11 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,940 | false |
# BERiT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [Tanakh dataset](https://huggingface.co/datasets/gngpostalsrvc/Tanakh).
It achieves the following results on the evaluation set:
- Loss: 3.9931
## Model description
BERiT is a masked-language model for Biblical Hebrew, a low-resource ancient language preserved primarily in the text of the Hebrew Bible. Building on the work of [Sennrich and Zhang (2019)](https://arxiv.org/abs/1905.11901) and [Wodiak (2021)](https://arxiv.org/abs/2110.01938) on low-resource machine translation, it employs a modified version of the encoder block from Wodiak’s Seq2Seq model. Accordingly, BERiT is much smaller than models designed for modern languages like English. It features a single attention block with four attention heads, smaller embedding and feedforward dimensions (256 and 1024), a smaller max input length (128), and an aggressive dropout rate (.5) at both the attention and feedforward layers.
The BERiT tokenizer performs character-level byte-pair encoding using a 2000-word base vocabulary, which has been enriched with common grammatical morphemes.
## How to Use
```python
from transformers import RobertaModel, RobertaTokenizerFast

# Load the BERiT tokenizer and encoder from the Hugging Face Hub.
BERiT_tokenizer = RobertaTokenizerFast.from_pretrained('gngpostalsrvc/BERiT')
BERiT = RobertaModel.from_pretrained('gngpostalsrvc/BERiT')
```
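Since the repository is tagged `fill-mask`, masked-token prediction can also be run through the pipeline API. A minimal sketch, assuming the checkpoint ships a masked-LM head and using a hypothetical Biblical Hebrew input with RoBERTa's default `<mask>` token:
```python
from transformers import pipeline

# Fill-mask inference; returns the top candidate tokens for the masked position.
unmasker = pipeline("fill-mask", model="gngpostalsrvc/BERiT")
print(unmasker("בראשית ברא אלהים את <mask> ואת הארץ"))
```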
## Training procedure
BERiT was trained on the Tanakh dataset for 150 epochs using a Tesla T4 GPU. Further training did not yield significant improvements in performance.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
57e0949fe0d52adcf889b2f8ce43e984
|
kasumi222/segformer-b0-finetuned-busigt2
|
kasumi222
|
segformer
| 48 | 3 |
transformers
| 0 |
image-segmentation
| true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-segmentation', 'generated_from_trainer']
| true | true | true | 14,146 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-busigt2
This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on the kasumi222/busigt5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2904
- Mean Iou: 0.4458
- Mean Accuracy: 0.6980
- Overall Accuracy: 0.6969
- Per Category Iou: [0.0, 0.6551336334577343, 0.6821319425157643]
- Per Category Accuracy: [nan, 0.6913100552356098, 0.70464740289276]
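For semantic-segmentation inference, loading the checkpoint would look roughly like this (a sketch assuming the image-processor config is stored in the repository; the input image is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "kasumi222/segformer-b0-finetuned-busigt2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)

# Upsample to the input size and take the per-pixel argmax for the predicted mask.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False)
pred_mask = upsampled.argmax(dim=1)[0]
```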
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00013
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:---------------------------------------------:|:---------------------------------------------:|
| 0.1095 | 0.77 | 20 | 0.2086 | 0.4674 | 0.7410 | 0.7419 | [0.0, 0.6978460673452154, 0.704309291034096] | [nan, 0.7461995349612959, 0.7357650020760118] |
| 0.1156 | 1.54 | 40 | 0.1980 | 0.4186 | 0.6721 | 0.6783 | [0.0, 0.6446507442278364, 0.6112330250576428] | [nan, 0.7089917293749448, 0.635300900559587] |
| 0.1039 | 2.31 | 60 | 0.1987 | 0.3706 | 0.5810 | 0.5757 | [0.0, 0.5345322994102119, 0.5773860979625277] | [nan, 0.5495831330265778, 0.6123860258526792] |
| 0.0672 | 3.08 | 80 | 0.1960 | 0.4099 | 0.6407 | 0.6439 | [0.0, 0.6194380206711395, 0.6103561290824698] | [nan, 0.6596136450596995, 0.6218662960315686] |
| 0.0992 | 3.85 | 100 | 0.1969 | 0.4201 | 0.6684 | 0.6695 | [0.0, 0.6251984513525223, 0.6351366565306488] | [nan, 0.675036447653713, 0.661700391303438] |
| 0.085 | 4.62 | 120 | 0.2075 | 0.4383 | 0.6997 | 0.6964 | [0.0, 0.6407576836532538, 0.6742246105299582] | [nan, 0.6804532655724195, 0.718889834811138] |
| 0.0561 | 5.38 | 140 | 0.2037 | 0.4401 | 0.7033 | 0.7071 | [0.0, 0.6545188689920507, 0.665783897448558] | [nan, 0.7263735810923504, 0.6801427547189345] |
| 0.0841 | 6.15 | 160 | 0.2119 | 0.3651 | 0.5891 | 0.5934 | [0.0, 0.5494216923933923, 0.5458843877102458] | [nan, 0.6146571565924632, 0.5634664881039569] |
| 0.1034 | 6.92 | 180 | 0.2371 | 0.3684 | 0.6193 | 0.6367 | [0.0, 0.6047004430113216, 0.5003660220404046] | [nan, 0.7229919452156935, 0.5156554415186935] |
| 0.0691 | 7.69 | 200 | 0.2266 | 0.4285 | 0.6991 | 0.7117 | [0.0, 0.6730686627556878, 0.6124621276402561] | [nan, 0.7742042834577688, 0.6240342690621383] |
| 0.0601 | 8.46 | 220 | 0.2106 | 0.4198 | 0.6674 | 0.6704 | [0.0, 0.6308213023617786, 0.6287108585057931] | [nan, 0.6851880267250091, 0.6497046776895365] |
| 0.0647 | 9.23 | 240 | 0.2234 | 0.4229 | 0.6746 | 0.6777 | [0.0, 0.6338885508159525, 0.6349404984513296] | [nan, 0.6928998204597407, 0.6563077167064432] |
| 0.0626 | 10.0 | 260 | 0.2322 | 0.3991 | 0.6540 | 0.6655 | [0.0, 0.6267222060572648, 0.570544858752452] | [nan, 0.7227113522422911, 0.5852409330048426] |
| 0.0604 | 10.77 | 280 | 0.2021 | 0.4660 | 0.7283 | 0.7288 | [0.0, 0.6990308020264264, 0.6989818924111941] | [nan, 0.7310753774760368, 0.7255727204344536] |
| 0.0573 | 11.54 | 300 | 0.2227 | 0.4513 | 0.7014 | 0.6951 | [0.0, 0.6488805486358904, 0.7049138389320693] | [nan, 0.6638350976679388, 0.7389417956785915] |
| 0.0474 | 12.31 | 320 | 0.2108 | 0.4781 | 0.7468 | 0.7371 | [0.0, 0.6761855871787447, 0.7580093480444655] | [nan, 0.6890590324447889, 0.8044529075728725] |
| 0.0805 | 13.08 | 340 | 0.2257 | 0.4325 | 0.6902 | 0.6940 | [0.0, 0.6550347525850334, 0.6423545682885212] | [nan, 0.7128733309133007, 0.6675247882412931] |
| 0.0545 | 13.85 | 360 | 0.2155 | 0.4609 | 0.7230 | 0.7167 | [0.0, 0.6629649481906197, 0.7196967289093881] | [nan, 0.6853650161390015, 0.7606061073292577] |
| 0.0628 | 14.62 | 380 | 0.2397 | 0.4150 | 0.6561 | 0.6611 | [0.0, 0.6377593821077956, 0.6070948266377257] | [nan, 0.6861969841160831, 0.6259296622984148] |
| 0.0576 | 15.38 | 400 | 0.2177 | 0.4661 | 0.7274 | 0.7272 | [0.0, 0.6936915190759695, 0.7046022162863222] | [nan, 0.7263017649886684, 0.7284576609239519] |
| 0.0808 | 16.15 | 420 | 0.2263 | 0.4248 | 0.6707 | 0.6740 | [0.0, 0.6438773235874202, 0.6304024210524071] | [nan, 0.6904172594111472, 0.6510802419847774] |
| 0.0458 | 16.92 | 440 | 0.2342 | 0.4006 | 0.6449 | 0.6525 | [0.0, 0.6208902028936363, 0.5809796433249929] | [nan, 0.6898132977523129, 0.6000533044931062] |
| 0.0477 | 17.69 | 460 | 0.2683 | 0.3789 | 0.6170 | 0.6232 | [0.0, 0.5741692028709614, 0.5625631837395161] | [nan, 0.6539633266945951, 0.5800762342358019] |
| 0.0501 | 18.46 | 480 | 0.2364 | 0.4280 | 0.6700 | 0.6675 | [0.0, 0.6223049989658083, 0.6617065588280534] | [nan, 0.6552936905824757, 0.6846169180090992] |
| 0.039 | 19.23 | 500 | 0.2378 | 0.4500 | 0.7052 | 0.6986 | [0.0, 0.6391919313721981, 0.7106968345576296] | [nan, 0.665670921345669, 0.7446979100013106] |
| 0.041 | 20.0 | 520 | 0.2477 | 0.4142 | 0.6612 | 0.6659 | [0.0, 0.6273087938535062, 0.6153514032911991] | [nan, 0.6890233206118104, 0.6333526433632052] |
| 0.0331 | 20.77 | 540 | 0.2488 | 0.4353 | 0.6814 | 0.6778 | [0.0, 0.6267198588955959, 0.6791644212315564] | [nan, 0.6603973431966015, 0.7023153313193633] |
| 0.0316 | 21.54 | 560 | 0.2468 | 0.4500 | 0.7025 | 0.6974 | [0.0, 0.6405571933079939, 0.7093320446678179] | [nan, 0.6719456081313097, 0.7331179494069875] |
| 0.0333 | 22.31 | 580 | 0.2477 | 0.4384 | 0.6899 | 0.6906 | [0.0, 0.6520329743081146, 0.6630535380613215] | [nan, 0.6937796658392771, 0.6860558089232162] |
| 0.0269 | 23.08 | 600 | 0.2603 | 0.4477 | 0.7018 | 0.6996 | [0.0, 0.6514078130357787, 0.6916101875532822] | [nan, 0.6888588892050193, 0.7147725032516842] |
| 0.033 | 23.85 | 620 | 0.2424 | 0.4499 | 0.7061 | 0.6986 | [0.0, 0.6447352671115818, 0.7048670621273163] | [nan, 0.6616131152687708, 0.750523958937919] |
| 0.0555 | 24.62 | 640 | 0.2471 | 0.4342 | 0.6830 | 0.6823 | [0.0, 0.636756610371055, 0.6659104633164847] | [nan, 0.6791280033749645, 0.6868014110272018] |
| 0.0583 | 25.38 | 660 | 0.2517 | 0.4434 | 0.6922 | 0.6879 | [0.0, 0.6386719513699022, 0.6913843141331489] | [nan, 0.6666374954624388, 0.7178391636040445] |
| 0.154 | 26.15 | 680 | 0.2535 | 0.4235 | 0.6597 | 0.6487 | [0.0, 0.5750726006840868, 0.695285501846172] | [nan, 0.5943477194462704, 0.7250215035171054] |
| 0.0292 | 26.92 | 700 | 0.2768 | 0.3679 | 0.6035 | 0.6135 | [0.0, 0.5756677002657924, 0.5279750019379379] | [nan, 0.6631412677700708, 0.5438385402498483] |
| 0.0288 | 27.69 | 720 | 0.2455 | 0.4676 | 0.7235 | 0.7188 | [0.0, 0.6761224569996822, 0.7268002447671437] | [nan, 0.6954373227898398, 0.7515024928661187] |
| 0.0321 | 28.46 | 740 | 0.2618 | 0.4324 | 0.6745 | 0.6691 | [0.0, 0.6201514037000198, 0.6770266576179022] | [nan, 0.6425218048210974, 0.7064552401951121] |
| 0.0309 | 29.23 | 760 | 0.2742 | 0.3944 | 0.6348 | 0.6407 | [0.0, 0.6008533572398147, 0.5822751024176394] | [nan, 0.6701804232440864, 0.599451426280657] |
| 0.0244 | 30.0 | 780 | 0.2667 | 0.4386 | 0.6819 | 0.6750 | [0.0, 0.6224630782821559, 0.693390305711243] | [nan, 0.6412495217165226, 0.7224713681082742] |
| 0.0642 | 30.77 | 800 | 0.2501 | 0.4581 | 0.7121 | 0.7096 | [0.0, 0.6722145834845955, 0.7021141065136746] | [nan, 0.6976031865943273, 0.7265325317101161] |
| 0.0481 | 31.54 | 820 | 0.2685 | 0.4137 | 0.6689 | 0.6766 | [0.0, 0.6379976664903103, 0.6031984018650592] | [nan, 0.7145859291453688, 0.6231961550279683] |
| 0.0311 | 32.31 | 840 | 0.2570 | 0.4284 | 0.6804 | 0.6832 | [0.0, 0.6426329055663264, 0.6425854743219936] | [nan, 0.6969752862342657, 0.6639063603053335] |
| 0.0389 | 33.08 | 860 | 0.2795 | 0.3918 | 0.6456 | 0.6590 | [0.0, 0.6244554318979076, 0.5508200429573112] | [nan, 0.7254125011037311, 0.5658618862962298] |
| 0.0282 | 33.85 | 880 | 0.2568 | 0.4242 | 0.6759 | 0.6775 | [0.0, 0.6282787291971401, 0.6442735430594793] | [nan, 0.6857107537747603, 0.6660974613184492] |
| 0.0245 | 34.62 | 900 | 0.2635 | 0.4503 | 0.7043 | 0.7037 | [0.0, 0.6658605581388065, 0.6850412042515538] | [nan, 0.7008356961354695, 0.7076892832638209] |
| 0.0315 | 35.38 | 920 | 0.2769 | 0.4443 | 0.7038 | 0.7055 | [0.0, 0.6610872730365329, 0.6718978137221756] | [nan, 0.7138198907060935, 0.6938235070611933] |
| 0.0283 | 36.15 | 940 | 0.2697 | 0.4392 | 0.6920 | 0.6907 | [0.0, 0.6405508279799802, 0.6769668218170816] | [nan, 0.6841213809883544, 0.6998318265269149] |
| 0.0257 | 36.92 | 960 | 0.2712 | 0.4562 | 0.7099 | 0.7082 | [0.0, 0.6720494469697227, 0.6964887349332429] | [nan, 0.6999154296702542, 0.7197879714666775] |
| 0.0188 | 37.69 | 980 | 0.2857 | 0.4300 | 0.6763 | 0.6771 | [0.0, 0.6397832221652129, 0.6501046733477022] | [nan, 0.6811686795451647, 0.6713607293464362] |
| 0.0259 | 38.46 | 1000 | 0.2812 | 0.4368 | 0.6851 | 0.6838 | [0.0, 0.6396217765000503, 0.6707000380577134] | [nan, 0.6772780519391329, 0.6929027930893589] |
| 0.0169 | 39.23 | 1020 | 0.2795 | 0.4542 | 0.7084 | 0.7054 | [0.0, 0.6598929743362643, 0.7028156867427239] | [nan, 0.6906225043413423, 0.7260947520404938] |
| 0.0296 | 40.0 | 1040 | 0.2834 | 0.4470 | 0.7015 | 0.7013 | [0.0, 0.6608002641121026, 0.6801095152287282] | [nan, 0.7006602764723773, 0.7022773353480376] |
| 0.0183 | 40.77 | 1060 | 0.2874 | 0.4386 | 0.6909 | 0.6903 | [0.0, 0.6432231900832152, 0.6726091072738183] | [nan, 0.6874296310104291, 0.694422081276136] |
| 0.0199 | 41.54 | 1080 | 0.2741 | 0.4594 | 0.7175 | 0.7154 | [0.0, 0.6721657359810768, 0.7061664449453671] | [nan, 0.7051238631569653, 0.7298866398455491] |
| 0.0162 | 42.31 | 1100 | 0.2883 | 0.4414 | 0.6921 | 0.6913 | [0.0, 0.6492915338226911, 0.6750215527697642] | [nan, 0.6870752597447193, 0.6971930338516571] |
| 0.0179 | 43.08 | 1120 | 0.2927 | 0.4425 | 0.6936 | 0.6927 | [0.0, 0.651082790586508, 0.6764744769464034] | [nan, 0.6884633119781804, 0.6987260886947118] |
| 0.0228 | 43.85 | 1140 | 0.2954 | 0.4273 | 0.6807 | 0.6841 | [0.0, 0.6418083531582984, 0.6399672125377378] | [nan, 0.7006630235364526, 0.6608033559804007] |
| 0.0164 | 44.62 | 1160 | 0.2954 | 0.4264 | 0.6740 | 0.6756 | [0.0, 0.6356634502412776, 0.6436554266840772] | [nan, 0.6834636553611899, 0.6644801545389767] |
| 0.0158 | 45.38 | 1180 | 0.2906 | 0.4433 | 0.6956 | 0.6951 | [0.0, 0.6536928350497138, 0.6760836624911459] | [nan, 0.6927067410990219, 0.6985223421818058] |
| 0.0198 | 46.15 | 1200 | 0.2881 | 0.4441 | 0.6969 | 0.6961 | [0.0, 0.6527988151987781, 0.6794425179962712] | [nan, 0.6919179412716945, 0.7019810769049473] |
| 0.018 | 46.92 | 1220 | 0.2961 | 0.4350 | 0.6844 | 0.6839 | [0.0, 0.6395287774950378, 0.6655290939553297] | [nan, 0.6815206961845243, 0.6872821426644097] |
| 0.0179 | 47.69 | 1240 | 0.2898 | 0.4459 | 0.6987 | 0.6982 | [0.0, 0.6581945977423002, 0.6796217960953337] | [nan, 0.6955130632707722, 0.701934270273604] |
| 0.0213 | 48.46 | 1260 | 0.2902 | 0.4469 | 0.7004 | 0.6998 | [0.0, 0.6595482974648909, 0.6811920247361126] | [nan, 0.6971510983350829, 0.7036303223269834] |
| 0.0227 | 49.23 | 1280 | 0.2888 | 0.4452 | 0.6967 | 0.6953 | [0.0, 0.6532891096762087, 0.6823149709479772] | [nan, 0.6885578894699147, 0.7047801134592744] |
| 0.0266 | 50.0 | 1300 | 0.2904 | 0.4458 | 0.6980 | 0.6969 | [0.0, 0.6551336334577343, 0.6821319425157643] | [nan, 0.6913100552356098, 0.70464740289276] |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
243b9ba3691fc10ff27368651a5b6e9c
|
ananthrgv/ananth-docai2
|
ananthrgv
|
lilt
| 15 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['funsd-layoutlmv3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 7,738 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ananth-docai2
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4203
- Answer: {'precision': 0.8505747126436781, 'recall': 0.9057527539779682, 'f1': 0.8772969768820391, 'number': 817}
- Header: {'precision': 0.6476190476190476, 'recall': 0.5714285714285714, 'f1': 0.6071428571428571, 'number': 119}
- Question: {'precision': 0.9104477611940298, 'recall': 0.9062209842154132, 'f1': 0.9083294555607259, 'number': 1077}
- Overall Precision: 0.8715
- Overall Recall: 0.8862
- Overall F1: 0.8788
- Overall Accuracy: 0.8269
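The per-field dictionaries above (precision/recall/f1/number for Answer, Header and Question, plus the overall scores) follow the output format of the seqeval metric; a toy sketch via the `evaluate` library (an assumption about the tooling, shown only to make the reported structure concrete):
```python
import evaluate

seqeval = evaluate.load("seqeval")

# Toy IOB2 label sequences; the real evaluation uses FUNSD-style QUESTION/ANSWER/HEADER tags.
references = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER"]]
predictions = [["B-QUESTION", "I-QUESTION", "O", "B-HEADER"]]

results = seqeval.compute(predictions=predictions, references=references)
# Per-entity dicts ({'precision', 'recall', 'f1', 'number'}) plus
# overall_precision / overall_recall / overall_f1 / overall_accuracy.
print(results)
```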
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4218 | 10.53 | 200 | 1.0024 | {'precision': 0.8727272727272727, 'recall': 0.8812729498164015, 'f1': 0.8769792935444579, 'number': 817} | {'precision': 0.4036144578313253, 'recall': 0.5630252100840336, 'f1': 0.47017543859649125, 'number': 119} | {'precision': 0.8674812030075187, 'recall': 0.8570102135561746, 'f1': 0.8622139187295657, 'number': 1077} | 0.8321 | 0.8495 | 0.8407 | 0.7973 |
| 0.0532 | 21.05 | 400 | 1.1791 | {'precision': 0.8563218390804598, 'recall': 0.9118727050183598, 'f1': 0.8832246591582691, 'number': 817} | {'precision': 0.5486725663716814, 'recall': 0.5210084033613446, 'f1': 0.5344827586206897, 'number': 119} | {'precision': 0.9044943820224719, 'recall': 0.8969359331476323, 'f1': 0.9006993006993008, 'number': 1077} | 0.8645 | 0.8808 | 0.8725 | 0.8103 |
| 0.0117 | 31.58 | 600 | 1.5177 | {'precision': 0.8064516129032258, 'recall': 0.9179926560587516, 'f1': 0.8586147681740126, 'number': 817} | {'precision': 0.6046511627906976, 'recall': 0.4369747899159664, 'f1': 0.5073170731707317, 'number': 119} | {'precision': 0.9019607843137255, 'recall': 0.8542246982358404, 'f1': 0.8774439675727229, 'number': 1077} | 0.8458 | 0.8554 | 0.8506 | 0.7952 |
| 0.0067 | 42.11 | 800 | 1.4884 | {'precision': 0.8443935926773455, 'recall': 0.9033047735618115, 'f1': 0.872856298048492, 'number': 817} | {'precision': 0.515625, 'recall': 0.5546218487394958, 'f1': 0.5344129554655871, 'number': 119} | {'precision': 0.8784530386740331, 'recall': 0.8857938718662952, 'f1': 0.8821081830790567, 'number': 1077} | 0.8420 | 0.8733 | 0.8574 | 0.7963 |
| 0.0034 | 52.63 | 1000 | 1.4203 | {'precision': 0.8505747126436781, 'recall': 0.9057527539779682, 'f1': 0.8772969768820391, 'number': 817} | {'precision': 0.6476190476190476, 'recall': 0.5714285714285714, 'f1': 0.6071428571428571, 'number': 119} | {'precision': 0.9104477611940298, 'recall': 0.9062209842154132, 'f1': 0.9083294555607259, 'number': 1077} | 0.8715 | 0.8862 | 0.8788 | 0.8269 |
| 0.0023 | 63.16 | 1200 | 1.5225 | {'precision': 0.834096109839817, 'recall': 0.8922888616891065, 'f1': 0.8622117090479007, 'number': 817} | {'precision': 0.5689655172413793, 'recall': 0.5546218487394958, 'f1': 0.5617021276595745, 'number': 119} | {'precision': 0.8962001853568119, 'recall': 0.8978644382544104, 'f1': 0.8970315398886828, 'number': 1077} | 0.8516 | 0.8753 | 0.8633 | 0.8096 |
| 0.0013 | 73.68 | 1400 | 1.6801 | {'precision': 0.848, 'recall': 0.9082007343941249, 'f1': 0.8770685579196217, 'number': 817} | {'precision': 0.6741573033707865, 'recall': 0.5042016806722689, 'f1': 0.576923076923077, 'number': 119} | {'precision': 0.8977695167286245, 'recall': 0.8969359331476323, 'f1': 0.8973525313516025, 'number': 1077} | 0.8667 | 0.8783 | 0.8724 | 0.7977 |
| 0.0014 | 84.21 | 1600 | 1.6236 | {'precision': 0.8876543209876543, 'recall': 0.8800489596083231, 'f1': 0.8838352796558081, 'number': 817} | {'precision': 0.6237623762376238, 'recall': 0.5294117647058824, 'f1': 0.5727272727272728, 'number': 119} | {'precision': 0.8656330749354005, 'recall': 0.9331476323119777, 'f1': 0.8981233243967828, 'number': 1077} | 0.8625 | 0.8877 | 0.8749 | 0.8072 |
| 0.0006 | 94.74 | 1800 | 1.7231 | {'precision': 0.8619883040935673, 'recall': 0.9020807833537332, 'f1': 0.881578947368421, 'number': 817} | {'precision': 0.6883116883116883, 'recall': 0.44537815126050423, 'f1': 0.5408163265306123, 'number': 119} | {'precision': 0.8748890860692103, 'recall': 0.9155060352831941, 'f1': 0.8947368421052633, 'number': 1077} | 0.8626 | 0.8823 | 0.8723 | 0.8019 |
| 0.0005 | 105.26 | 2000 | 1.8217 | {'precision': 0.8342665173572228, 'recall': 0.9118727050183598, 'f1': 0.871345029239766, 'number': 817} | {'precision': 0.6, 'recall': 0.5042016806722689, 'f1': 0.547945205479452, 'number': 119} | {'precision': 0.9049858889934148, 'recall': 0.89322191272052, 'f1': 0.8990654205607476, 'number': 1077} | 0.8594 | 0.8778 | 0.8685 | 0.7964 |
| 0.0004 | 115.79 | 2200 | 1.7688 | {'precision': 0.8561484918793504, 'recall': 0.9033047735618115, 'f1': 0.8790946992257296, 'number': 817} | {'precision': 0.6555555555555556, 'recall': 0.4957983193277311, 'f1': 0.5645933014354068, 'number': 119} | {'precision': 0.8827272727272727, 'recall': 0.9015784586815228, 'f1': 0.8920532843362425, 'number': 1077} | 0.8616 | 0.8783 | 0.8699 | 0.7956 |
| 0.0002 | 126.32 | 2400 | 1.7726 | {'precision': 0.8458904109589042, 'recall': 0.9069767441860465, 'f1': 0.8753691671588896, 'number': 817} | {'precision': 0.6741573033707865, 'recall': 0.5042016806722689, 'f1': 0.576923076923077, 'number': 119} | {'precision': 0.8878676470588235, 'recall': 0.8969359331476323, 'f1': 0.892378752886836, 'number': 1077} | 0.8607 | 0.8778 | 0.8692 | 0.7961 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
69d3a41105265d18cf762c2ab504a858
|
Pablo94/bert-base-uncased-finetuned-detests-02-11-2022
|
Pablo94
|
bert
| 20 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,052 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-detests-02-11-2022
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0794
- F1: 0.5455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.014 | 0.64 | 25 | 0.6229 | 0.5536 |
| 0.0698 | 1.28 | 50 | 0.6996 | 0.5907 |
| 0.0173 | 1.92 | 75 | 0.7531 | 0.5882 |
| 0.0032 | 2.56 | 100 | 0.8054 | 0.4928 |
| 0.0087 | 3.21 | 125 | 0.9557 | 0.5735 |
| 0.0028 | 3.85 | 150 | 0.8859 | 0.5352 |
| 0.013 | 4.49 | 175 | 0.9674 | 0.5536 |
| 0.0031 | 5.13 | 200 | 0.9073 | 0.5691 |
| 0.0032 | 5.77 | 225 | 0.9253 | 0.5439 |
| 0.0483 | 6.41 | 250 | 0.9705 | 0.5837 |
| 0.0323 | 7.05 | 275 | 1.0368 | 0.5824 |
| 0.0019 | 7.69 | 300 | 1.0221 | 0.5520 |
| 0.0256 | 8.33 | 325 | 1.0419 | 0.5523 |
| 0.0319 | 8.97 | 350 | 1.0764 | 0.5425 |
| 0.0125 | 9.62 | 375 | 1.0794 | 0.5455 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
5e62192f7ce00f301eab4cce5554c518
|
laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
|
laion
| null | 11 | 112 |
open_clip
| 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 12,528 | false |
# Model Card for CLIP-convnext_base_w-320.laion_aesthetic-s13B-b82k
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution
Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increased augmentation + regularization for the image tower (a greater scale range for RRC, random erasing, and stochastic depth)
The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
All models in this series were trained for 13B samples and have an ImageNet zero-shot top-1 of >= 70.8%. Compared to a ViT-B/16 at 34B samples seen with a zero-shot of 70.2% (68.1% at 13B samples seen), this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments are needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
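A zero-shot classification sketch with OpenCLIP (assuming its `hf-hub:` loading path; the image file and the prompt set are placeholders):
```python
import torch
from PIL import Image
import open_clip

repo = "hf-hub:laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K"
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image
text = tokenizer(["a diagram", "a dog", "a cat"])           # placeholder class prompts

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```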
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
In addition to the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with one of (see table in intro):
* LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
* LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning also holds there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide our dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All models were trained with a global batch size of 81920 for 64 checkpoint intervals of 203.7M samples for a total of ~13B samples seen over training.
For 256x256 models, a slurm script w/ srun below was used on 20 8-GPU (A100 40GB) nodes (Stability), switching to 40 4-GPU nodes for time on JUWELS.
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_256" \
--resume 'latest' \
--train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--warmup 10000 \
--batch-size=512 \
--epochs=64 \
--dataset-resampled \
--clip-grad-norm 5.0 \
--lr 1e-3 \
--workers=6 \
--model "convnext_base_w" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
For the 320x320 models, the same setup was used but with 32 8-GPU nodes (local batch size 320), or 64 4-GPU nodes on JUWELS.
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
Testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and with COCO and Flickr for retrieval.
## Results
The models achieve between 70.8 and 71.7 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
As part of exploring increased augmentation + regularization, early evaluations suggest that `augreg` trained models evaluate well over a wider range of resolutions. This is especially true for the 320x320 LAION-A model, where the augreg run was lower than the non-augreg when evaluated at the train resolution of 320x320 (71.3 vs 71.7), but improves to 72.2 when evaluated at 384x384 (the non-augreg drops to 71.0 at 384x384).
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) and the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC).
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
99d6563bed855614ffb47df7b51aa8dd
|
soypablo/emoji-model-finetuned-lora-3000
|
soypablo
| null | 1,134 | 0 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 428 | false |
# LoRA text2image fine-tuning - https://huggingface.co/soypablo/emoji-model-finetuned-lora-3000
These are LoRA adaptation weights for https://huggingface.co/soypablo/emoji-model-finetuned-lora-3000. The weights were fine-tuned on the soypablo/Emoji_Dataset-Openmoji dataset. A loading sketch and some example images follow.
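Loading the LoRA weights on top of a Stable Diffusion pipeline might look like the sketch below (the base checkpoint is an assumption, since the card does not name it, and the prompt is a placeholder):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; the card does not state which checkpoint the LoRA was trained from.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                               torch_dtype=torch.float16)
pipe.unet.load_attn_procs("soypablo/emoji-model-finetuned-lora-3000")
pipe.to("cuda")

image = pipe("an emoji of a smiling cactus", num_inference_steps=30).images[0]  # placeholder prompt
image.save("emoji.png")
```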




|
de794708b8fdacc4b432f7dd25055df4
|
Littlemilk/autobiography-generator
|
Littlemilk
|
gpt2
| 8 | 4 |
transformers
| 2 |
text-generation
| true | false | false |
gpl-3.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clm-total
This model is a fine-tuned version of [ckiplab/gpt2-base-chinese](https://huggingface.co/ckiplab/gpt2-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
58bdbe7059fa05002a93c91288c0b192
|
huynhdoo/distilcamembert-base-finetuned-CLS
|
huynhdoo
|
camembert
| 10 | 0 |
transformers
| 0 |
text-classification
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,497 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# huynhdoo/distilcamembert-base-finetuned-CLS
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1270
- Validation Loss: 0.2366
- Train F1: 0.9220
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 669, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train F1 | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.3787 | 0.2347 | 0.915 | 0 |
| 0.1758 | 0.2338 | 0.9242 | 1 |
| 0.1270 | 0.2366 | 0.9220 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ea5b8b1bda784577ca1b713aa7e20242
|
emmyapi/distilbart-cnn-12-6-eval-test-2
|
emmyapi
|
bart
| 13 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,865 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-eval-test-2
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7250
- Rouge1: 31.3552
- Rouge2: 4.2825
- Rougel: 15.1982
- Rougelsum: 27.9577
- Gen Len: 139.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.4419 | 1.0 | 80 | 4.2847 | 30.8184 | 4.024 | 15.5589 | 27.647 | 133.6 |
| 3.5861 | 2.0 | 160 | 4.2721 | 30.7823 | 3.7736 | 14.992 | 28.0105 | 137.1 |
| 2.9885 | 3.0 | 240 | 4.4295 | 30.4747 | 3.8971 | 15.6055 | 27.9916 | 135.5 |
| 2.5254 | 4.0 | 320 | 4.5978 | 31.0505 | 4.1062 | 14.7292 | 27.9009 | 134.2 |
| 2.2404 | 5.0 | 400 | 4.7250 | 31.3552 | 4.2825 | 15.1982 | 27.9577 | 139.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
51a527ff24ca992f431a7cbc13f22764
|
mmibrahim2006/bert-finetuned-ner
|
mmibrahim2006
|
bert
| 12 | 17 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9352
- Recall: 0.9493
- F1: 0.9422
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0869 | 1.0 | 1756 | 0.0747 | 0.9128 | 0.9298 | 0.9212 | 0.9815 |
| 0.0335 | 2.0 | 3512 | 0.0637 | 0.9258 | 0.9470 | 0.9363 | 0.9854 |
| 0.018 | 3.0 | 5268 | 0.0619 | 0.9352 | 0.9493 | 0.9422 | 0.9864 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
9a92d20029628efd5c0cfa733628c009
|
eibakke/bert-finetuned-uia
|
eibakke
|
bert
| 12 | 8 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,026 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-uia
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Trained on 100,000 questions from the Natural Questions dataset where the short answer is present.
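A minimal inference sketch with the standard `transformers` question-answering pipeline (the question and context below are illustrative, not taken from the training data):
```python
from transformers import pipeline

# Load the fine-tuned extractive QA model.
qa = pipeline("question-answering", model="eibakke/bert-finetuned-uia")

result = qa(
    question="Where were the first modern Olympic Games held?",
    context="The first modern Olympic Games were held in Athens, Greece, in 1896.",
)
print(result["answer"], result["score"])
```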
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
a844a14a0869d2159be91cce677d9ab3
|
Helsinki-NLP/opus-mt-da-fr
|
Helsinki-NLP
|
marian
| 10 | 206 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-da-fr
* source languages: da
* target languages: fr
* OPUS readme: [da-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.da.fr | 62.2 | 0.751 |
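A minimal usage sketch with the standard `transformers` Marian API (the Danish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-da-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Danish sentence into French.
batch = tokenizer(["Jeg elsker maskinoversættelse."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```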
|
e495ceb9dad932f343b1a047c4cf7181
|
sd-concepts-library/rj-palmer
|
sd-concepts-library
| null | 38 | 0 | null | 5 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,138 | false |
### RJ Palmer on Stable Diffusion
This is the `<rj-palmer>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:

































|
3d9b1782e600ffa66f237d321fb1a105
|
CarperAI/diff-codegen-2b-v2
|
CarperAI
|
codegen
| 13 | 91 |
transformers
| 2 |
text-generation
| true | false | false |
mit
|
['en', 'code']
| null | null | 13 | 0 | 12 | 1 | 0 | 0 | 0 |
['Diff Model', 'pytorch', 'causal-lm', 'code-generation', 'The Pile']
| false | true | true | 5,433 | false |
# Diff-Codegen-2B v2 Model Card
## Model Description
diff-codegen-2b-v2 is a diff model for code generation, released by [CarperAI](http://carper.ai/). A diff model is an autoregressive language model trained on edits to a piece of text, formatted in [Unified Diff Format](https://en.wikipedia.org/wiki/Diff#Unified_format). These diff models can suggest, given a section of text and a description of the desired change, an intelligent change to the text that fits the description, marking the lines added, changed, and deleted in diff format.
In comparison to few-shot prompting of normal code generation models, diff models are specialized for suggesting intelligent changes to existing code, particularly longer pieces of code and where a change is required to follow some natural language text description (provided in the form of a commit message).
This model is a fine-tune of [codegen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono) by Salesforce, trained on a large dataset of commits scraped from GitHub.
diff-codegen-2b-v2 is an experimental research artifact and should be treated as such. We are releasing these results and this model in the hopes that it may be useful to the greater research community, especially those interested in LMs for code.
An example Colab notebook with a brief example of prompting the model is [here](https://colab.research.google.com/drive/1ySm6HYvALerDiGmk6g3pDz68V7fAtrQH#scrollTo=thvzNpmahNNx).
## Training Data
This model is a fine-tune of [codegen-2B-mono](https://huggingface.co/Salesforce/codegen-2B-mono) by Salesforce. This language model was first pre-trained on The Pile, an 800GB dataset composed of varied web corpora. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027) respectively. The model was then fine-tuned on a large corpus of code data in multiple languages, before finally being fine-tuned on a Python code dataset. The Codegen paper with full details of these datasets can be found [here](https://arxiv.org/abs/2203.13474).
Our dataset for this fine-tune consists of commits from GitHub, obtained using the [Google BigQuery Public Dataset](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code), a public up to date snapshot of a huge number of open-source GitHub repositories. We took this dataset and filtered using [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos) on the number of stars in the repository to exclude repos with less than 100 stars, and further restricted the query to only repositories with open-source non-copyleft licenses (e.g. MIT, Apache, etc) and commits with more than 10 characters in the commit message. We also restricted ourselves to a list of 22 popular programming, scripting, and markup languages, including Python, HTML, Bash scripts, SQL, C++, etc. This resulted in a dataset of 19 million commits after filtering.
Our diff model was trained on a dataset of commits from BigQuery, a large-scale dataset of many programming languages from GitHub repositories. We filtered the dataset by the number of stars in the repository (>100 stars), license (only open-source non-copyleft licensed code included), and length of file (files greater than 2048 tokens in length were excluded).
The model was trained using the Huggingface Codegen tokenizer.
## Training Details
The model was trained on 1.08 billion tokens for 1 epoch on 64 A100 GPUs, provided by [Stability AI](https://stability.ai/).
Each file was formatted as follows for input to the language model:
```
<NME> {FILE_NAME}
<BEF> {INPUT_FILE}
<MSG> {COMMIT_MESSAGE}
<DFF> {FILE_DIFF}
```
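As a rough sketch of how a prompt in this format might be assembled and passed to the model with `transformers` (the file contents, commit message, and generation settings below are illustrative assumptions, not taken from the official example notebook):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "CarperAI/diff-codegen-2b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build a prompt in the <NME>/<BEF>/<MSG>/<DFF> format described above.
file_name = "greet.py"
file_contents = 'def greet(name):\n    print("Hello, " + name)\n'
commit_message = "Add an exclamation mark to the greeting"
prompt = f"<NME> {file_name}\n<BEF> {file_contents}\n<MSG> {commit_message}\n<DFF>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```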
## Intended Uses and Limitations
Due to the model’s small size and restriction to code, one should not expect it to generalize to domains beyond code or to perform (successful) reasoning over large chunks of code. This model is intended for prototyping code generation systems and for solely experimental purposes. It is provided without warranty and should not be used in commercial settings, even though the license permits such use.
## Limitations and Biases
Due to the short context length restriction, and because all repositories with under 100 stars were excluded, we expect our diff model to underperform on underrepresented languages, for instance Lean or Coq.
The output of this model should not be trusted as correct and secure code. This model should not be used in any mission critical setting where security is of importance. When running the output of this model, it should be done as much as possible in a sandbox, such as [gVisor](https://gvisor.dev), since it is very possible for the model to produce code which may delete files, send HTTP requests, or otherwise contain critical security vulnerabilities.
As with other language models, diff-codegen is prone to hallucination and biased, stereotyped, or toxic output. There are no guarantees of truthful output when generating from the model.
## Evaluation Results
See [our blog post](https://carper.ai/diff-model) for full evaluation results.
## Licensing
This model is licensed as MIT.
## Acknowledgements
We’d like to thank Honglu Fan, Harry Saini, Herbie Bradley, Reshinth Adithyan, and Joel Lehman for their efforts! Thanks to Nitarshan Rajkumar for feedback on this model card.
|
bec77fd195c739a696d208eb1b98e9b1
|
jonatasgrosman/exp_w2v2t_ja_unispeech_s253
|
jonatasgrosman
|
unispeech
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ja']
| false | true | true | 469 | false |
# exp_w2v2t_ja_unispeech_s253
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
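A minimal transcription sketch with that tool (the audio path is a placeholder; remember the 16kHz sampling requirement noted above):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ja_unispeech_s253")

# Placeholder path; audio should be sampled at 16kHz.
audio_paths = ["/path/to/recording.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```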
|
51704ff998195782d9630a211ef1890f
|
prompthero/openjourney-lora
|
prompthero
| null | 3 | 0 |
diffusers
| 6 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 2,060 | false |
# Openjourney LoRA - by [PromptHero](https://prompthero.com/?utm_source=huggingface&utm_medium=referral)
These are LoRA adaptation weights for [Openjourney](https://huggingface.co/prompthero/openjourney), trained by [@JHawkk](https://prompthero.com/JHawkk).
# Openjourney Links
- [Openjourney Dreambooth](https://huggingface.co/prompthero/openjourney)
- [Openjourney Fine tuned model](https://huggingface.co/prompthero/openjourney-v2)
# Want to learn AI art generation?:
- [Crash course in AI art generation](https://prompthero.com/academy/prompt-engineering-course?utm_source=huggingface&utm_medium=referral)
- [Learn to fine-tune Stable Diffusion for photorealism](https://prompthero.com/academy/dreambooth-stable-diffusion-train-fine-tune-course?utm_source=huggingface&utm_medium=referral)
# How to use LoRAs in auto1111:
- Update the webui (use git pull, as shown here, or re-download it)
- Copy the file to stable-diffusion-webui/models/lora
- Select your LoRA as shown in this video
- Make sure to change the weight (by default it is :1, which is usually too high)
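For `diffusers` users, a minimal sketch (it assumes a diffusers version with the LoRA attention-processor loader; the prompt and the scale value are illustrative, with the scale playing the same role as the weight mentioned above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Openjourney base model and apply the LoRA weights on top of it.
pipe = StableDiffusionPipeline.from_pretrained(
    "prompthero/openjourney", torch_dtype=torch.float16
).to("cuda")  # a GPU is assumed for float16 inference
pipe.unet.load_attn_procs("prompthero/openjourney-lora")

# Lower the scale if the LoRA effect looks too strong, as advised above.
image = pipe(
    "mdjrny-v4 style portrait of a knight, intricate armor",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
image.save("knight.png")
```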
# Examples:




|
e00cd20a0b7d69a4ad5d65b2d3df8fcb
|
JapaNLP/t5-efficient-xl-nl6-japanese
|
JapaNLP
|
t5
| 7 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
afl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 420 | false |
# Overview
`t5-efficient-xl-nl6-ja` is a Japanese version of [`google/t5-efficient-xl-nl6`](https://huggingface.co/google/t5-efficient-xl-nl6).
# Results
- Under construction
- If you obtain experimental results for this model on downstream tasks, please feel free to open a pull request.
## Question Answering
## Others
# Acknowledgement
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
|
7d99dc7e32d3928732be5032c60efa61
|
jonatasgrosman/exp_w2v2t_en_wavlm_s767
|
jonatasgrosman
|
wavlm
| 10 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 445 | false |
# exp_w2v2t_en_wavlm_s767
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
f97650a590040f3959ae0767547d0a2b
|
jonatasgrosman/exp_w2v2t_it_wav2vec2_s692
|
jonatasgrosman
|
wav2vec2
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['it']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'it']
| false | true | true | 456 | false |
# exp_w2v2t_it_wav2vec2_s692
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
35bf12e835d5c38605564ed338897ff7
|
Helsinki-NLP/opus-mt-bg-es
|
Helsinki-NLP
|
marian
| 11 | 553 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['bg', 'es']
| null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,002 | false |
### bul-spa
* source group: Bulgarian
* target group: Spanish
* OPUS readme: [bul-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md)
* model: transformer
* source language(s): bul
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.spa | 49.1 | 0.661 |
### System Info:
- hf_name: bul-spa
- source_languages: bul
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'es']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: spa
- short_pair: bg-es
- chrF2_score: 0.6609999999999999
- bleu: 49.1
- brevity_penalty: 0.992
- ref_len: 1783.0
- src_name: Bulgarian
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: es
- prefer_old: False
- long_pair: bul-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
0c7cdae22fffc5814b9073d06eafabab
|
jonatasgrosman/exp_w2v2t_ar_vp-100k_s874
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ar']
| false | true | true | 475 | false |
# exp_w2v2t_ar_vp-100k_s874
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
dae80b70f6b7d859d6ff1b3d78007cd9
|
andi611/bert-base-uncased-ner-conll2003
|
andi611
|
bert
| 14 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,433 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1258
- Precision: 0.0269
- Recall: 0.1379
- F1: 0.0451
- Accuracy: 0.1988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 2.1296 | 0.0270 | 0.1389 | 0.0452 | 0.1942 |
| No log | 2.0 | 8 | 2.1258 | 0.0269 | 0.1379 | 0.0451 | 0.1988 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.8.1+cu111
- Datasets 1.8.0
- Tokenizers 0.10.3
|
f8595e52d8dbfc9f56cb12da1be14515
|
google/t5-xxl-ssm
|
google
|
t5
| 9 | 12 |
transformers
| 4 |
text2text-generation
| true | true | false |
apache-2.0
|
['en']
|
['c4', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,600 | false |
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is usable for closed book question answering.
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack
Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

|
e138f372fc689d7c84b6e46df9ecfd58
|
sd-concepts-library/aj-fosik
|
sd-concepts-library
| null | 10 | 0 | null | 4 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,112 | false |
### AJ Fosik on Stable Diffusion
This is the `<AJ-Fosik>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
7e0f8c5ec56485e337729654dbc443e5
|
PaddlePaddle/uie-mini
|
PaddlePaddle
|
ernie
| 7 | 0 |
paddlenlp
| 0 | null | false | false | false |
apache-2.0
|
['en', 'zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,351 | false |
[](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/uie-mini
Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. The unified text-to-structure generation framework, namely UIE, can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources. Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism - structural schema instructor, and captures the common IE abilities via a large-scale pre-trained text-to-structure model. Experiments show that UIE achieved the state-of-the-art performance on 4 IE tasks, 13 datasets, and on all supervised, low-resource, and few-shot settings for a wide range of entity, relation, event and sentiment extraction tasks and their unification. These results verified the effectiveness, universality, and transferability of UIE.
UIE Paper: https://arxiv.org/abs/2203.12277
PaddleNLP released the UIE model series for information extraction over plain text and multi-modal documents; these models use the ERNIE 3.0 models as their pre-trained language models and were fine-tuned on a large amount of information extraction data.

## Available Models
| Model Name | Usage Scenarios | Supporting Tasks |
| :----------------------------------------------------------: | :--------------------------------------------------------- | :--------------------------------------------------- |
| `uie-base`<br />`uie-medium`<br />`uie-mini`<br />`uie-micro`<br />`uie-nano` | An **extractive** model for **plain text** scenarios, supports **Chinese** | Supports entity, relation, event, opinion extraction |
| `uie-base-en` | An **extractive** model for **plain text** scenarios, supports **English** | Supports entity, relation, event, opinion extraction |
| `uie-m-base`<br />`uie-m-large` | An **extractive** model for **plain text** scenarios, supporting **Chinese and English** | Supports entity, relation, event, opinion extraction |
| <b>`uie-x-base`</b> | An **extractive** model for **plain text** and **document** scenarios, supports **Chinese and English** | Supports entity, relation, event, opinion extraction on both plain text and documents/pictures/tables |
## Performance on Text Dataset
We conducted experiments on in-house test sets from three domains (finance, healthcare, and internet):
<table>
<tr><th rowspan='2'>Model<th colspan='2'>finance<th colspan='2'>healthcare<th colspan='2'>internet
<tr><th>0-shot<th>5-shot<th>0-shot<th>5-shot<th>0-shot<th>5-shot
<tr><td>uie-base (12L768H)<td>46.43<td>70.92<td><b>71.83</b><td>85.72<td>78.33<td>81.86
<tr><td>uie-medium (6L768H)<td>41.11<td>64.53<td>65.40<td>75.72<td>78.32<td>79.68
<tr><td>uie-mini (6L384H)<td>37.04<td>64.65<td>60.50<td>78.36<td>72.09<td>76.38
<tr><td>uie-micro (4L384H)<td>37.53<td>62.11<td>57.04<td>75.92<td>66.00<td>70.22
<tr><td>uie-nano (4L312H)<td>38.94<td>66.83<td>48.29<td>76.74<td>62.86<td>72.35
<tr><td>uie-m-large (24L1024H)<td><b>49.35</b><td><b>74.55</b><td>70.50<td><b>92.66</b><td>78.49<td><b>83.02</b>
<tr><td>uie-m-base (12L768H)<td>38.46<td>74.31<td>63.37<td>87.32<td>76.27<td>80.13
<tr><td>🧾🎓<b>uie-x-base (12L768H)</b><td>48.84<td>73.87<td>65.60<td>88.81<td><b>79.36</b> <td>81.65
</table>
Here, 0-shot means predictions are made directly through `paddlenlp.Taskflow` without any training data, while 5-shot means each category provides 5 labeled examples for model fine-tuning. The experiments show that UIE can further improve its performance with only a small amount of labeled data (few-shot).
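A minimal zero-shot sketch with `paddlenlp.Taskflow` (the schema and example sentence are illustrative, and the `model` argument value follows the PaddleNLP Taskflow documentation as an assumption; `uie-mini` supports Chinese, so the example is in Chinese):
```python
from paddlenlp import Taskflow

# Zero-shot extraction: define the schema (here: time and location, in Chinese) and run it.
schema = ["时间", "地点"]
ie = Taskflow("information_extraction", schema=schema, model="uie-mini")
print(ie("2008年8月8日,北京奥运会开幕式在国家体育场举行。"))
```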
> Detailed Info: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/applications/information_extraction/README_en.md
|
2acd9fa121492c156ef7ba1348822b9e
|
jonatasgrosman/exp_w2v2t_ar_r-wav2vec2_s545
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ar']
| false | true | true | 462 | false |
# exp_w2v2t_ar_r-wav2vec2_s545
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
e12928b3136fd119cd6194bf1d3240e8
|
pszemraj/neuspell-subwordbert-probwordnoise
|
pszemraj
| null | 6 | 0 | null | 0 | null | true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['neuspell', 'spelling', 'spell-correction']
| false | true | true | 834 | false |
# neuspell-subwordbert-probwordnoise
> towards a reliable workaround for the `neuspell` lib being broken
See the [github repository](https://github.com/neuspell/neuspell) for usage and all official information.
## Usage
Clone this model repo with git:
```bash
sudo apt-get install git-lfs -q
git clone https://huggingface.co/pszemraj/neuspell-subwordbert-probwordnoise
```
Install `neuspell` from pypi:
```bash
pip install -U neuspell -q
```
Use in python for spell correction:
```python
from neuspell import BertChecker
checker = BertChecker()
checker.from_pretrained("./neuspell-subwordbert-probwordnoise/")
checker.correct("I luk foward to receving your reply") # correct a string
checker.correct_strings(
["I luk foward to receving your reply", "were did wendigo goe boating?"]
) # correct a list of strings
```
|
b48f9f4bff23ca1c925481e925bcba8a
|
bheshaj/bart-large-cnn-small-xsum-5epochs
|
bheshaj
|
bart
| 10 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,691 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-small-xsum-5epochs
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7051
- Rouge1: 0.2859
- Rouge2: 0.0937
- Rougel: 0.2033
- Rougelsum: 0.2101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.045e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.5007 | 0.32 | 16 | 2.0311 | 0.2393 | 0.0609 | 0.1618 | 0.1832 |
| 2.0942 | 0.64 | 32 | 1.9169 | 0.2906 | 0.1053 | 0.2072 | 0.2166 |
| 1.7543 | 0.96 | 48 | 1.9069 | 0.2904 | 0.0955 | 0.2058 | 0.2187 |
| 1.2476 | 1.28 | 64 | 1.9614 | 0.2928 | 0.1043 | 0.2081 | 0.2257 |
| 1.2318 | 1.6 | 80 | 1.9622 | 0.2892 | 0.0976 | 0.2099 | 0.2245 |
| 1.0768 | 1.92 | 96 | 2.0244 | 0.2935 | 0.1008 | 0.2095 | 0.2209 |
| 0.8845 | 2.24 | 112 | 2.0605 | 0.2886 | 0.0992 | 0.2039 | 0.2146 |
| 0.5722 | 2.56 | 128 | 2.2340 | 0.2852 | 0.0946 | 0.1983 | 0.2146 |
| 0.7132 | 2.88 | 144 | 2.1948 | 0.2838 | 0.0961 | 0.2047 | 0.2163 |
| 0.4438 | 3.2 | 160 | 2.3758 | 0.2869 | 0.0906 | 0.1987 | 0.2102 |
| 0.4194 | 3.52 | 176 | 2.5609 | 0.2882 | 0.0916 | 0.2022 | 0.2133 |
| 0.3404 | 3.84 | 192 | 2.4988 | 0.2884 | 0.0907 | 0.2022 | 0.213 |
| 0.2929 | 4.16 | 208 | 2.5802 | 0.2885 | 0.0967 | 0.2046 | 0.2141 |
| 0.2466 | 4.48 | 224 | 2.6590 | 0.2823 | 0.094 | 0.1994 | 0.2119 |
| 0.1889 | 4.8 | 240 | 2.7051 | 0.2859 | 0.0937 | 0.2033 | 0.2101 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
b92c79c2a3d2d29e0494448f58eadee3
|
flax-community/gpt2-base-thai
|
flax-community
|
gpt2
| 18 | 315 |
transformers
| 6 |
text-generation
| true | false | true |
mit
|
['th']
|
['oscar']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['gpt2-base-thai']
| false | true | true | 2,312 | false |
## GPT-2 Base Thai
GPT-2 Base Thai is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_deduplicated_th` subset. The model was trained from scratch and achieved an evaluation loss of 1.708 and an evaluation perplexity of 5.516.
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by HuggingFace. All training was done on a TPUv3-8 VM, sponsored by the Google Cloud team.
All the scripts used for training can be found in the [Files and versions](https://hf.co/flax-community/gpt2-base-thai/tree/main) tab, as well as the [Training metrics](https://hf.co/flax-community/gpt2-base-thai/tensorboard) logged via TensorBoard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------- | ------- | ----- | ------------------------------------ |
| `gpt2-base-thai` | 124M | GPT-2 | `unshuffled_deduplicated_th` Dataset |
## Evaluation Results
The model was trained for 3 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid PPL | total time |
| ---------- | ---------- | --------- | ---------- |
| 1.638 | 1.708 | 5.516 | 6:12:34 |
## How to Use
### As Causal Language Model
```python
from transformers import pipeline
pretrained_name = "flax-community/gpt2-base-thai"
nlp = pipeline(
"text-generation",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("สวัสดีตอนเช้า")
```
### Feature Extraction in PyTorch
```python
from transformers import GPT2Model, GPT2TokenizerFast
pretrained_name = "flax-community/gpt2-base-thai"
model = GPT2Model.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
prompt = "สวัสดีตอนเช้า"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Team Members
- Sakares Saengkaew ([@sakares](https://hf.co/sakares))
- Wilson Wongso ([@w11wo](https://hf.co/w11wo))
|
81e5ac95c7d3f0c9ddd55d175b23a6f1
|
philschmid/distilroberta-base-ner-wikiann
|
philschmid
|
roberta
| 9 | 25 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['wikiann']
| null | 2 | 0 | 2 | 0 | 0 | 0 | 0 |
['token-classification']
| true | true | true | 1,630 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-ner-wikiann
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann dataset.
eval F1-Score: **83.78**
test F1-Score: **83.76**
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp and live in Germany"
nlp(example)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.3156
- Precision: 0.8332
- Recall: 0.8424
- F1: 0.8378
- Accuracy: 0.9193
It achieves the following results on the test set:
- Loss: 0.3023
- Precision: 0.8301
- Recall: 0.8452
- F1: 0.8376
- Accuracy: 0.92
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2
|
595dd133ec6b172c56669b061096b11f
|
bryanleeharyanto/vtt-indonesia
|
bryanleeharyanto
|
wav2vec2
| 13 | 9 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,825 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vtt-indonesia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
- Wer: 0.3582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
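The hyperparameters above correspond roughly to the following `TrainingArguments` sketch (the output directory is a placeholder, and anything not listed above keeps the library defaults):
```python
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above; unlisted values keep library defaults.
training_args = TrainingArguments(
    output_dir="./vtt-indonesia",   # placeholder, not stated in the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size: 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                      # Native AMP mixed precision, requires a GPU
)
```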
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7612 | 3.23 | 400 | 0.6405 | 0.6714 |
| 0.4143 | 6.45 | 800 | 0.3772 | 0.4974 |
| 0.2068 | 9.68 | 1200 | 0.3877 | 0.4442 |
| 0.1436 | 12.9 | 1600 | 0.3785 | 0.4212 |
| 0.1133 | 16.13 | 2000 | 0.3944 | 0.4144 |
| 0.09 | 19.35 | 2400 | 0.3695 | 0.3925 |
| 0.0705 | 22.58 | 2800 | 0.3706 | 0.3846 |
| 0.057 | 25.81 | 3200 | 0.3720 | 0.3725 |
| 0.048 | 29.03 | 3600 | 0.3472 | 0.3582 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cc99a4c8361031c86520f1809f4bff15
|
jonatasgrosman/exp_w2v2t_uk_wavlm_s722
|
jonatasgrosman
|
wavlm
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['uk']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'uk']
| false | true | true | 439 | false |
# exp_w2v2t_uk_wavlm_s722
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
d8d6ebf91a2ed2f1ad00089f8484b021
|
it5/it5-base-formal-to-informal
|
it5
|
t5
| 10 | 4 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['it']
|
['yahoo/xformal_it']
|
{'emissions': '17g', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer']
| true | true | true | 1,794 | false |
# IT5 Base for Formal-to-informal Style Transfer 🤗
This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/it5-base-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
1423f263c6591f58f59191fb7d70258e
|
vonewman/distilbert-base-uncased-finetuned-emotion
|
vonewman
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2217
- Accuracy: 0.921
- F1: 0.9212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8094 | 1.0 | 250 | 0.3157 | 0.9055 | 0.9009 |
| 0.2462 | 2.0 | 500 | 0.2217 | 0.921 | 0.9212 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
3aef6089a6fb9197a1ab4c144643e01c
|
projecte-aina/roberta-large-ca-v2
|
projecte-aina
|
roberta
| 10 | 19 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['ca']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['catalan', 'masked-lm', 'RoBERTa-large-ca-v2', 'CaText', 'Catalan Textual Corpus']
| false | true | true | 10,596 | false |
# Catalan BERTa (roberta-large-ca-v2) large model
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [CLUB benchmark](#club-benchmark)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-large-ca-v2** is a transformer-based masked language model for the Catalan language.
It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) large model
and has been trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Intended uses and limitations
**roberta-large-ca-v2** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.
## How to use
Here is how to use this model:
```python
from transformers import AutoModelForMaskedLM
from transformers import AutoTokenizer, FillMaskPipeline
from pprint import pprint
tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-large-ca-v2')
model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-large-ca-v2')
model.eval()
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = f"Em dic <mask>."
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
The training corpus consists of several corpora gathered from web crawling and public corpora.
| Corpus | Size in GB |
|-------------------------|------------|
| Catalan Crawling | 13.00 |
| Wikipedia | 1.10 |
| DOGC | 0.78 |
| Catalan Open Subtitles | 0.02 |
| Catalan Oscar | 4.00 |
| CaWaC | 3.60 |
| Cat. General Crawling | 2.50 |
| Cat. Government Crawling | 0.24 |
| ACN | 0.42 |
| Padicat | 0.63 |
| RacoCatalá | 8.10 |
| Nació Digital | 0.42 |
| Vilaweb | 0.06 |
| Tweets | 0.02 |
### Training procedure
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 50,262 tokens.
The RoBERTa-large pretraining consists of a masked language model training that follows the approach employed for the RoBERTa large model
with the same hyperparameters as in the original work.
The training lasted a total of 96 hours with 32 NVIDIA V100 GPUs of 16GB DDRAM.
## Evaluation
### CLUB benchmark
The BERTa-large model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), which was created along with the model.
It contains the following tasks and their related datasets:
1. Named Entity Recognition (NER)
**[NER (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version,
filtering out some unconventional ones, like book titles, and transcribing them into a standard CONLL-IOB format
2. Part-of-Speech Tagging (POS)
**[POS (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus.
3. Text Classification (TC)
**[TeCla](https://huggingface.co/datasets/projecte-aina/tecla)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus, with 30 labels.
4. Textual Entailment (TE)
**[TE-ca](https://huggingface.co/datasets/projecte-aina/teca)**: consisting of 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction, or neutral), extracted from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).
5. Semantic Textual Similarity (STS)
**[STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus).
6. Question Answering (QA):
**[VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad)**: contains 6,282 pairs of questions and answers, outsourced from 2095 Catalan language articles from VilaWeb newswire text.
**[ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.
**[CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa)**: an aggregation of 2 previous datasets (VilaQuAD and ViquiQuAD), 21,427 pairs of Q/A balanced by type of question, containing one question and one answer per context, although the contexts can repeat multiple times.
**[XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_.
Here are the train/dev/test splits of the datasets:
| Task (Dataset) | Total | Train | Dev | Test |
|:--|:--|:--|:--|:--|
| NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 |
| POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 |
| STS (STS-ca) | 3,073 | 2,073 | 500 | 500 |
| TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786|
| TE (TE-ca) | 21,163 | 16,930 | 2,116 | 2,117 |
| QA (VilaQuAD) | 6,282 | 3,882 | 1,200 | 1,200 |
| QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
| QA (CatalanQA) | 21,427 | 17,135 | 2,157 | 2,135 |
### Evaluation results
| Model | NER (F1) | POS (F1) | STS-ca (Comb) | TeCla (Acc.) | TEca (Acc.) | VilaQuAD (F1/EM)| ViquiQuAD (F1/EM) | CatalanQA (F1/EM) | XQuAD-ca <sup>1</sup> (F1/EM) |
| ------------|:-------------:| -----:|:------|:------|:-------|:------|:----|:----|:----|
| RoBERTa-large-ca-v2 | **89.82** | **99.02** | **83.41** | **75.46** | **83.61** | **89.34/75.50** | **89.20**/75.77 | **90.72/79.06** | **73.79**/55.34 |
| RoBERTa-base-ca-v2 | 89.29 | 98.96 | 79.07 | 74.26 | 83.14 | 87.74/72.58 | 88.72/**75.91** | 89.50/76.63 | 73.64/**55.42** |
| BERTa | 89.76 | 98.96 | 80.19 | 73.65 | 79.26 | 85.93/70.58 | 87.12/73.11 | 89.17/77.14 | 69.20/51.47 |
| mBERT | 86.87 | 98.83 | 74.26 | 69.90 | 74.63 | 82.78/67.33 | 86.89/73.53 | 86.90/74.19 | 68.79/50.80 |
| XLM-RoBERTa | 86.31 | 98.89 | 61.61 | 70.14 | 33.30 | 86.29/71.83 | 86.88/73.11 | 88.17/75.93 | 72.55/54.16 |
<sup>1</sup> : Trained on CatalanQA, tested on XQuAD-ca.
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
e65f1f77cfef72edabef002b8bc814e0
|
muhtasham/tiny-mlm-glue-rte-target-glue-rte
|
muhtasham
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,430 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-rte-target-glue-rte
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-rte](https://huggingface.co/muhtasham/tiny-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1886
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6394 | 6.41 | 500 | 0.6611 | 0.6318 |
| 0.4349 | 12.82 | 1000 | 0.8110 | 0.6245 |
| 0.268 | 19.23 | 1500 | 0.9771 | 0.6209 |
| 0.1653 | 25.64 | 2000 | 1.1886 | 0.6209 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
340752685b20a20c75bf68784626b670
|
yoshitomo-matsubara/bert-base-uncased-wnli_from_bert-large-uncased-wnli
|
yoshitomo-matsubara
|
bert
| 9 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['wnli']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'wnli', 'glue', 'kd', 'torchdistill']
| false | true | true | 703 | false |
`bert-base-uncased` fine-tuned on the WNLI dataset, using a fine-tuned `bert-large-uncased` as the teacher model, with [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
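As a rough inference sketch (not part of the original card), the checkpoint can be loaded with the standard `transformers` sequence-classification API; the premise/hypothesis pairing and the GLUE WNLI label order used in the comment are assumptions here.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yoshitomo-matsubara/bert-base-uncased-wnli_from_bert-large-uncased-wnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# WNLI is a sentence-pair task: encode the two sentences together
inputs = tokenizer(
    "The trophy doesn't fit into the brown suitcase because it is too large.",
    "The trophy is too large.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # assumed GLUE mapping: 0 = not_entailment, 1 = entailment
```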
|
6d0c25cf5242bbb3e09db433b5ee29b3
|
Shahm/t5-small-german
|
Shahm
|
t5
| 17 | 205 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
|
['de']
|
['mlsum']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'summarization']
| true | true | true | 1,084 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-seven-epoch-base-german
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5491
- Rouge1: 42.3787
- Rouge2: 32.0253
- Rougel: 38.9529
- Rougelsum: 40.4544
- Gen Len: 47.7873
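A minimal inference sketch (not from the original card), assuming the checkpoint works with the standard `summarization` pipeline; the German input text and generation parameters are illustrative only.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Shahm/t5-small-german")

text = (
    "Die Bundesregierung hat am Mittwoch neue Maßnahmen zur Förderung "
    "erneuerbarer Energien beschlossen. Die Opposition kritisierte die Pläne "
    "als unzureichend und forderte weitergehende Schritte."
)
# max_length/min_length are placeholder values, not tuned settings
print(summarizer(text, max_length=64, min_length=16)[0]["summary_text"])
```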
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
1c7456d99d2651963e83f34b9113b809
|
jhaochenz/finetuned_gpt2-xl_sst2_negation0.1_pretrainedFalse_epochs10
|
jhaochenz
|
gpt2
| 14 | 1 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null |
['sst2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,627 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-xl_sst2_negation0.1_pretrainedFalse_epochs10
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7811 | 1.0 | 1329 | 3.1311 |
| 1.0842 | 2.0 | 2658 | 3.4312 |
| 0.8781 | 3.0 | 3987 | 3.6260 |
| 0.7678 | 4.0 | 5316 | 3.7834 |
| 0.706 | 5.0 | 6645 | 3.9070 |
| 0.6531 | 6.0 | 7974 | 3.9999 |
| 0.6115 | 7.0 | 9303 | 4.0954 |
| 0.5744 | 8.0 | 10632 | 4.1809 |
| 0.5402 | 9.0 | 11961 | 4.2368 |
| 0.5158 | 10.0 | 13290 | 4.2784 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
6bf6817ab03eced394d748c5c694237e
|
PlanTL-GOB-ES/gpt2-large-bne
|
PlanTL-GOB-ES
|
gpt2
| 5 | 1,012 |
transformers
| 8 |
text-generation
| true | false | false |
apache-2.0
|
['es']
|
['bne']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['national library of spain', 'spanish', 'bne', 'gpt2-large-bne']
| false | true | true | 11,645 | false |
# GPT2-large trained with data from the National Library of Spain (BNE)
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Overview](#overview)
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Disclaimer](#disclaimer)
</details>
## Overview
- **Architecture:** gpt2-large
- **Language:** Spanish
- **Task:** text-generation
- **Data:** BNE
## Model description
**GPT2-large-bne** is a transformer-based model for the Spanish language. It is based on the [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawls performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
You can use the raw model for text generation or fine-tune it to a downstream task.
## How to use
Here is how to use this model:
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed
>>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")
>>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")
>>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model)
>>> set_seed(42)
>>> generator("La Biblioteca Nacional de España es una entidad pública y sus fines son", num_return_sequences=5)
[{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son servir como herramienta básica en la difusión de la cultura. '},
{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son el desarrollo de la educación, la cultura y el conocimiento, promoviendo actividades a través de Internet con la información que recibe del acceso a los fondos que en ella se almacenan. '},
{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la publicación y difusión cultural. '},
{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son preservar y difundir los fondos y colecciones de la Biblioteca Nacional, así como servir de punto de encuentro para toda la comunidad científica, la academia y para la sociedad civil. '},
{'generated_text': 'La Biblioteca Nacional de España es una entidad pública y sus fines son la conservación, estudio y difusión del Patrimonio Bibliográfico en cualquiera de sus formas así como la formación y perfeccionamiento de los especialistas e investigadores en el campo de la información y de las bibliotecas.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
>>> from transformers import AutoTokenizer, GPT2Model
>>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")
>>> model = GPT2Model.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")
>>> text = "La Biblioteca Nacional de España es una entidad pública y sus fines son"
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 14, 1280])
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, set_seed
>>> tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")
>>> model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne")
>>> generator = pipeline('text-generation', tokenizer=tokenizer, model=model)
>>> set_seed(42)
>>> generator("El hombre se dedica a", num_return_sequences=5)
[{'generated_text': 'El hombre se dedica a comprar móviles a sus padres, pero les paga por ellos y luego les devuelve la pasta a ella. '},
{'generated_text': 'El hombre se dedica a la venta ambulante ilegal en la zona de la Alameda, con puestos del rastro callejero o de supermercados a los que luego roba. '},
{'generated_text': 'El hombre se dedica a la venta ambulante en el Paseo de Melilla. '},
{'generated_text': 'El hombre se dedica a los tatuajes y los dibujos en el cuerpo con su apariencia física y no da a basto en las tareas domésticas. '},
{'generated_text': 'El hombre se dedica a la caza indiscriminada de animales. '}]
>>> set_seed(42)
>>> generator("La mujer se dedica a", num_return_sequences=5)
[{'generated_text': 'La mujer se dedica a comprar móviles a sus padres, pero les paga por ellos y luego no paga la factura." '},
{'generated_text': 'La mujer se dedica a la venta ambulante y su pareja vende cupones en el mercadillo navideño. '},
{'generated_text': 'La mujer se dedica a la venta al por mayor de perfumes, cosmética, complementos, y otros bienes de consumo. '},
{'generated_text': 'La mujer se dedica a los servicios sexuales y se aprovecha de los servicios religiosos. '},
{'generated_text': 'La mujer se dedica a la prostitución y tiene dos hijas del matrimonio y la propia familia de la víctima. '}]
```
## Training
### Training data
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.
To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including, among others, sentence splitting, language detection, filtering of badly-formed sentences, and deduplication of repetitive content. During the process, document boundaries are kept. This resulted in 2TB of clean Spanish corpus. Further global deduplication across the corpus was then applied, resulting in 570GB of text.
Some of the statistics of the corpus:
| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |
### Training procedure
The pretraining objective used for this architecture is next token prediction.
The configuration of the **GPT2-large-bne** model is as follows:
- gpt2-large: 36-layer, 1280-hidden, 20-heads, 774M parameters.
The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model with a vocabulary size of 50,262 tokens.
The GPT2-large-bne pre-training consists of autoregressive language model training following the approach of GPT-2.
The training lasted a total of 10 days on 32 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM.
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
This work is licensed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citation information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
abstract = {We want to thank the National Library of Spain for such a large effort on the data gathering and the Future of Computing Center, a
Barcelona Supercomputing Center and IBM initiative (2020). This work was funded by the Spanish State Secretariat for Digitalization and Artificial
Intelligence (SEDIA) within the framework of the Plan-TL.},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
keywords = {Artificial intelligence,Benchmarking,Data processing.,MarIA,Natural language processing,Spanish language modelling,Spanish language resources,Tractament del llenguatge natural (Informàtica),Àrees temàtiques de la UPC::Informàtica::Intel·ligència artificial::Llenguatge natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
title = {MarIA: Spanish Language Models},
volume = {68},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
year = {2022},
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
|
e6e6e3ecaae6f2630005d77d8ac63214
|
muhtasham/tiny-mlm-glue-stsb-from-scratch-custom-tokenizer-expand-vocab
|
muhtasham
|
bert
| 12 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,594 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-stsb-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.575 | 0.7 | 500 | 8.4501 |
| 7.8603 | 1.39 | 1000 | 7.2557 |
| 7.0873 | 2.09 | 1500 | 6.8941 |
| 6.8132 | 2.78 | 2000 | 6.7624 |
| 6.8004 | 3.48 | 2500 | 6.5626 |
| 6.7383 | 4.17 | 3000 | 6.6079 |
| 6.6661 | 4.87 | 3500 | 6.5800 |
| 6.6778 | 5.56 | 4000 | 6.5710 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
bfcb0cdcdb22c386db23ee3fe0a4a270
|
cjbarrie/distilbert-base-uncased-finetuned-emotion
|
cjbarrie
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 933 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
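As an illustrative sketch (not part of the original card), the fine-tuned checkpoint can be used through the text-classification pipeline; the exact label names depend on how the emotion dataset labels were mapped during fine-tuning and are an assumption here.
```python
from transformers import pipeline

# Label set assumed to follow the emotion dataset (e.g. joy, sadness, anger, fear, love, surprise)
classifier = pipeline(
    "text-classification",
    model="cjbarrie/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled that the training finally converged!"))
```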
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
3b154bbd089a3400bb38e2466b45591a
|
responsibility-framing/predict-perception-xlmr-blame-concept
|
responsibility-framing
|
xlm-roberta
| 12 | 21 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 9,937 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-concept
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9414
- Rmse: 0.7875
- Rmse Blame::a Un concetto astratto o un'emozione: 0.7875
- Mae: 0.6165
- Mae Blame::a Un concetto astratto o un'emozione: 0.6165
- R2: 0.2291
- R2 Blame::a Un concetto astratto o un'emozione: 0.2291
- Cos: 0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3509
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un concetto astratto o un'emozione | Mae | Mae Blame::a Un concetto astratto o un'emozione | R2 | R2 Blame::a Un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------:|:------:|:-----------------------------------------------:|:------:|:----------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0549 | 1.0 | 15 | 1.2093 | 0.8925 | 0.8925 | 0.6659 | 0.6659 | 0.0097 | 0.0097 | -0.3043 | 0.0 | 0.5 | 0.4013 | nan |
| 1.0085 | 2.0 | 30 | 1.2199 | 0.8964 | 0.8964 | 0.6494 | 0.6494 | 0.0010 | 0.0010 | -0.1304 | 0.0 | 0.5 | 0.4515 | nan |
| 1.0131 | 3.0 | 45 | 1.1798 | 0.8815 | 0.8815 | 0.6412 | 0.6412 | 0.0339 | 0.0339 | -0.2174 | 0.0 | 0.5 | 0.2402 | nan |
| 0.9931 | 4.0 | 60 | 1.1726 | 0.8788 | 0.8788 | 0.6370 | 0.6370 | 0.0397 | 0.0397 | -0.1304 | 0.0 | 0.5 | 0.2911 | nan |
| 0.9668 | 5.0 | 75 | 1.1194 | 0.8587 | 0.8587 | 0.5925 | 0.5925 | 0.0833 | 0.0833 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.8759 | 6.0 | 90 | 1.0776 | 0.8425 | 0.8425 | 0.6265 | 0.6265 | 0.1175 | 0.1175 | 0.3043 | 0.0 | 0.5 | 0.4190 | nan |
| 0.8787 | 7.0 | 105 | 1.0513 | 0.8321 | 0.8321 | 0.6087 | 0.6087 | 0.1391 | 0.1391 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.7637 | 8.0 | 120 | 1.0537 | 0.8331 | 0.8331 | 0.6265 | 0.6265 | 0.1372 | 0.1372 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.6568 | 9.0 | 135 | 0.9104 | 0.7744 | 0.7744 | 0.5887 | 0.5887 | 0.2544 | 0.2544 | 0.3043 | 0.0 | 0.5 | 0.3680 | nan |
| 0.6354 | 10.0 | 150 | 0.9055 | 0.7723 | 0.7723 | 0.6222 | 0.6222 | 0.2585 | 0.2585 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.5107 | 11.0 | 165 | 1.0173 | 0.8186 | 0.8186 | 0.6168 | 0.6168 | 0.1669 | 0.1669 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.4598 | 12.0 | 180 | 0.9155 | 0.7765 | 0.7765 | 0.6284 | 0.6284 | 0.2503 | 0.2503 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.3815 | 13.0 | 195 | 0.9255 | 0.7808 | 0.7808 | 0.6140 | 0.6140 | 0.2421 | 0.2421 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.3303 | 14.0 | 210 | 0.8506 | 0.7485 | 0.7485 | 0.6076 | 0.6076 | 0.3035 | 0.3035 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2799 | 15.0 | 225 | 1.0272 | 0.8226 | 0.8226 | 0.6699 | 0.6699 | 0.1588 | 0.1588 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2998 | 16.0 | 240 | 0.9969 | 0.8103 | 0.8103 | 0.6461 | 0.6461 | 0.1836 | 0.1836 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.3131 | 17.0 | 255 | 0.9066 | 0.7727 | 0.7727 | 0.5849 | 0.5849 | 0.2576 | 0.2576 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.2234 | 18.0 | 270 | 0.8741 | 0.7588 | 0.7588 | 0.5953 | 0.5953 | 0.2842 | 0.2842 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.2481 | 19.0 | 285 | 1.0022 | 0.8125 | 0.8125 | 0.6549 | 0.6549 | 0.1793 | 0.1793 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2333 | 20.0 | 300 | 0.9238 | 0.7801 | 0.7801 | 0.6180 | 0.6180 | 0.2435 | 0.2435 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2407 | 21.0 | 315 | 0.9868 | 0.8062 | 0.8062 | 0.6457 | 0.6457 | 0.1919 | 0.1919 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2122 | 22.0 | 330 | 0.9514 | 0.7916 | 0.7916 | 0.6204 | 0.6204 | 0.2209 | 0.2209 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2162 | 23.0 | 345 | 0.9227 | 0.7796 | 0.7796 | 0.6053 | 0.6053 | 0.2444 | 0.2444 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.1739 | 24.0 | 360 | 0.9147 | 0.7762 | 0.7762 | 0.5979 | 0.5979 | 0.2510 | 0.2510 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.2084 | 25.0 | 375 | 0.9645 | 0.7970 | 0.7970 | 0.6296 | 0.6296 | 0.2102 | 0.2102 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.1702 | 26.0 | 390 | 0.9587 | 0.7946 | 0.7946 | 0.6279 | 0.6279 | 0.2149 | 0.2149 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2146 | 27.0 | 405 | 0.9519 | 0.7918 | 0.7918 | 0.6273 | 0.6273 | 0.2205 | 0.2205 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.1645 | 28.0 | 420 | 0.9398 | 0.7868 | 0.7868 | 0.6181 | 0.6181 | 0.2304 | 0.2304 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2052 | 29.0 | 435 | 0.9492 | 0.7907 | 0.7907 | 0.6228 | 0.6228 | 0.2227 | 0.2227 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.147 | 30.0 | 450 | 0.9414 | 0.7875 | 0.7875 | 0.6165 | 0.6165 | 0.2291 | 0.2291 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
4516c26df4b57733845f899ed29cff32
|
sd-concepts-library/center-table
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,330 | false |
### center-table on Stable Diffusion
This is the `<wakefit-center-table>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
1c5aaae90b3239e7311ccf31025eb715
|
SetFit/deberta-v3-large__sst2__train-8-6
|
SetFit
|
deberta-v2
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,305 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-6
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4331
- Accuracy: 0.7106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6486 | 1.0 | 3 | 0.7901 | 0.25 |
| 0.6418 | 2.0 | 6 | 0.9259 | 0.25 |
| 0.6169 | 3.0 | 9 | 1.0574 | 0.25 |
| 0.5639 | 4.0 | 12 | 1.1372 | 0.25 |
| 0.4562 | 5.0 | 15 | 0.6090 | 0.5 |
| 0.3105 | 6.0 | 18 | 0.4435 | 1.0 |
| 0.2303 | 7.0 | 21 | 0.2804 | 1.0 |
| 0.1388 | 8.0 | 24 | 0.2205 | 1.0 |
| 0.0918 | 9.0 | 27 | 0.1282 | 1.0 |
| 0.0447 | 10.0 | 30 | 0.0643 | 1.0 |
| 0.0297 | 11.0 | 33 | 0.0361 | 1.0 |
| 0.0159 | 12.0 | 36 | 0.0211 | 1.0 |
| 0.0102 | 13.0 | 39 | 0.0155 | 1.0 |
| 0.0061 | 14.0 | 42 | 0.0158 | 1.0 |
| 0.0049 | 15.0 | 45 | 0.0189 | 1.0 |
| 0.0035 | 16.0 | 48 | 0.0254 | 1.0 |
| 0.0027 | 17.0 | 51 | 0.0305 | 1.0 |
| 0.0021 | 18.0 | 54 | 0.0287 | 1.0 |
| 0.0016 | 19.0 | 57 | 0.0215 | 1.0 |
| 0.0016 | 20.0 | 60 | 0.0163 | 1.0 |
| 0.0014 | 21.0 | 63 | 0.0138 | 1.0 |
| 0.0015 | 22.0 | 66 | 0.0131 | 1.0 |
| 0.001 | 23.0 | 69 | 0.0132 | 1.0 |
| 0.0014 | 24.0 | 72 | 0.0126 | 1.0 |
| 0.0011 | 25.0 | 75 | 0.0125 | 1.0 |
| 0.001 | 26.0 | 78 | 0.0119 | 1.0 |
| 0.0008 | 27.0 | 81 | 0.0110 | 1.0 |
| 0.0007 | 28.0 | 84 | 0.0106 | 1.0 |
| 0.0008 | 29.0 | 87 | 0.0095 | 1.0 |
| 0.0009 | 30.0 | 90 | 0.0089 | 1.0 |
| 0.0008 | 31.0 | 93 | 0.0083 | 1.0 |
| 0.0007 | 32.0 | 96 | 0.0075 | 1.0 |
| 0.0008 | 33.0 | 99 | 0.0066 | 1.0 |
| 0.0006 | 34.0 | 102 | 0.0059 | 1.0 |
| 0.0007 | 35.0 | 105 | 0.0054 | 1.0 |
| 0.0008 | 36.0 | 108 | 0.0051 | 1.0 |
| 0.0007 | 37.0 | 111 | 0.0049 | 1.0 |
| 0.0007 | 38.0 | 114 | 0.0047 | 1.0 |
| 0.0006 | 39.0 | 117 | 0.0045 | 1.0 |
| 0.0006 | 40.0 | 120 | 0.0046 | 1.0 |
| 0.0005 | 41.0 | 123 | 0.0045 | 1.0 |
| 0.0006 | 42.0 | 126 | 0.0044 | 1.0 |
| 0.0006 | 43.0 | 129 | 0.0043 | 1.0 |
| 0.0006 | 44.0 | 132 | 0.0044 | 1.0 |
| 0.0005 | 45.0 | 135 | 0.0045 | 1.0 |
| 0.0006 | 46.0 | 138 | 0.0043 | 1.0 |
| 0.0006 | 47.0 | 141 | 0.0043 | 1.0 |
| 0.0006 | 48.0 | 144 | 0.0041 | 1.0 |
| 0.0007 | 49.0 | 147 | 0.0042 | 1.0 |
| 0.0005 | 50.0 | 150 | 0.0042 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
b1681dfd8bfc004ecfbdd4af03aff4c2
|
theojolliffe/bart-paraphrase-v4-e1-rev
|
theojolliffe
|
bart
| 12 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,459 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v4-e1-rev
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4221
- Rouge1: 62.2412
- Rouge2: 56.1611
- Rougel: 59.4952
- Rougelsum: 61.581
- Gen Len: 19.6036
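Since this is a BART seq2seq checkpoint, a generic text2text-generation call should apply; the following is a rough sketch, not part of the original card, and the input sentence and `max_length` are placeholders.
```python
from transformers import pipeline

paraphraser = pipeline(
    "text2text-generation",
    model="theojolliffe/bart-paraphrase-v4-e1-rev",
)
print(paraphraser("The meeting was postponed because of the storm.", max_length=60))
```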
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0975 | 1.0 | 14185 | 0.4221 | 62.2412 | 56.1611 | 59.4952 | 61.581 | 19.6036 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ccb7debcc6c5c1eaffe8e3e97373becd
|
harveyagraphcore/bert-base-uncased-finetuned-squad
|
harveyagraphcore
|
bert
| 11 | 2 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,118 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
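Although training was run on IPUs, the saved weights are a standard BERT checkpoint, so a plain `transformers` question-answering pipeline should work for inference; this is a minimal sketch (not from the original card) and the question/context pair is a placeholder.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="harveyagraphcore/bert-base-uncased-finetuned-squad",
)
result = qa(
    question="What hardware was the model trained on?",
    context="The model was fine-tuned on SQuAD using Graphcore IPUs.",
)
print(result["answer"], result["score"])
```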
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 3
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
46dc556248a77d2e2ea280c32636c02c
|
sanderland/zelda-the-cat
|
sanderland
| null | 17 | 8 |
diffusers
| 3 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
| false | true | true | 1,497 | false |
# DreamBooth model for the zzelda concept trained by Sanderbaduk on dataset of cats.
This is a Stable Diffusion model fine-tuned on pictures of my mum's cat "Zelda" with DreamBooth. It can be used by using the phrase 'zzelda cat' in a prompt.
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
<table>
<tr>
<td>One of the images used to fine-tune on<br>"a photo of zzelda cat on a chair"</td>
<td>One of the images generated by the model<br>"a photo of zzelda cat in space"</td>
</tr>
<tr>
<td>
<img src="http://i.imgur.com/zFOzQtf.jpg" style="max-height:400px">
</td>
<td>
<img src="http://i.imgur.com/12Nilhg.png" style="max-height:400px">
</td>
</tr>
</table>
## Description
This is a Stable Diffusion model fine-tuned on images of my mum's cat Zelda for the animal theme.
To experiment a bit, I used a custom prompt for each image based on the file name. This works, but does not seem to have made much of a difference.
The model was trained on CPU after encountering issues with CUDA, taking around 2 hours on 32 cores.
It works a lot better locally than in the widget, where it tends to take a few more tries to get the right cat.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Sanderbaduk/zelda-the-cat')
image = pipeline().images[0]
image
```
|
b5ab0dcbe1073cab8af6123f92108286
|
ihanif/whisper-large-pashto
|
ihanif
|
whisper
| 24 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['google/fleurs']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,739 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Pashto
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the google/fleurs ps_af dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8623
- Wer: 54.0685
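A minimal transcription sketch (not part of the original card); `audio.wav` is a placeholder for a 16 kHz Pashto recording, and the chunking setting is illustrative.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ihanif/whisper-large-pashto",
    chunk_length_s=30,  # split long recordings into 30 s chunks
)
print(asr("audio.wav")["text"])
```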
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.2281 | 16.59 | 100 | 1.0951 | 69.3118 |
| 0.7529 | 33.3 | 200 | 0.8693 | 57.5635 |
| 0.5372 | 49.89 | 300 | 0.8399 | 54.7350 |
| 0.4398 | 66.59 | 400 | 0.8623 | 54.0685 |
| 0.3244 | 83.3 | 500 | 0.9098 | 54.7505 |
| 0.238 | 99.89 | 600 | 0.9607 | 55.3782 |
| 0.2014 | 116.59 | 700 | 1.0077 | 55.9206 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
61578c224f1143c79bb5a04bef867c9e
|
bassemessam/wav2vec2-large-xls-r-300m-arabic-saudi-colab
|
bassemessam
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,099 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-arabic-saudi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
daa93792ade7a445166dc80b7ee1bf12
|
Helsinki-NLP/opus-mt-st-es
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-st-es
* source languages: st
* target languages: es
* OPUS readme: [st-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.es | 31.3 | 0.499 |
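A minimal translation sketch using the standard Marian classes in `transformers` (not part of the original card); the Sesotho input sentence is only a placeholder.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-st-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder Sesotho source sentence
batch = tokenizer(["Lumela, lefatše!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```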
|
3f602904695ac17b7cfcd8f7c579f881
|
hamjang/xlm-roberta-base-finetuned-panx-de
|
hamjang
|
xlm-roberta
| 14 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,259 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1368
- F1: 0.8517
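As an illustrative sketch (not from the original card), the checkpoint can be used with the token-classification pipeline; the assumption that entity labels follow the PAN-X/WikiANN scheme (PER/ORG/LOC) comes from the xtreme dataset, not from this card.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hamjang/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```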
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2468 | 1.0 | 787 | 0.1583 | 0.8312 |
| 0.1187 | 2.0 | 1574 | 0.1368 | 0.8517 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
e2280e41b96dd1308233a59f3bdecd9c
|
dusty310/DialoGPT-medium-Misaki
|
dusty310
|
gpt2
| 31 | 2 |
transformers
| 0 |
conversational
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 568 | false |
# Misaki DialoGPT Model
I tried to base it off of "Misaki Ayuzawa" from the anime
"Kaichou Wa Maid-Sama! (会長はメイド様!, lit. The Student Council President is a Maid)".
This was mostly done just for fun, but it is open to any pull requests to make it better. :>
There are currently a couple of issues with the model, like how it just blurts out '!!!!!!'.
I haven't had much time to ponder what makes that happen. (Do let me know if there's something I can change.)
This uses Microsoft's infamous DialoGPT-medium model and is trained on transcripts
of the anime episodes.
|
a25047c07debe57a426380a0f677907e
|
CharyWind/huihui-cat-heywhale
|
CharyWind
| null | 17 | 18 |
diffusers
| 1 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
| false | true | true | 797 | false |
# DreamBooth model for the huihui concept trained by CharyWind.
This is a Stable Diffusion model fine-tuned on the huihui concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of huihui cat**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cat` images for the wildcard theme,
for the Hugging Face DreamBooth Hackathon, from the HF CN Community,
in collaboration with HeyWhale.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('CharyWind/huihui-cat-heywhale')
image = pipeline().images[0]
image
```
|
dc3683508b57e5d046258afcd0273083
|
google/t5-efficient-small-kv256
|
google
|
t5
| 12 | 7 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,260 | false |
# T5-Efficient-SMALL-KV256 (Deep-Narrow version)
T5-Efficient-SMALL-KV256 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-kv256** - is of model type **Small** with the following variations:
- **kv** is **256**
It has **117.14** million parameters and thus requires *ca.* **468.58 MB** of memory in full precision (*fp32*)
or **234.29 MB** of memory in half precision (*fp16* or *bf16*).
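The parameter and memory figures above can be sanity-checked with a short snippet; this is a sketch, not part of the original card, and the bytes-per-parameter arithmetic assumes plain fp32/fp16 weights with no extra buffers.
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-kv256")
n_params = sum(p.numel() for p in model.parameters())

# 4 bytes per parameter in fp32, 2 bytes in fp16/bf16
print(f"{n_params / 1e6:.2f}M parameters")
print(f"~{n_params * 4 / 1e6:.0f} MB in fp32, ~{n_params * 2 / 1e6:.0f} MB in fp16")
```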
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
96208efe0d4821fabb5a13bd022839a2
|
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s625
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 497 | false |
# exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s625
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
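A minimal transcription sketch with the HuggingSound tool mentioned above (the audio paths are placeholders, and 16 kHz input is assumed as stated); the exact return format is taken from HuggingSound's documented usage and should be checked against the installed version.
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-5_sixties-5_s625"
)

# Placeholder paths to 16 kHz audio files
audio_paths = ["sample1.wav", "sample2.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```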
|
7f31c598007d7f667be6b1aee213d851
|
RawMean/farsi_lastname_classifier_1
|
RawMean
|
deberta-v2
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,842 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farsi_lastname_classifier_1
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0482
- Pearson: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 12 | 0.2705 | 0.7018 |
| No log | 2.0 | 24 | 0.0993 | 0.7986 |
| No log | 3.0 | 36 | 0.0804 | 0.8347 |
| No log | 4.0 | 48 | 0.0433 | 0.9246 |
| No log | 5.0 | 60 | 0.0559 | 0.9176 |
| No log | 6.0 | 72 | 0.0465 | 0.9334 |
| No log | 7.0 | 84 | 0.0503 | 0.9154 |
| No log | 8.0 | 96 | 0.0438 | 0.9222 |
| No log | 9.0 | 108 | 0.0468 | 0.9260 |
| No log | 10.0 | 120 | 0.0482 | 0.9232 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
68be19fb7443d4f27db4ed5ed669d7d5
|
jonatasgrosman/exp_w2v2r_de_xls-r_gender_male-2_female-8_s755
|
jonatasgrosman
|
wav2vec2
| 10 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de']
| false | true | true | 476 | false |
# exp_w2v2r_de_xls-r_gender_male-2_female-8_s755
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
8397894425f824fff14a8c9a8cd3dc2f
|
baffo32/genji-python-6B-split
|
baffo32
|
gpt_neo
| 346 | 6 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
|
['en']
|
['the Pile']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'causal-lm']
| false | true | true | 4,457 | false |
# Genji-python 6B
For example usage or to easily use the model you can check our colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Model Description
Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4GB in size.
The split version has its checkpoints split into shards, which uses less system RAM while loading and makes the model faster to load.
This model needs more effort to set up, as you need to install git-lfs and pull the repo.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After pre-training, it was finetuned on the Python code taken from the Pile.
## Training procedure
Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.
## Intended Use
This model is trained to assist with writing Python code and for having fun trying weird stuff with it.
### How to use
This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)
to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```
**git-lfs** also needs to be installed, on ubuntu:
```bash
apt install git-lfs
```
after it's installed, initialize git-lfs:
```bash
git lfs install
```
then clone this repo:
```bash
git clone https://huggingface.co/NovelAI/genji-python-6B-split
```
Now we can load the model.
We recommend using the model in FP16. That way, it fits on 16GB VRAM cards.
How to use:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GPTNeoForCausalLM,
)
model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
text = '''def print_customer_name'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```
When run, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
"""Print the name of a customer."""
if not self.is_valid():
return
print("Customer: {}".format(customer))
```
For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Eval results
TBD
## Acknowledgements
This project was made possible by the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for the pretraining of GPT-J 6B.
Thanks to everyone who contributed to this project:
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz)
|
1cd10b6a2cb5a88b1673c8976622107e
|
muhtasham/tiny-mlm-snli-target-glue-wnli
|
muhtasham
|
bert
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,424 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-snli-target-glue-wnli
This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1223
- Accuracy: 0.0704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.689 | 25.0 | 500 | 0.7743 | 0.2394 |
| 0.6581 | 50.0 | 1000 | 1.1395 | 0.1127 |
| 0.6078 | 75.0 | 1500 | 1.6260 | 0.0704 |
| 0.5462 | 100.0 | 2000 | 2.1223 | 0.0704 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
61eb1caaacc78af5dd7c91e2720f0d23
|
KoichiYasuoka/roberta-base-vietnamese
|
KoichiYasuoka
|
roberta
| 8 | 10 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['vi']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vietnamese', 'masked-lm', 'wikipedia']
| false | true | true | 693 | false |
# roberta-base-vietnamese
## Model Description
This is a RoBERTa model pre-trained on Vietnamese Wikipedia texts. Training took 20 hours 11 minutes on an NVIDIA A100-SXM4-40GB. You can fine-tune `roberta-base-vietnamese` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-vietnamese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-vietnamese-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-vietnamese")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-vietnamese")
```
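A minimal masked-word prediction sketch on top of the snippet above (the example sentence and the use of the generic fill-mask pipeline are illustrative, not part of the original card):
```py
from transformers import pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# use the model's own mask token so the prompt matches the vocabulary convention
print(fill_mask(f"Hà Nội là thủ đô của {fill_mask.tokenizer.mask_token} Nam."))
```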
|
4be2c7feb82978984f35e52245690375
|
sd-concepts-library/sintez-ico
|
sd-concepts-library
| null | 13 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,460 | false |
### sintez-ico on Stable Diffusion
This is the `<sintez-ico>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:








|
b3203caa3eefcec4751395cde52332d6
|
Geotrend/bert-base-en-fr-ar-cased
|
Geotrend
|
bert
| 8 | 2 |
transformers
| 0 |
fill-mask
| true | true | true |
apache-2.0
|
['multilingual']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,301 | false |
# bert-base-en-fr-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
8d7085e1a124a2e27282b4c1869e0c7d
|
hisaoka/t5-large_radiology-cardiothoracic-imagingcancer-0.9
|
hisaoka
|
t5
| 10 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,027 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large_radiology-cardiothoracic-imagingcancer-0.9
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
a70981546df90985530e537e11fd360d
|
courtneypham/bert-finetuned-squad
|
courtneypham
|
bert
| 12 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
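A minimal extractive question-answering sketch (the question/context pair is illustrative and not part of the original card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="courtneypham/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a BERT model fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```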
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
84fd27510758ace68634fd2aa9f87145
|
Helsinki-NLP/opus-mt-sv-ln
|
Helsinki-NLP
|
marian
| 10 | 10 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-sv-ln
* source languages: sv
* target languages: ln
* OPUS readme: [sv-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ln/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ln | 30.6 | 0.541 |
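A minimal usage sketch with the standard MarianMT interface (not part of the original card; the Swedish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-ln"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate Swedish (sv) to Lingala (ln)
batch = tokenizer(["Jag läser en bok."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```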
|
85f7dd17c454e4c8d83c2bafafadd41d
|
kmewhort/beit-sketch-classifier-pt-metaset-2
|
kmewhort
|
beit
| 7 | 190 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,622 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-sketch-classifier-pt-metaset-2
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6703
- Accuracy: 0.8282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.8028 | 1.0 | 76608 | 0.7586 | 0.8007 |
| 0.7168 | 2.0 | 153216 | 0.6983 | 0.8154 |
| 0.6357 | 3.0 | 229824 | 0.6676 | 0.8240 |
| 0.5707 | 4.0 | 306432 | 0.6606 | 0.8276 |
| 0.4254 | 5.0 | 383040 | 0.6703 | 0.8282 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
45df52d0e994c865bd201ccd4dbeeac5
|
jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s682
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 477 | false |
# exp_w2v2r_en_xls-r_gender_male-10_female-0_s682
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
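A minimal transcription sketch with HuggingSound (the audio paths are placeholders; this snippet is not part of the original card):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_xls-r_gender_male-10_female-0_s682")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16 kHz speech input expected
transcriptions = model.transcribe(audio_paths)
```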
|
c4243ccfd2960636e8df8091ee8e4bd8
|
Froddan/jannismayr
|
Froddan
| null | 11 | 0 | null | 3 |
text-to-image
| false | false | false |
cc0-1.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 1,137 | false |
# Stable Diffusion fine-tuned on art by [Jannis Mayr](https://www.artstation.com/joblyn)
### Usage
Use it by adding the keyword "jannismayr" to the prompt. The model was trained with different classnames, which can also be added to the prompt. These classnames are the second words of the filenames.
## Samples
For this model I experimented and made several versions. I won't bore you with the details, but there were variations in learning rates and classifications. Just look at the samples and pick the one that suits you best.
The full images can be found in the files and versions tab as they are quite large.
<img src="https://huggingface.co/Froddan/jannismayr/resolve/main/xy_grid-0000-1454625692-.jpg"/>
<img src="https://huggingface.co/Froddan/jannismayr/resolve/main/xy_grid-0001-3762916514-.jpg"/>
<img src="https://huggingface.co/Froddan/jannismayr/resolve/main/xy_grid-0002-590770723-.jpg"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
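A minimal text-to-image sketch via `diffusers` (this assumes the checkpoint is available in diffusers format; the prompt is just an example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Froddan/jannismayr", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "colourful character design, jannismayr style"
image = pipe(prompt).images[0]
image.save("jannismayr_sample.png")
```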
|
97a538249bd9b6159e44556c09ee426c
|
lmqg/bart-base-tweetqa-qag
|
lmqg
|
bart
| 14 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qag_tweetqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['questions and answers generation']
| true | true | true | 4,899 | false |
# Model Card of `lmqg/bart-base-tweetqa-qag`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-base-tweetqa-qag")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-base-tweetqa-qag")
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
| BERTScore | 91.19 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_1 | 39.8 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_2 | 27.7 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_3 | 19.05 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_4 | 13.27 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| METEOR | 25.66 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| MoverScore | 61.59 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (BERTScore) | 91.5 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (MoverScore) | 63.78 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (BERTScore) | 91.9 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (MoverScore) | 64.77 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (BERTScore) | 91.11 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (MoverScore) | 62.89 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| ROUGE_L | 33.39 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_tweetqa
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: facebook/bart-base
- max_length: 256
- max_length_output: 128
- epoch: 15
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-tweetqa-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
af5271488f4ecbeb65cabe6f95982616
|
lgris/whisper-small-cv11-pt
|
lgris
|
whisper
| 10 | 3 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,894 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small PT with Common Voice 11
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3487
- Wer: 14.3802
## Model description
More information needed
## Intended uses & limitations
More information needed
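For reference, a minimal Portuguese transcription sketch with the transformers ASR pipeline (not part of the auto-generated card; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/whisper-small-cv11-pt",
    chunk_length_s=30,
)
print(asr("audio_pt.wav")["text"])  # 16 kHz mono audio
```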
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1202 | 0.88 | 1000 | 0.2225 | 15.5847 |
| 0.1024 | 1.76 | 2000 | 0.2160 | 15.0651 |
| 0.0832 | 2.64 | 3000 | 0.2259 | 15.0923 |
| 0.0081 | 3.51 | 4000 | 0.2519 | 14.7345 |
| 0.0387 | 4.39 | 5000 | 0.2718 | 14.7311 |
| 0.0039 | 5.27 | 6000 | 0.3031 | 14.5914 |
| 0.001 | 6.15 | 7000 | 0.3238 | 14.5710 |
| 0.0007 | 7.03 | 8000 | 0.3285 | 14.5113 |
| 0.0009 | 7.91 | 9000 | 0.3467 | 14.3580 |
| 0.0008 | 8.79 | 10000 | 0.3487 | 14.3802 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
|
46c17779c7409079f5b5f6e364c98ae6
|
gokuls/distilbert_sa_GLUE_Experiment_qqp_256
|
gokuls
|
distilbert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,493 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_qqp_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4425
- Accuracy: 0.8030
- F1: 0.7323
- Combined Score: 0.7677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.53 | 1.0 | 1422 | 0.5023 | 0.7557 | 0.6592 | 0.7075 |
| 0.479 | 2.0 | 2844 | 0.4823 | 0.7679 | 0.6483 | 0.7081 |
| 0.4522 | 3.0 | 4266 | 0.4788 | 0.7741 | 0.6474 | 0.7108 |
| 0.4263 | 4.0 | 5688 | 0.4753 | 0.7829 | 0.6911 | 0.7370 |
| 0.4009 | 5.0 | 7110 | 0.4536 | 0.7906 | 0.7194 | 0.7550 |
| 0.3772 | 6.0 | 8532 | 0.4497 | 0.7949 | 0.7200 | 0.7574 |
| 0.3548 | 7.0 | 9954 | 0.4453 | 0.8010 | 0.7201 | 0.7606 |
| 0.3332 | 8.0 | 11376 | 0.4425 | 0.8030 | 0.7323 | 0.7677 |
| 0.3132 | 9.0 | 12798 | 0.4654 | 0.7938 | 0.7375 | 0.7657 |
| 0.2951 | 10.0 | 14220 | 0.4551 | 0.8056 | 0.7423 | 0.7739 |
| 0.2777 | 11.0 | 15642 | 0.4675 | 0.8120 | 0.7374 | 0.7747 |
| 0.2625 | 12.0 | 17064 | 0.4946 | 0.8082 | 0.7451 | 0.7766 |
| 0.2473 | 13.0 | 18486 | 0.5041 | 0.8102 | 0.7469 | 0.7786 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
66f0a1f9a1049ad4d4386d299da1ebbc
|
rbawden/modern_french_normalisation
|
rbawden
|
fsmt
| 10 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,990 | false |
# Modern French normalisation model
Normalisation model from Modern (17th c.) French to contemporary French. It was introduced in [this paper](https://hal.inria.fr/hal-03540226/) (see citation below). The main research repository can be found [here](https://github.com/rbawden/ModFr-Norm). If you use this model, please cite our research paper (see [below](#cite)).
## Model description
The normalisation model is trained on the [FreEM_norm corpus](https://freem-corpora.github.io/corpora/norm/), which is a parallel dataset of 17th-century French texts and their manually normalised versions following contemporary French spelling. The model is a transformer with 2 encoder layers, 4 decoder layers, an embedding dimension of 256 and a feedforward dimension of 1024. The associated tokeniser is trained with SentencePiece and the BPE strategy with a BPE vocabulary of 1000 tokens.
### Intended uses & limitations
The model is designed to be used to normalise 17th c. French texts. The best performance can be seen on texts from similar genres as those produced within this century of French.
### How to use
The model is to be used with the custom pipeline available in this repository (transformers>=4.21.0):
```
from transformers import pipeline
normaliser = pipeline(model="rbawden/modern_french_normalisation", batch_size=32, beam_size=5, cache_file="./cache.pickle", trust_remote_code=True)
list_inputs = ["Elle haïſſoit particulierement le Cardinal de Lorraine;", "Adieu, i'iray chez vous tantoſt vous rendre grace."]
list_outputs = normaliser(list_inputs)
print(list_outputs)
>> [{'text': 'Elle haïssait particulièrement le Cardinal de Lorraine; ', 'alignment': [([0, 3], [0, 3]), ([5, 12], [5, 12]), ([14, 29], [14, 29]), ([31, 32], [31, 32]), ([34, 41], [34, 41]), ([43, 44], [43, 44]), ([46, 53], [46, 53]), ([54, 54], [54, 54])]}, {'text': "Adieu, j'irai chez vous tantôt vous rendre grâce. ", 'alignment': [([0, 4], [0, 4]), ([5, 5], [5, 5]), ([7, 8], [7, 8]), ([9, 12], [9, 12]), ([14, 17], [14, 17]), ([19, 22], [19, 22]), ([24, 30], [24, 29]), ([32, 35], [31, 34]), ([37, 42], [36, 41]), ([44, 48], [43, 47]), ([49, 49], [48, 48])]}]
```
To disable postprocessing (faster but less good normalisation), set the arguments `no_postproc_lex` and `no_post_clean` to True when instantiating the pipeline:
```
normaliser = pipeline(model="rbawden/modern_french_normalisation", no_postproc_lex=True, no_post_clean=True, batch_size=32, beam_size=5, cache_file="./cache.pickle", trust_remote_code=True)
```
### Limitations and bias
The model has been trained in a supervised fashion and therefore, like any such model, is likely to perform well on texts similar to those used for training and less well on other texts. Whilst care was taken to include a range of different domains from different periods in the 17th c. in the training data, there are nevertheless imbalances, notably with some decades (e.g. 1610s) being underrepresented.
The model reaches a high performance, but could in rare cases result in changes to the text other than those involving spelling conventions (e.g. changing words, deleting or hallucinating words). A post-processing step is introduced in the pipeline file to avoid these problems, which involves a look-up in a contemporary French lexicon ([The Le*fff*](http://almanach.inria.fr/software_and_resources/custom/Alexina-en.html)) and checks to make sure that the normalised words do not stray too far from the original source words.
## Training data
The model is trained on the parallel FreEM dataset [FreEM_norm corpus](https://freem-corpora.github.io/corpora/norm/), consisting of 17,930 training sentences and 2,443 development sentences (used for model selection).
## Training procedure
### Preprocessing
Texts are normalised (in terms of apostrophes, quotes and spaces), before being tokenised with SentencePiece and a vocabulary size of 1000. The inputs are of the form:
```
Sentence in Early Modern French </s>
```
where `</s>` is the end-of-sentence (eos) token.
### Training
The model was trained using [Fairseq](https://github.com/facebookresearch/fairseq) and ported to HuggingFace using an adapted version of [Stas's scripts for FSMT models](https://huggingface.co/blog/porting-fsmt).
### Evaluation results
Coming soon... (once the post-processing extension has been finalised)
## BibTex entry and citation info
<a name="cite"></a>
Rachel Bawden, Jonathan Poinhos, Eleni Kogkitsidou, Philippe Gambette, Benoît Sagot and Simon Gabay. 2022. [Automatic Normalisation of Early Modern French](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.358.pdf). In Proceedings of the 13th Language Resources and Evaluation Conference. European Language Resources Association. Marseille, France.
Bibtex:
```
@inproceedings{bawden-etal-2022-automatic,
title = {{Automatic Normalisation of Early Modern French}},
author = {Bawden, Rachel and Poinhos, Jonathan and Kogkitsidou, Eleni and Gambette, Philippe and Sagot, Beno{\^i}t and Gabay, Simon},
url = {https://hal.inria.fr/hal-03540226},
booktitle = {Proceedings of the 13th Language Resources and Evaluation Conference},
publisher = {European Language Resources Association},
year = {2022},
address = {Marseille, France},
pages = {3354--3366},
url = {http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.358.pdf}
}
```
And to reference the FreEM-norm dataset used in the experiments:
Simon Gabay. (2022). FreEM-corpora/FreEMnorm: FreEM norm Parallel corpus (1.0.0). Zenodo. https://doi.org/10.5281/zenodo.5865428
```
@software{simon_gabay_2022_5865428,
author = {Simon Gabay},
title = {{FreEM-corpora/FreEMnorm: FreEM norm Parallel
corpus}},
month = jan,
year = 2022,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.5865428},
url = {https://doi.org/10.5281/zenodo.5865428}
}
```
|
8756fa2feada9501bb03b18585fa8de2
|
Helsinki-NLP/opus-mt-de-af
|
Helsinki-NLP
|
marian
| 11 | 197 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['de', 'af']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,984 | false |
### deu-afr
* source group: German
* target group: Afrikaans
* OPUS readme: [deu-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-afr/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.afr | 51.3 | 0.690 |
### System Info:
- hf_name: deu-afr
- source_languages: deu
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'af']
- src_constituents: {'deu'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-afr/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: afr
- short_pair: de-af
- chrF2_score: 0.69
- bleu: 51.3
- brevity_penalty: 1.0
- ref_len: 9507.0
- src_name: German
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: af
- prefer_old: False
- long_pair: deu-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
be41e76e5ecd1c8addca9914e247cdce
|
rudzinskimaciej/rbto-v3
|
rudzinskimaciej
| null | 15 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 617 | false |
### rbto_v3 Dreambooth model trained by rudzinskimaciej with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
20ad9af49ee4895ff34956f2634df8d7
|
HooshvareLab/bert-fa-base-uncased-sentiment-digikala
|
HooshvareLab
|
bert
| 12 | 181 |
transformers
| 0 |
text-classification
| true | true | true |
apache-2.0
|
['fa']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,633 | false |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to provide some functionality for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in both binary and multi-class forms.
### Digikala
Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:
| Label | # |
|:---------------:|:------:|
| no_idea | 10394 |
| not_recommended | 15885 |
| recommended | 36042 |
**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.72 | 81.74* | 80.74 | - |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
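Alternatively, a minimal sentiment-classification sketch with the transformers pipeline (the example comment is illustrative, not from the original card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-sentiment-digikala",
)
print(classifier("کیفیت محصول عالی بود و به موقع رسید."))
```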
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Farahani, Mehrdad and Gharachorloo, Mohammad and Farahani, Marzieh and Manthouri, Mohammad},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
|
c7c1b60620c082c4cc4a3f0d781e5101
|
SaintGermain/is-this-furry
|
SaintGermain
|
swin
| 5 | 4 |
transformers
| 0 |
image-classification
| true | false | false |
mit
| null | null |
{'emissions': 2.8752228959859316}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'vision', 'image-classification']
| false | true | true | 1,096 | false |
This detects furry images, mostly profile pictures, although it may be able to detect any sort of furry picture (I haven't tried it, though).
# Dataset Info
This was trained on scraped pfp images from Mastodon, with some non-pfp images thrown in for "balancing" (i.e. ensuring Pokémon, kemonomimi (catgirls/foxgirls/etc.), and normal animals weren't classified as 'furry').
**Furry images**: 551
**Non-furry images**: 641
# Disclaimer
Please do not ruin this by using it to harass anyone.
This is *not* intended to be used for targeted harassment, and I will explicitly condemn any use that attempts to do so.
If you're wondering why I made this public in the first place:
I believe in freedom of *information* - this image classification model has various perfectly valid uses, and it's kinda useless to keep it private.
# Statistics
## Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2890884434
- CO2 Emissions (in grams): 2.8752
## Validation Metrics
- Loss: 0.175
- Accuracy: 0.933
- Precision: 0.938
- Recall: 0.938
- AUC: 0.975
- F1: 0.938
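A minimal usage sketch with the transformers image-classification pipeline (the image path is a placeholder; this snippet is not part of the original card):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="SaintGermain/is-this-furry")
print(classifier("profile_picture.png"))  # returns the scores for the two classes
```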
|
608baa1829d033f5dbcdba3b11a6b9b2
|
infinitejoy/wav2vec2-large-xls-r-300m-urdu
|
infinitejoy
|
wav2vec2
| 16 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ur']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event', 'ur']
| true | true | true | 2,373 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# infinitejoy/wav2vec2-large-xls-r-300m-urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: NA
- Wer: NA
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py \
--model_id infinitejoy/wav2vec2-large-xls-r-300m-urdu --dataset speech-recognition-community-v2/dev_data \
--config ur --split validation --chunk_length_s 10 --stride_length_s 1
```
### Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "infinitejoy/wav2vec2-large-xls-r-300m-urdu"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "ur", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
### Eval results on Common Voice 7 "test" (WER):
|
7b2895697fde298e95da10b3b066408c
|
tomekkorbak/suspicious_noyce
|
tomekkorbak
| null | 2 | 0 | null | 0 | null | false | false | false |
mit
|
['en']
|
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,715 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# suspicious_noyce
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'suspicious_noyce',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/21rsvjy5
|
c688d4467827883c751eed859c781dc4
|
pszemraj/mGPT-Peter-2E
|
pszemraj
|
gpt2
| 15 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null |
['mc4', 'Wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multilingual', 'PyTorch', 'Transformers', 'gpt3', 'gpt2', 'Deepspeed', 'Megatron', 'mGPT']
| false | true | true | 1,883 | false |
# mGPT: fine-tune on message data - 2E
- This model is a fine-tuned version of [sberbank-ai/mGPT](https://huggingface.co/sberbank-ai/mGPT) on 80k messages. This builds on the minimum-working-example checkpoint [here](https://huggingface.co/pszemraj/mGPT-Peter-mwe).
- 2E = 2 epochs
## Model description
- testing whether fine-tuned personality data bleeds over to other languages without being trained on them explicitly
**Interesting findings thus far:**
- Passing a generic non-English word after the `<name-identifier>` helps ensure the model responds in the question's language (see: any example).
- Model generations (in general) remain semantically consistent, even if the generations switch from `<language>` to English in the middle of the generated text. This demonstrates some sort of "universal concept understanding".
### Usage in python
Install the transformers library if you don't have it:
```
pip install -U transformers
```
load the model into a pipeline object:
```
from transformers import pipeline
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_chatbot = pipeline('text-generation',
'pszemraj/mGPT-Peter-2E',
device=0 if device == 'cuda' else -1,
)
```
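You can then generate a reply, for example (the prompt format and sampling parameters are illustrative; `<name-identifier>` is the persona tag described above and should be replaced with the identifier you actually prompt with):
```
prompt = "¿Cómo estás hoy?\n<name-identifier>:"
response = my_chatbot(prompt, max_new_tokens=64, do_sample=True, top_p=0.95, temperature=0.7)
print(response[0]['generated_text'])
```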
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1 (in addition to all training on prior checkpoints)
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cd2e934cad91a6ba284ba8d3be437cb0
|
sd-concepts-library/wojaks-now
|
sd-concepts-library
| null | 8 | 0 | null | 4 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 918 | false |
### wojaks-now on Stable Diffusion
This is the `<red-wojak>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
6335ca411a0a1e5f93fa111df057fc7b
|
microsoft/speecht5_vc
|
microsoft
|
speecht5
| 8 | 271 |
transformers
| 3 |
audio-to-audio
| true | false | false |
mit
| null |
['cmu-arctic']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'audio-to-audio']
| false | true | true | 4,228 | false |
# SpeechT5 (voice conversion task)
SpeechT5 model fine-tuned for voice conversion (speech-to-speech) on CMU ARCTIC.
This model was introduced in [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
SpeechT5 was first released in [this repository](https://github.com/microsoft/SpeechT5/), [original weights](https://huggingface.co/mechanicalsea/speecht5-vc). The license used is [MIT](https://github.com/microsoft/SpeechT5/blob/main/LICENSE).
Disclaimer: The team releasing SpeechT5 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model Description
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder.
Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder.
Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
## Intended Uses & Limitations
You can use this model for voice conversion (speech-to-speech). See the [model hub](https://huggingface.co/models?search=speecht5) to look for fine-tuned versions on a task that interests you.
Currently, both the feature extractor and model support PyTorch.
## Citation
**BibTeX:**
```bibtex
@inproceedings{ao-etal-2022-speecht5,
title = {{S}peech{T}5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing},
author = {Ao, Junyi and Wang, Rui and Zhou, Long and Wang, Chengyi and Ren, Shuo and Wu, Yu and Liu, Shujie and Ko, Tom and Li, Qing and Zhang, Yu and Wei, Zhihua and Qian, Yao and Li, Jinyu and Wei, Furu},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month = {May},
year = {2022},
pages={5723--5738},
}
```
## How to Get Started With the Model
Use the code below to convert a mono 16 kHz speech waveform into another speaker's voice.
```python
from transformers import SpeechT5Processor, SpeechT5ForSpeechToSpeech, SpeechT5HifiGan
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset = dataset.sort("id")
sampling_rate = dataset.features["audio"].sampling_rate
example_speech = dataset[0]["audio"]["array"]
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_vc")
model = SpeechT5ForSpeechToSpeech.from_pretrained("microsoft/speecht5_vc")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
inputs = processor(audio=example_speech, sampling_rate=sampling_rate, return_tensors="pt")
# load xvector containing speaker's voice characteristics from a file
import numpy as np
import torch
speaker_embeddings = np.load("xvector_speaker_embedding.npy")
speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0)
speech = model.generate_speech(inputs["input_values"], speaker_embeddings, vocoder=vocoder)
import soundfile as sf
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```
|
92d5bf8856112c92bb30c7d35f92376b
|
jungjongho/wav2vec2-large-xlsr-korean-demo-colab
|
jungjongho
|
wav2vec2
| 16 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,113 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4534
- Wer: 0.3272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 17.4809 | 0.65 | 400 | 4.6145 | 1.0 |
| 4.4863 | 1.29 | 800 | 4.3819 | 1.0 |
| 4.2921 | 1.94 | 1200 | 4.1163 | 0.9970 |
| 2.7971 | 2.59 | 1600 | 1.5376 | 0.8379 |
| 1.5061 | 3.24 | 2000 | 1.0354 | 0.7299 |
| 1.1123 | 3.88 | 2400 | 0.7909 | 0.6418 |
| 0.9037 | 4.53 | 2800 | 0.6345 | 0.5698 |
| 0.779 | 5.18 | 3200 | 0.5909 | 0.5571 |
| 0.6834 | 5.83 | 3600 | 0.5339 | 0.5063 |
| 0.6287 | 6.47 | 4000 | 0.5326 | 0.4954 |
| 0.5518 | 7.12 | 4400 | 0.4930 | 0.4607 |
| 0.5315 | 7.77 | 4800 | 0.4577 | 0.4451 |
| 0.4867 | 8.41 | 5200 | 0.4547 | 0.4382 |
| 0.4543 | 9.06 | 5600 | 0.4581 | 0.4371 |
| 0.4089 | 9.71 | 6000 | 0.4387 | 0.4258 |
| 0.3893 | 10.36 | 6400 | 0.4300 | 0.4100 |
| 0.3751 | 11.0 | 6800 | 0.4265 | 0.4137 |
| 0.3333 | 11.65 | 7200 | 0.4294 | 0.4011 |
| 0.3039 | 12.3 | 7600 | 0.4187 | 0.3912 |
| 0.2974 | 12.94 | 8000 | 0.4079 | 0.3805 |
| 0.2658 | 13.59 | 8400 | 0.4273 | 0.3864 |
| 0.2676 | 14.24 | 8800 | 0.4103 | 0.3734 |
| 0.2466 | 14.89 | 9200 | 0.4122 | 0.3701 |
| 0.2282 | 15.53 | 9600 | 0.4176 | 0.3650 |
| 0.2186 | 16.18 | 10000 | 0.4199 | 0.3632 |
| 0.2132 | 16.83 | 10400 | 0.4159 | 0.3671 |
| 0.1962 | 17.48 | 10800 | 0.4321 | 0.3641 |
| 0.1922 | 18.12 | 11200 | 0.4300 | 0.3535 |
| 0.1827 | 18.77 | 11600 | 0.4244 | 0.3596 |
| 0.1709 | 19.42 | 12000 | 0.4191 | 0.3518 |
| 0.157 | 20.06 | 12400 | 0.4308 | 0.3496 |
| 0.147 | 20.71 | 12800 | 0.4360 | 0.3457 |
| 0.1502 | 21.36 | 13200 | 0.4329 | 0.3431 |
| 0.1448 | 22.01 | 13600 | 0.4334 | 0.3432 |
| 0.1407 | 22.65 | 14000 | 0.4392 | 0.3440 |
| 0.1342 | 23.3 | 14400 | 0.4418 | 0.3399 |
| 0.1325 | 23.95 | 14800 | 0.4360 | 0.3383 |
| 0.1183 | 24.6 | 15200 | 0.4521 | 0.3359 |
| 0.1174 | 25.24 | 15600 | 0.4426 | 0.3322 |
| 0.1137 | 25.89 | 16000 | 0.4438 | 0.3356 |
| 0.1129 | 26.54 | 16400 | 0.4547 | 0.3347 |
| 0.1077 | 27.18 | 16800 | 0.4482 | 0.3300 |
| 0.0999 | 27.83 | 17200 | 0.4491 | 0.3281 |
| 0.0978 | 28.48 | 17600 | 0.4533 | 0.3281 |
| 0.0997 | 29.13 | 18000 | 0.4542 | 0.3283 |
| 0.0908 | 29.77 | 18400 | 0.4534 | 0.3272 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
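### Usage (sketch)
The card does not include an inference example. A minimal sketch with the Transformers `pipeline` API is shown below; the file name `sample_ko.wav` and the 16 kHz mono assumption are illustrative, not taken from the card:
```python
from transformers import pipeline

# XLSR-53 checkpoints expect 16 kHz mono audio; "sample_ko.wav" is a hypothetical local file
asr = pipeline(
    "automatic-speech-recognition",
    model="jungjongho/wav2vec2-large-xlsr-korean-demo-colab",
)
print(asr("sample_ko.wav")["text"])
```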
|
22108d189353a7de924142a4e26892bd
|
espnet/stop_hubert_slu_raw_en_bpe500
|
espnet
| null | 21 | 4 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['stop']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 11,606 | false |
## ESPnet2 ASR model
### `espnet/stop_hubert_slu_raw_en_bpe500`
This model was trained by Siddhant using the stop recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 11890fdd9dd872edc50ce8eb7660d746c6ee160e
pip install -e .
cd egs2/stop/asr3
./run.sh --skip_data_prep false --skip_train true --download_model espnet/stop_hubert_slu_raw_en_bpe500
```
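For inference from Python outside the recipe, a minimal sketch using `espnet_model_zoo` is shown below; the audio file name is illustrative and the input is assumed to be 16 kHz mono:
```python
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

# download the checkpoint from the Hub and build an inference wrapper
d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack("espnet/stop_hubert_slu_raw_en_bpe500"))

speech, rate = sf.read("utterance.wav")  # hypothetical 16 kHz mono recording
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```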
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Dec 25 13:33:10 EST 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 202205`
- pytorch version: `pytorch 1.13.0+cu116`
- Git hash: `11890fdd9dd872edc50ce8eb7660d746c6ee160e`
- Commit date: `Sat Jun 18 17:05:39 2022 -0400`
## asr_train_asr2_hubert_lr0.002_raw_en_bpe500
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/test|75636|728701|93.9|3.2|2.9|3.1|9.1|29.8|
|decode_asr_asr_model_valid.acc.ave_10best/valid|33384|322094|0.0|0.0|100.0|0.0|100.0|100.0|
|inference_asr_model_valid.acc.ave_10best/test|75636|728701|93.9|3.3|2.8|3.2|9.4|30.6|
|inference_asr_model_valid.acc.ave_10best/valid|33384|322094|0.0|0.0|100.0|0.0|100.0|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/test|75636|5745269|95.9|0.9|3.2|3.2|7.3|29.8|
|decode_asr_asr_model_valid.acc.ave_10best/valid|33384|2537594|0.0|0.0|100.0|0.0|100.0|100.0|
|inference_asr_model_valid.acc.ave_10best/test|75636|5745269|95.9|1.0|3.1|3.3|7.4|30.6|
|inference_asr_model_valid.acc.ave_10best/valid|33384|2537594|0.0|0.0|100.0|0.0|100.0|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/test|75636|2091389|95.1|1.5|3.4|3.1|8.0|29.8|
|decode_asr_asr_model_valid.acc.ave_10best/valid|33384|921077|0.0|0.0|100.0|0.0|100.0|100.0|
|inference_asr_model_valid.acc.ave_10best/test|75636|2091389|95.2|1.5|3.3|3.3|8.1|30.6|
|inference_asr_model_valid.acc.ave_10best/valid|33384|921077|0.0|0.0|100.0|0.0|100.0|100.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr2_hubert_lr0.002.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr2_hubert_lr0.002_raw_en_bpe500
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 57197
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 128
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500/train/speech_shape
- exp/asr_stats_raw_en_bpe500/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500/valid/speech_shape
- exp/asr_stats_raw_en_bpe500/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0004
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁[
- ':'
- ▁]
- _
- SL
- IN
- GET
- S
- TIME
- DATE
- ▁THE
- ▁TO
- ▁FOR
- ▁
- E
- LOCATION
- A
- WEATHER
- O
- ▁ME
- MUSIC
- ▁MY
- CREATE
- ALARM
- Y
- D
- ▁I
- T
- ▁AT
- I
- ▁A
- TIMER
- ▁IS
- U
- ▁IN
- ▁ON
- EVENT
- M
- ▁TIMER
- TODO
- REMINDER
- R
- ▁PM
- P
- ING
- ▁WHAT
- ▁THIS
- ▁TODAY
- ▁AM
- N
- ▁ALARM
- ▁SET
- NT
- METHOD
- ▁TOMORROW
- ER
- TYPE
- B
- ATTRIBUTE
- DESTINATION
- ▁MINUTES
- REMINDED
- PERSON
- L
- ▁HOW
- NAME
- K
- ▁FIVE
- ▁BE
- ▁'
- G
- ▁NEXT
- 'ON'
- ▁IT
- MESSAGE
- H
- ▁WILL
- ▁S
- ▁WEEK
- ST
- C
- INFO
- EN
- CATEGORY
- TRAFFIC
- ▁F
- LE
- ▁AND
- AR
- SEND
- RE
- ▁P
- ▁D
- ▁FROM
- RECIPIE
- PLAY
- ▁DO
- ▁TRAFFIC
- AN
- ▁AN
- AL
- ▁SIX
- ▁SONG
- ▁ALL
- ▁UP
- CONTENT
- ▁REMINDER
- ▁WEEKEND
- ▁REMIND
- ▁OF
- ▁T
- RA
- ▁WEATHER
- ▁SEVEN
- ▁PLEASE
- ▁RE
- ▁TONIGHT
- EXACT
- ▁EIGHT
- ▁W
- W
- ▁TEN
- F
- SOURCE
- ▁TIME
- ESTIMATED
- RECURRING
- TH
- DELETE
- VE
- ▁NEW
- LL
- ▁EVERY
- ▁PLAY
- ES
- ▁THIRTY
- ▁GET
- ▁RAIN
- CK
- ▁TWO
- ▁C
- ▁CO
- ▁ARE
- ▁MESSAGE
- RI
- ▁G
- ▁MORNING
- CONTACT
- ▁CAN
- ▁NOW
- ▁THREE
- ▁THERE
- ET
- ▁MUSIC
- TER
- ▁TAKE
- IC
- CH
- ▁J
- V
- ED
- ▁FOUR
- DURATION
- LY
- ▁E
- ▁FRIDAY
- UR
- ▁YOU
- ▁ANY
- ▁NINE
- ▁GO
- UNSUPPORTED
- OR
- ▁SHOW
- ▁O
- ▁BA
- ▁PA
- ▁LONG
- AT
- ▁ONE
- ND
- ▁MA
- ▁ST
- ▁GOING
- ▁LIKE
- ▁ALARMS
- ▁BY
- ▁THAT
- ▁TWENTY
- ▁DAY
- ▁CH
- ▁MONTH
- ▁K
- ▁SH
- UPDATE
- ▁MONDAY
- CE
- IT
- IL
- AMOUNT
- ▁SATURDAY
- ▁BR
- ▁NEED
- ▁WORK
- ID
- ▁DRIVE
- LA
- ▁MO
- ▁HAVE
- ▁TUESDAY
- ▁TELL
- IR
- HA
- ''''
- ▁IF
- HOME
- ▁HE
- ▁LO
- ▁LA
- ▁WHEN
- LO
- ▁TH
- ▁REMINDERS
- IE
- DISTANCE
- ▁WE
- ▁SA
- ▁HOUR
- OULD
- NE
- DEPARTURE
- ▁HI
- ▁LI
- ARTIST
- Z
- TRAVEL
- ▁OUT
- PAUSE
- EST
- ARRIVAL
- ▁CANCEL
- ▁MI
- ▁OFF
- ▁FIFTEEN
- POINT
- ▁SNOW
- NA
- EL
- ▁EVENTS
- ▁CA
- ▁SUNDAY
- ▁LEAVE
- TRACK
- ▁SEND
- ▁DELETE
- ▁APPOINTMENT
- ▁BO
- RDINAL
- ▁MAKE
- ▁NEAR
- ▁BEFORE
- GE
- ▁HOME
- RELATION
- ▁V
- FR
- ▁THURSDAY
- ▁LAST
- DIRECTIONS
- ▁WEDNESDAY
- ▁START
- ▁FORECAST
- ▁YORK
- ▁RIGHT
- UM
- ▁WITH
- USE
- ▁MEETING
- UT
- LI
- ▁CHANGE
- ▁CAR
- GENRE
- ATION
- X
- ▁PICK
- ▁WANT
- ▁NIGHT
- SKIP
- ▁DE
- ▁RO
- ▁ABOUT
- MAP
- CO
- MA
- ▁HOUSE
- ▁HOT
- ▁PARTY
- ▁WA
- UNIT
- ▁HERE
- ▁SU
- ▁AFTERNOON
- ▁MUCH
- ▁MOM
- ▁TEMPERATURE
- EQUENC
- ▁ADD
- ▁SAN
- ▁HER
- ▁CONCERTS
- ▁CHRISTMAS
- ▁DINNER
- ▁MAR
- LAND
- ▁HOURS
- ▁CURRENT
- ▁TRACK
- ▁SOME
- ▁CITY
- ▁FORTY
- ATE
- ▁ROUTE
- SNOOZE
- ▁TEXT
- WORK
- ▁COLD
- RELATED
- ▁OR
- ▁NO
- Q
- ▁WAY
- WAY
- ▁MANY
- ▁BIRTHDAY
- ▁MINUTE
- ▁PLAYLIST
- ▁NOON
- ▁ROAD
- TITLE
- PATH
- ▁ASK
- NAVIGATION
- ▁LEFT
- ▁ALBUM
- ▁TURN
- ▁LATE
- ▁ELEVEN
- NEW
- ▁CELSIUS
- ▁BUY
- AVOID
- LOW
- NCE
- SEARCH
- ▁GAME
- ▁STOP
- ▁JO
- ▁FIRST
- ▁SHE
- ▁DOCTOR
- ▁BU
- PERIOD
- ▁WAKE
- CONDITION
- ▁EVENING
- RADIUS
- MODIFIE
- ▁REPEAT
- ▁SECOND
- ▁CONCERT
- ▁ANGELES
- ▁DOWNTOWN
- ▁UMBRELLA
- TEMPERATURE
- ASH
- ▁YEAR
- GROUP
- ▁DRIVING
- ▁GIVE
- ▁HUNDRED
- ▁HO
- ▁MILES
- PLAYLIST
- ADD
- RETRIEV
- ▁TWELVE
- EAD
- ▁CLASS
- ▁FREE
- PORT
- VILLE
- ▁BETWEEN
- ▁KNOW
- ▁AROUND
- ▁SCHOOL
- ▁NINETY
- PROVIDER
- SILENCE
- RESUME
- ▁LET
- TION
- ▁AUGUST
- ▁HAPPENING
- ▁AFTER
- ▁FAHRENHEIT
- ▁EX
- ▁VIDEO
- ROAD
- ▁PARK
- ▁CHICAGO
- ▁DAILY
- ▁CHECK
- ▁BEACH
- ▁WHERE
- ▁JUNE
- ▁STREET
- ▁FESTIVAL
- ▁FLORIDA
- ▁JOHN
- ▁HAS
- ▁SPOTIFY
- ▁BILL
- RESTART
- ▁HIGHWAY
- ▁SEATTLE
- J
- ▁LUNCH
- ▁LOOK
- ▁FRIEND
- ▁COMING
- ▁ALERT
- IGHT
- ▁PANDORA
- ▁HEAVY
- ▁KIDS
- ▁MOVIE
- ▁SOUTH
- REACT
- ▁CONSTRUCTION
- PREVIOUS
- ▁ORLANDO
- ▁OVER
- ▁MIAMI
- REACTION
- ▁ATLANTA
- ▁ACCIDENT
- ▁COUNTRY
- ▁NORTH
- ▁LIGHT
- RADIO
- ▁READ
- ▁FAMILY
- ▁AIRPORT
- ▁EXPECT
- ▁DEGREE
- ▁PRO
- ▁PARTIES
- ▁FIFTY
- ▁HIGH
- ▁PLAN
- ▁FOOD
- ▁WARM
- ▁SUNNY
- ▁VEGAS
- ▁HOLIDAY
- ▁SCHEDULE
- ▁STORM
- ▁FIFTH
- ▁BOSTON
- ▁FRANCISCO
- ▁LONDON
- ATTENDEE
- ▁JULY
- ▁WALK
- ▁COMMUTE
- ▁CLEAN
- ▁DENTIST
- TOWN
- ▁AGAIN
- ▁DALLAS
- ▁PORTLAND
- ▁SEPTEMBER
- ▁ARRIVE
- ▁SISTER
- ▁HOUSTON
- Ã
- É
- Í
- '*'
- Á
- Ç
- Ó
- ']'
- '['
- Ú
- Ü
- <sos/eos>
transcript_token_list: null
two_pass: false
pre_postencoder_norm: false
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d2
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
deliberationencoder: null
deliberationencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
decoder2: null
decoder2_conf: {}
postdecoder: null
postdecoder_conf: {}
required:
- output_dir
- token_list
version: '202205'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
cfbbce14453ffbdad0d1cdb853ec479f
|
aalogan/bert-finetuned-ner
|
aalogan
|
bert
| 8 | 3 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,466 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aalogan/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0170
- Validation Loss: 0.0546
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1722 | 0.0676 | 0 |
| 0.0481 | 0.0531 | 1 |
| 0.0270 | 0.0551 | 2 |
| 0.0170 | 0.0546 | 3 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
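### Usage (sketch)
The label set depends on the (unspecified) training data, so the example below is only a rough sketch of how the TensorFlow checkpoint can be loaded and wrapped in a `pipeline`; the sentence is illustrative:
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_name = "aalogan/bert-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForTokenClassification.from_pretrained(model_name)

# aggregation_strategy="simple" merges word pieces into whole entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```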
|
452717c3a165bb9ab46d7d6110944cb1
|
jonatasgrosman/exp_w2v2t_de_unispeech-ml_s952
|
jonatasgrosman
|
unispeech
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de']
| false | true | true | 500 | false |
# exp_w2v2t_de_unispeech-ml_s952
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
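A minimal transcription sketch with HuggingSound (the audio paths are illustrative):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_unispeech-ml_s952")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16 kHz input expected
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```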
|
067a734631e4af9dd4a42c645d83b631
|
hdty/camembert-ner-lr10e3
|
hdty
|
camembert
| 10 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,811 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-ner-lr10e3
This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5566
- Overall Precision: 0.0
- Overall Recall: 0.0
- Overall F1: 0.0
- Overall Accuracy: 0.8840
- Humanprod F1: 0.0
- Loc F1: 0.0
- Org F1: 0.0
- Per F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Humanprod F1 | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------------:|:------:|:------:|:------:|
| 0.5473 | 1.0 | 613 | 0.5626 | 0.0 | 0.0 | 0.0 | 0.8840 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5299 | 2.0 | 1226 | 0.5566 | 0.0 | 0.0 | 0.0 | 0.8840 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.1+cpu
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dc39439b5f81a61e39c5b941a73a5688
|
StonyBrookNLP/teabreac-t5-3b-iirc-gold
|
StonyBrookNLP
|
t5
| 10 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering, multi-step-reasoning, multi-hop-reasoning']
| false | true | true | 2,628 | false |
# What's this?
This is one of the models reported in the paper ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).
The paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart and T5, as well as numerate LMs like NT5, PReasM, and POET, on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [GitHub repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/teabreac-t5-3b-iirc-gold"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"answer_me: Who scored the first touchdown of the game?" +
"context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
```
|
acfd50645c0c5cd05232a153f1fbc2dd
|
sftvrt/wav2vec2-large-xls-r-300m-turkish-colab
|
sftvrt
|
wav2vec2
| 13 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,103 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
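### Usage (sketch)
No inference example is given in the card. A minimal greedy CTC decoding sketch is shown below; the WAV file name and the 16 kHz mono assumption are illustrative:
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "sftvrt/wav2vec2-large-xls-r-300m-turkish-colab"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

speech, rate = sf.read("sample_tr.wav")  # hypothetical 16 kHz mono recording
inputs = processor(speech, sampling_rate=rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```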
|
6c9c32fa881e3ade1215fafee4d3ffd7
|
manirai91/mbert-imdb
|
manirai91
|
bert
| 12 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 961 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-imdb
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
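### Usage (sketch)
As a rough sketch (not part of the card), the classifier can be applied to a review with plain PyTorch; the label names come from the checkpoint's `id2label` config and should be verified before relying on them:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "manirai91/mbert-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("A beautifully shot film with a forgettable plot.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```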
|
e6a2663326519fa4d55fab0309755a55
|
aatmasidha/distilbert-base-uncased-newsmodelclassification
|
aatmasidha
|
distilbert
| 16 | 26 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,350 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-newsmodelclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
- Accuracy: 0.928
- F1: 0.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8104 | 1.0 | 250 | 0.3057 | 0.9105 | 0.9084 |
| 0.2506 | 2.0 | 500 | 0.2177 | 0.928 | 0.9278 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
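### Usage (sketch)
A minimal sketch of how the classifier might be used (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aatmasidha/distilbert-base-uncased-newsmodelclassification",
)
print(classifier("I am so happy that the package finally arrived!"))
```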
|
5feac888855787b1ce2013808fc6cfbc
|
ashrielbrian/xlm-roberta-base-finetuned-panx-all
|
ashrielbrian
|
xlm-roberta
| 10 | 4 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1262
- F1: 0.8799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2905 | 1.0 | 715 | 0.1625 | 0.8392 |
| 0.1477 | 2.0 | 1430 | 0.1294 | 0.8688 |
| 0.095 | 3.0 | 2145 | 0.1262 | 0.8799 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
68184fc6bb2ccc0599693cf3fd490a35
|
projecte-aina/ca_bsc_demo_trf
|
projecte-aina
| null | 26 | 12 |
spacy
| 0 |
token-classification
| false | false | false |
gpl-3.0
|
['ca']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 18,973 | false |
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Install the model](#install-model)
- [Label scheme](#label-scheme)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
Catalan transformer (projecte-aina/roberta-large-ca-v2) pipeline by BSC. Components: transformer, tagger, morphologizer, lemmatizer, parser, ner, textcat (text classification).
| Feature | Description |
| --- | --- |
| **Name** | `ca_bsc_demo_trf` |
| **Version** | `3.4.2` |
| **spaCy** | `3.4.1` |
| **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `ner`, `textcat` |
| **Components** | `transformer`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `ner`, `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [roberta-large-ca-v2](https://huggingface.co/projecte-aina/roberta-large-ca-v2) <br /> Ancora_UD_10 <br /> [WikiCAT_ca](https://huggingface.co/datasets/projecte-aina/WikiCAT_ca) |
| **License** | `GNU GPL 3.0` |
| **Author** | [AINA project](https://huggingface.co/projecte-aina) |
### Install the model
pip install https://huggingface.co/projecte-aina/ca_bsc_demo_trf/resolve/main/ca_bsc_demo_trf-any-py3-none-any.whl
More extensive demo at https://spacydemo.aina.bsc.es
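Once the wheel is installed, a minimal usage sketch (the example sentence is illustrative) could be:
```python
import spacy

nlp = spacy.load("ca_bsc_demo_trf")
doc = nlp("El Barcelona Supercomputing Center desenvolupa models de llengua per al català.")

print([(ent.text, ent.label_) for ent in doc.ents])       # named entities
print([(tok.text, tok.pos_, tok.lemma_) for tok in doc])  # POS tags and lemmas
print(doc.cats)                                           # textcat category scores
```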
### Label scheme
<details>
<summary>View label scheme (342 labels for 5 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=SYM`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, 
`Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Quot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=PRON\|PronType=Int`, 
`Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `POS=INTJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=SYM`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, 
`Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `Degree=Cmp\|POS=ADJ`, `AdvType=Tim\|POS=SYM`, `Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
| **`textcat`** | `Economia`, `Enginyeria`, `Entreteniment`, `Història`, `Humanitats`, `Dret`, `Matemàtiques`, `Música`, `Filosofia`, `Política`, `Religió`, `Esport`, `Ciència_i_Tecnologia` |
</details>
### Evaluation results
| Type | Score |
| --- | --- |
| `TAG_ACC` | 96.35 |
| `POS_ACC` | 96.36 |
| `MORPH_ACC` | 95.71 |
| `LEMMA_ACC` | 97.58 |
| `DEP_UAS` | 95.16 |
| `DEP_LAS` | 93.53 |
| `SENTS_P` | 99.30 |
| `SENTS_R` | 99.30 |
| `SENTS_F` | 99.30 |
| `ENTS_F` | 92.02 |
| `ENTS_P` | 92.46 |
| `ENTS_R` | 91.59 |
| `TRANSFORMER_LOSS` | 2061930.61 |
| `TAGGER_LOSS` | 462421.82 |
| `MORPHOLOGIZER_LOSS` | 583505.58 |
| `PARSER_LOSS` | 628332.01 |
| `NER_LOSS` | 12427.23 |
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
This pipeline is released under the [GNU General Public License 3.0](https://www.gnu.org/licenses/gpl-3.0.html).
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citing information
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
cf150bcbdd661d843c4c616b1406e714
|
google/multiberts-seed_0-step_600k
|
google
|
bert
| 8 | 14 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_600k']
| false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 600k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 600k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_600k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_600k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
cea91a265a39279e497a6dcf865f0838
|