| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 18:27:28) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (532 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 18:27:19) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Manishkalra/discourse_classification
|
Manishkalra
| 2022-07-20T09:48:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-07T11:13:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: discourse_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# discourse_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7639
- Accuracy: 0.6649
- F1: 0.6649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7565 | 1.0 | 1839 | 0.7589 | 0.6635 | 0.6635 |
| 0.6693 | 2.0 | 3678 | 0.7639 | 0.6649 | 0.6649 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
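The card does not include a usage section; below is a minimal inference sketch (not from the original card), assuming the standard `transformers` text-classification pipeline. The label names returned by the model are not documented here.
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT classifier from the Hub
classifier = pipeline("text-classification", model="Manishkalra/discourse_classification")

# Inspect the returned label/score pairs; the label scheme is not documented in the card
print(classifier("However, the evidence points to a different conclusion."))
```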
|
bigmorning/distilbert_oscarth_0080
|
bigmorning
| 2022-07-20T09:29:02Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-20T09:28:43Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_oscarth_0080
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_oscarth_0080
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1236
- Validation Loss: 1.0821
- Epoch: 79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1327 | 2.9983 | 0 |
| 2.7813 | 2.4562 | 1 |
| 2.4194 | 2.2066 | 2 |
| 2.2231 | 2.0562 | 3 |
| 2.0894 | 1.9450 | 4 |
| 1.9905 | 1.8621 | 5 |
| 1.9148 | 1.7941 | 6 |
| 1.8508 | 1.7363 | 7 |
| 1.7976 | 1.6909 | 8 |
| 1.7509 | 1.6488 | 9 |
| 1.7126 | 1.6124 | 10 |
| 1.6764 | 1.5835 | 11 |
| 1.6450 | 1.5521 | 12 |
| 1.6175 | 1.5282 | 13 |
| 1.5919 | 1.5045 | 14 |
| 1.5679 | 1.4833 | 15 |
| 1.5476 | 1.4627 | 16 |
| 1.5271 | 1.4498 | 17 |
| 1.5098 | 1.4270 | 18 |
| 1.4909 | 1.4161 | 19 |
| 1.4760 | 1.3995 | 20 |
| 1.4609 | 1.3864 | 21 |
| 1.4475 | 1.3717 | 22 |
| 1.4333 | 1.3590 | 23 |
| 1.4203 | 1.3478 | 24 |
| 1.4093 | 1.3403 | 25 |
| 1.3980 | 1.3296 | 26 |
| 1.3875 | 1.3176 | 27 |
| 1.3773 | 1.3094 | 28 |
| 1.3674 | 1.3011 | 29 |
| 1.3579 | 1.2920 | 30 |
| 1.3497 | 1.2826 | 31 |
| 1.3400 | 1.2764 | 32 |
| 1.3326 | 1.2694 | 33 |
| 1.3236 | 1.2635 | 34 |
| 1.3169 | 1.2536 | 35 |
| 1.3096 | 1.2477 | 36 |
| 1.3024 | 1.2408 | 37 |
| 1.2957 | 1.2364 | 38 |
| 1.2890 | 1.2296 | 39 |
| 1.2818 | 1.2236 | 40 |
| 1.2751 | 1.2168 | 41 |
| 1.2691 | 1.2126 | 42 |
| 1.2644 | 1.2044 | 43 |
| 1.2583 | 1.2008 | 44 |
| 1.2529 | 1.1962 | 45 |
| 1.2473 | 1.1919 | 46 |
| 1.2416 | 1.1857 | 47 |
| 1.2365 | 1.1812 | 48 |
| 1.2318 | 1.1765 | 49 |
| 1.2273 | 1.1738 | 50 |
| 1.2224 | 1.1672 | 51 |
| 1.2177 | 1.1673 | 52 |
| 1.2132 | 1.1595 | 53 |
| 1.2084 | 1.1564 | 54 |
| 1.2033 | 1.1518 | 55 |
| 1.1993 | 1.1481 | 56 |
| 1.1966 | 1.1445 | 57 |
| 1.1924 | 1.1412 | 58 |
| 1.1876 | 1.1378 | 59 |
| 1.1834 | 1.1340 | 60 |
| 1.1806 | 1.1329 | 61 |
| 1.1783 | 1.1289 | 62 |
| 1.1739 | 1.1251 | 63 |
| 1.1705 | 1.1223 | 64 |
| 1.1669 | 1.1192 | 65 |
| 1.1628 | 1.1172 | 66 |
| 1.1599 | 1.1140 | 67 |
| 1.1570 | 1.1084 | 68 |
| 1.1526 | 1.1081 | 69 |
| 1.1496 | 1.1043 | 70 |
| 1.1463 | 1.0999 | 71 |
| 1.1438 | 1.1006 | 72 |
| 1.1397 | 1.0964 | 73 |
| 1.1378 | 1.0918 | 74 |
| 1.1347 | 1.0917 | 75 |
| 1.1319 | 1.0889 | 76 |
| 1.1296 | 1.0855 | 77 |
| 1.1271 | 1.0848 | 78 |
| 1.1236 | 1.0821 | 79 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
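As a usage sketch (not part of the original card): the repository's tags list TensorFlow weights, so the fill-mask pipeline is requested with the TF framework; since the base model is distilbert-base-uncased, the mask token is `[MASK]`.
```python
from transformers import pipeline

# The card's tags list only TF weights, so request the TensorFlow framework explicitly
unmasker = pipeline("fill-mask", model="bigmorning/distilbert_oscarth_0080", framework="tf")

print(unmasker("The capital of France is [MASK]."))
```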
|
jordyvl/biobert-base-cased-v1.2_ncbi_disease-sm-first-ner
|
jordyvl
| 2022-07-20T09:26:17Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-13T09:18:48Z |
---
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2_ncbi_disease-sm-first-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ncbi_disease
type: ncbi_disease
args: ncbi_disease
metrics:
- name: Precision
type: precision
value: 0.8522139160437032
- name: Recall
type: recall
value: 0.8826682549136391
- name: F1
type: f1
value: 0.8671737858396723
- name: Accuracy
type: accuracy
value: 0.9826972482743678
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2_ncbi_disease-sm-first-ner
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0865
- Precision: 0.8522
- Recall: 0.8827
- F1: 0.8672
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0858 | 1.0 | 1359 | 0.0985 | 0.7929 | 0.8005 | 0.7967 | 0.9730 |
| 0.042 | 2.0 | 2718 | 0.0748 | 0.8449 | 0.8856 | 0.8648 | 0.9820 |
| 0.0124 | 3.0 | 4077 | 0.0865 | 0.8522 | 0.8827 | 0.8672 | 0.9827 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
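A minimal inference sketch (not from the original card), assuming the standard token-classification pipeline; `aggregation_strategy="simple"` merges sub-word predictions into whole disease mentions.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jordyvl/biobert-base-cased-v1.2_ncbi_disease-sm-first-ner",
    aggregation_strategy="simple",  # group sub-tokens into full entity spans
)

print(ner("The patient was diagnosed with non-small cell lung cancer."))
```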
|
Ecosmob555/t5-small-finetuned-on-800-records-samsum
|
Ecosmob555
| 2022-07-20T09:21:32Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-20T07:03:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: kapuska/t5-small-finetuned-on-800-records-samsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kapuska/t5-small-finetuned-on-800-records-samsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7883
- Validation Loss: 2.3752
- Train Rouge1: 24.8093
- Train Rouge2: 8.8889
- Train Rougel: 22.6817
- Train Rougelsum: 22.6817
- Train Gen Len: 19.0
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 1.9252 | 1.9205 | 19.5556 | 2.3256 | 15.1111 | 15.1111 | 19.0 | 0 |
| 1.9005 | 1.9227 | 17.5579 | 2.3810 | 15.2852 | 15.2852 | 19.0 | 1 |
| 1.8769 | 1.9228 | 17.5579 | 2.3810 | 15.2852 | 15.2852 | 19.0 | 2 |
| 1.8463 | 1.9192 | 17.5579 | 2.3810 | 15.2852 | 15.2852 | 19.0 | 3 |
| 1.8251 | 1.9132 | 17.4786 | 2.3256 | 13.0342 | 13.0342 | 19.0 | 4 |
| 1.8148 | 1.9147 | 15.5594 | 2.3810 | 13.2867 | 13.2867 | 19.0 | 5 |
| 1.7980 | 1.9142 | 15.5594 | 2.3810 | 13.2867 | 13.2867 | 19.0 | 6 |
| 1.7684 | 1.9158 | 15.6772 | 2.3810 | 13.4045 | 13.4045 | 19.0 | 7 |
| 1.7571 | 1.9161 | 17.5964 | 2.3256 | 13.1519 | 13.1519 | 19.0 | 8 |
| 1.7345 | 1.9221 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 9 |
| 1.7136 | 1.9141 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 10 |
| 1.6935 | 1.9249 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 11 |
| 1.6685 | 1.9226 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 12 |
| 1.6571 | 1.9258 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 13 |
| 1.6327 | 1.9308 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 14 |
| 1.6295 | 1.9271 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 15 |
| 1.6112 | 1.9314 | 19.5556 | 2.3256 | 15.1111 | 15.1111 | 19.0 | 16 |
| 1.6008 | 1.9357 | 19.6372 | 2.3256 | 15.1927 | 15.1927 | 19.0 | 17 |
| 1.5826 | 1.9277 | 19.3913 | 2.2727 | 15.0435 | 15.0435 | 19.0 | 18 |
| 1.5784 | 1.9342 | 21.3913 | 2.2727 | 17.0435 | 17.0435 | 19.0 | 19 |
| 1.5553 | 1.9364 | 19.3913 | 2.2727 | 15.0435 | 15.0435 | 19.0 | 20 |
| 1.5292 | 1.9461 | 19.3913 | 2.2727 | 15.0435 | 15.0435 | 19.0 | 21 |
| 1.5114 | 1.9505 | 19.3913 | 2.2727 | 15.0435 | 15.0435 | 19.0 | 22 |
| 1.5042 | 1.9540 | 17.5964 | 2.3256 | 13.1519 | 13.1519 | 19.0 | 23 |
| 1.4964 | 1.9494 | 19.0621 | 4.4444 | 16.9344 | 16.9344 | 19.0 | 24 |
| 1.4736 | 1.9569 | 24.7136 | 4.4444 | 20.6628 | 22.5859 | 19.0 | 25 |
| 1.4644 | 1.9618 | 24.7136 | 4.4444 | 20.6628 | 22.5859 | 19.0 | 26 |
| 1.4562 | 1.9693 | 18.9821 | 4.4444 | 16.8544 | 16.8544 | 19.0 | 27 |
| 1.4339 | 1.9597 | 22.7905 | 4.4444 | 18.7398 | 20.6628 | 19.0 | 28 |
| 1.4204 | 1.9702 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 29 |
| 1.4182 | 1.9715 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 30 |
| 1.4014 | 1.9768 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 31 |
| 1.3845 | 1.9847 | 20.9428 | 4.4444 | 18.8152 | 18.8152 | 19.0 | 32 |
| 1.3756 | 1.9790 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 33 |
| 1.3611 | 1.9936 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 34 |
| 1.3495 | 1.9900 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 35 |
| 1.3403 | 1.9998 | 20.9428 | 4.4444 | 18.8152 | 18.8152 | 19.0 | 36 |
| 1.3253 | 2.0060 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 37 |
| 1.3109 | 2.0088 | 18.9821 | 4.4444 | 16.8544 | 16.8544 | 19.0 | 38 |
| 1.3106 | 2.0121 | 20.8674 | 4.4444 | 18.7398 | 18.7398 | 19.0 | 39 |
| 1.2903 | 2.0142 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 40 |
| 1.2795 | 2.0239 | 20.8674 | 4.4444 | 18.7398 | 18.7398 | 19.0 | 41 |
| 1.2788 | 2.0322 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 42 |
| 1.2629 | 2.0284 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 43 |
| 1.2525 | 2.0423 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 44 |
| 1.2373 | 2.0424 | 27.0458 | 11.1111 | 22.9951 | 24.9182 | 19.0 | 45 |
| 1.2242 | 2.0454 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 46 |
| 1.2214 | 2.0541 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 47 |
| 1.2066 | 2.0567 | 27.0458 | 11.1111 | 22.9951 | 24.9182 | 19.0 | 48 |
| 1.1866 | 2.0632 | 26.9370 | 11.1111 | 24.8093 | 24.8093 | 19.0 | 49 |
| 1.1976 | 2.0684 | 27.0458 | 11.1111 | 22.9951 | 24.9182 | 19.0 | 50 |
| 1.1806 | 2.0725 | 27.0458 | 11.1111 | 22.9951 | 24.9182 | 19.0 | 51 |
| 1.1662 | 2.0803 | 27.0458 | 11.1111 | 22.9951 | 24.9182 | 19.0 | 52 |
| 1.1626 | 2.0840 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 53 |
| 1.1464 | 2.0855 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 54 |
| 1.1298 | 2.0956 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 55 |
| 1.1300 | 2.1050 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 56 |
| 1.1255 | 2.1025 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 57 |
| 1.1005 | 2.1188 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 58 |
| 1.1002 | 2.1261 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 59 |
| 1.0806 | 2.1318 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 60 |
| 1.0869 | 2.1425 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 61 |
| 1.0768 | 2.1492 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 62 |
| 1.0681 | 2.1473 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 63 |
| 1.0594 | 2.1440 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 64 |
| 1.0411 | 2.1461 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 65 |
| 1.0342 | 2.1727 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 66 |
| 1.0306 | 2.1677 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 67 |
| 1.0163 | 2.1753 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 68 |
| 1.0139 | 2.1767 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 69 |
| 1.0036 | 2.1929 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 70 |
| 1.0049 | 2.1902 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 71 |
| 0.9947 | 2.1936 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 72 |
| 0.9803 | 2.2084 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 73 |
| 0.9791 | 2.2106 | 19.3144 | 4.5455 | 17.1405 | 17.1405 | 19.0 | 74 |
| 0.9655 | 2.2172 | 20.8674 | 4.4444 | 18.7398 | 18.7398 | 19.0 | 75 |
| 0.9640 | 2.2215 | 22.6817 | 4.4444 | 20.5540 | 20.5540 | 19.0 | 76 |
| 0.9456 | 2.2341 | 26.9370 | 11.1111 | 24.8093 | 24.8093 | 19.0 | 77 |
| 0.9396 | 2.2414 | 23.0705 | 8.8889 | 20.9428 | 20.9428 | 19.0 | 78 |
| 0.9335 | 2.2455 | 18.9444 | 4.4444 | 16.8167 | 16.8167 | 19.0 | 79 |
| 0.9261 | 2.2560 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 80 |
| 0.9075 | 2.2642 | 23.1997 | 11.1111 | 21.0720 | 21.0720 | 19.0 | 81 |
| 0.9023 | 2.2763 | 22.9951 | 8.8889 | 20.8674 | 20.8674 | 19.0 | 82 |
| 0.9044 | 2.2782 | 21.0720 | 8.8889 | 18.9444 | 18.9444 | 19.0 | 83 |
| 0.8961 | 2.2812 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 84 |
| 0.8813 | 2.2794 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 85 |
| 0.8731 | 2.2886 | 21.0720 | 8.8889 | 18.9444 | 18.9444 | 19.0 | 86 |
| 0.8751 | 2.2930 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 87 |
| 0.8652 | 2.3024 | 25.2256 | 6.8182 | 23.0517 | 23.0517 | 19.0 | 88 |
| 0.8605 | 2.3131 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 89 |
| 0.8571 | 2.3070 | 22.9951 | 8.8889 | 20.8674 | 20.8674 | 19.0 | 90 |
| 0.8473 | 2.3123 | 25.1227 | 11.1111 | 22.9951 | 22.9951 | 19.0 | 91 |
| 0.8456 | 2.3272 | 25.1227 | 11.1111 | 22.9951 | 22.9951 | 19.0 | 92 |
| 0.8329 | 2.3427 | 26.9370 | 11.1111 | 24.8093 | 24.8093 | 19.0 | 93 |
| 0.8294 | 2.3419 | 25.1982 | 11.1111 | 23.0705 | 23.0705 | 19.0 | 94 |
| 0.8243 | 2.3507 | 25.1982 | 11.1111 | 23.0705 | 23.0705 | 19.0 | 95 |
| 0.8132 | 2.3600 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 96 |
| 0.8153 | 2.3501 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 97 |
| 0.8005 | 2.3579 | 20.8778 | 2.2727 | 18.7039 | 18.7039 | 19.0 | 98 |
| 0.7883 | 2.3752 | 24.8093 | 8.8889 | 22.6817 | 22.6817 | 19.0 | 99 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
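A hedged usage sketch (not from the original card): the repository's tags list TensorFlow weights, and the model name suggests dialogue summarization on SAMSum-style input, so the example below uses the summarization pipeline on a made-up dialogue.
```python
from transformers import pipeline

# TF-only weights according to the card's tags; the dialogue is an invented example
summarizer = pipeline(
    "summarization",
    model="Ecosmob555/t5-small-finetuned-on-800-records-samsum",
    framework="tf",
)

dialogue = (
    "Amanda: I baked cookies. Do you want some? "
    "Jerry: Sure! "
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=40, min_length=5))
```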
|
tokeron/alephbert-finetuned-metaphor-detection
|
tokeron
| 2022-07-20T09:21:13Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"he",
"dataset:Piyutim",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-20T07:06:57Z |
---
license: afl-3.0
language:
- he
tags:
- token-classification
datasets:
- Piyutim
model:
- onlplab/alephbert-base
metrics:
- f1
widget:
- text: "נשבר לי הגב"
example_title: "Broken back"
- text: "ש לו לב זהב"
example_title: "Golden heart"
---
This is a token-classification model.
It fine-tunes AlephBERT to detect metaphors in Hebrew piyutim (liturgical poems).
model-index:
- name: tokeron/alephbert-finetuned-metaphor-detection
results: []
# Model
This model fine-tunes the onlplab/alephbert-base model on the Piyutim dataset.
### About Us
Created by Michael Toker in collaboration with Yonatan Belinkov, Benny Kornfeld, Oren Mishali, and Ophir Münz-Manor.
For collaboration inquiries, please contact:
tok@campus.technion.ac.il
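A minimal inference sketch (not part of the original card), assuming the standard token-classification pipeline; the input reuses the first widget example above, and the metaphor label names are not documented here.
```python
from transformers import pipeline

# Token-level metaphor tagging on the Hebrew widget example ("my back is broken")
tagger = pipeline("token-classification", model="tokeron/alephbert-finetuned-metaphor-detection")

print(tagger("נשבר לי הגב"))
```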
|
workRL/DQNTest-LunarLander-v2
|
workRL
| 2022-07-20T09:05:04Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-20T09:04:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -95.66 +/- 35.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
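A hedged sketch of what the TODO section usually contains, assuming the checkpoint was pushed with `huggingface_sb3` under a conventional filename (the actual filename inside this repository is not documented in the card).
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption; check the repository's file listing for the real .zip name
checkpoint = load_from_hub(repo_id="workRL/DQNTest-LunarLander-v2", filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```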
|
workRL/DQN-LunarLander-v2
|
workRL
| 2022-07-20T08:59:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-20T08:41:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -116.80 +/- 16.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
knkarthick/bart-large-xsum-samsum
|
knkarthick
| 2022-07-20T08:29:15Z | 49 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- bart
- seq2seq
- summarization
license: apache-2.0
datasets:
- samsum
widget:
- text: "Hannah: Hey, do you have Betty's number?\nAmanda: Lemme check\nAmanda: Sorry,\
\ can't find it.\nAmanda: Ask Larry\nAmanda: He called her last time we were at\
\ the park together\nHannah: I don't know him well\nAmanda: Don't be shy, he's\
\ very nice\nHannah: If you say so..\nHannah: I'd rather you texted him\nAmanda:\
\ Just text him \U0001F642\nHannah: Urgh.. Alright\nHannah: Bye\nAmanda: Bye bye\n"
model-index:
- name: bart-large-xsum-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization'
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 54.3921
- name: Validation ROUGE-2
type: rouge-2
value: 29.8078
- name: Validation ROUGE-L
type: rouge-l
value: 45.1543
- name: Test ROUGE-1
type: rouge-1
value: 53.3059
- name: Test ROUGE-2
type: rouge-2
value: 28.355
- name: Test ROUGE-L
type: rouge-l
value: 44.0953
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: train
metrics:
- name: ROUGE-1
type: rouge
value: 46.2492
verified: true
- name: ROUGE-2
type: rouge
value: 21.346
verified: true
- name: ROUGE-L
type: rouge
value: 37.2787
verified: true
- name: ROUGE-LSUM
type: rouge
value: 42.1317
verified: true
- name: loss
type: loss
value: 1.6859958171844482
verified: true
- name: gen_len
type: gen_len
value: 23.7103
verified: true
---
## `bart-large-xsum-samsum`
This model was obtained by fine-tuning `facebook/bart-large-xsum` on [Samsum](https://huggingface.co/datasets/samsum) dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/bart-large-xsum-samsum")
conversation = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''
summarizer(conversation)
```
|
knkarthick/meeting-summary-samsum
|
knkarthick
| 2022-07-20T08:28:58Z | 43 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- bart
- seq2seq
- summarization
license: apache-2.0
datasets:
- samsum
widget:
- text: |
Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
model-index:
- name: bart-large-xsum-samsum
results:
- task:
name: Abstractive Text Summarization
type: abstractive-text-summarization
dataset:
name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
type: samsum
metrics:
- name: Validation ROUGE-1
type: rouge-1
value: 54.3921
- name: Validation ROUGE-2
type: rouge-2
value: 29.8078
- name: Validation ROUGE-L
type: rouge-l
value: 45.1543
- name: Test ROUGE-1
type: rouge-1
value: 53.3059
- name: Test ROUGE-2
type: rouge-2
value: 28.355
- name: Test ROUGE-L
type: rouge-l
value: 44.0953
---
## `bart-large-xsum-samsum`
This model was obtained by fine-tuning `facebook/bart-large-xsum` on [Samsum](https://huggingface.co/datasets/samsum) dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/bart-large-xsum-samsum")
conversation = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''
summarizer(conversation)
```
|
notmaineyy/distilbert-base-uncased-finetuned-ner
|
notmaineyy
| 2022-07-20T08:02:41Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-20T07:55:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: notmaineyy/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# notmaineyy/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0344
- Validation Loss: 0.0633
- Train Precision: 0.9181
- Train Recall: 0.9322
- Train F1: 0.9251
- Train Accuracy: 0.9823
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
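The optimizer dictionary above (AdamWeightDecay driving a linear PolynomialDecay over 2631 steps) is the shape of configuration produced by `transformers.create_optimizer`; a sketch of the equivalent call, assuming 2631 total training steps and no warmup, as listed:
```python
from transformers import create_optimizer

# Linear decay from 2e-5 to 0 over 2631 steps, AdamWeightDecay with 0.01 weight decay
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=2631,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```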
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.2048 | 0.0749 | 0.8898 | 0.9129 | 0.9012 | 0.9784 | 0 |
| 0.0556 | 0.0621 | 0.9150 | 0.9300 | 0.9224 | 0.9819 | 1 |
| 0.0344 | 0.0633 | 0.9181 | 0.9322 | 0.9251 | 0.9823 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
auriolar/testpyramidsrnd
|
auriolar
| 2022-07-20T07:55:41Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-20T07:55:36Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: auriolar/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
FAICAM/distilbert-base-uncased-finetuned-cola
|
FAICAM
| 2022-07-20T07:54:29Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-20T07:47:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: FAICAM/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FAICAM/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1871
- Validation Loss: 0.4889
- Train Matthews Correlation: 0.5644
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5111 | 0.5099 | 0.4325 | 0 |
| 0.3227 | 0.4561 | 0.5453 | 1 |
| 0.1871 | 0.4889 | 0.5644 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
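An inference sketch (not from the original card), assuming a TensorFlow environment and that the two output classes follow the usual CoLA acceptable/unacceptable convention, which the card itself does not state.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "FAICAM/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
logits = model(**inputs).logits
print(tf.nn.softmax(logits, axis=-1).numpy())  # class probabilities; label names are undocumented
```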
|
wenkai-li/distilbert-base-uncased-finetuned-wikiandmark_epoch20
|
wenkai-li
| 2022-07-20T07:33:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-20T02:43:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-wikiandmark_epoch20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wikiandmark_epoch20
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0561
- Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
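These settings map directly onto `TrainingArguments`; a minimal sketch under the assumption that the standard `Trainer` API was used (model, tokenizer, and dataset setup omitted, and the output directory name is a guess):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; Adam betas and epsilon are the library defaults
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-wikiandmark_epoch20",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```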
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0224 | 1.0 | 1859 | 0.0277 | 0.9919 |
| 0.0103 | 2.0 | 3718 | 0.0298 | 0.9925 |
| 0.0047 | 3.0 | 5577 | 0.0429 | 0.9924 |
| 0.0038 | 4.0 | 7436 | 0.0569 | 0.9922 |
| 0.0019 | 5.0 | 9295 | 0.0554 | 0.9936 |
| 0.0028 | 6.0 | 11154 | 0.0575 | 0.9928 |
| 0.002 | 7.0 | 13013 | 0.0544 | 0.9926 |
| 0.0017 | 8.0 | 14872 | 0.0553 | 0.9935 |
| 0.001 | 9.0 | 16731 | 0.0498 | 0.9924 |
| 0.0001 | 10.0 | 18590 | 0.0398 | 0.9934 |
| 0.0 | 11.0 | 20449 | 0.0617 | 0.9935 |
| 0.0002 | 12.0 | 22308 | 0.0561 | 0.9944 |
| 0.0002 | 13.0 | 24167 | 0.0755 | 0.9934 |
| 0.0 | 14.0 | 26026 | 0.0592 | 0.9941 |
| 0.0 | 15.0 | 27885 | 0.0572 | 0.9939 |
| 0.0 | 16.0 | 29744 | 0.0563 | 0.9941 |
| 0.0 | 17.0 | 31603 | 0.0587 | 0.9936 |
| 0.0005 | 18.0 | 33462 | 0.0673 | 0.9937 |
| 0.0 | 19.0 | 35321 | 0.0651 | 0.9933 |
| 0.0 | 20.0 | 37180 | 0.0683 | 0.9936 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
RajSang/ppo-LunarLander-v2
|
RajSang
| 2022-07-20T07:06:10Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-20T07:05:41Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 182.38 +/- 36.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
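A hedged sketch of what the TODO section usually contains, assuming the checkpoint was pushed with `huggingface_sb3` under a conventional filename (not documented in the card); it rolls the policy out in the environment rather than just evaluating it.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repository's file listing for the real .zip name
checkpoint = load_from_hub(repo_id="RajSang/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```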
|
bigmorning/distilgpt_oscarth_0040
|
bigmorning
| 2022-07-20T03:34:29Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-20T03:34:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_oscarth_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_oscarth_0040
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0004
- Validation Loss: 2.8864
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.6021 | 4.5759 | 0 |
| 4.4536 | 4.1235 | 1 |
| 4.1386 | 3.9013 | 2 |
| 3.9546 | 3.7563 | 3 |
| 3.8255 | 3.6477 | 4 |
| 3.7271 | 3.5617 | 5 |
| 3.6488 | 3.4936 | 6 |
| 3.5844 | 3.4379 | 7 |
| 3.5301 | 3.3891 | 8 |
| 3.4833 | 3.3448 | 9 |
| 3.4427 | 3.3098 | 10 |
| 3.4068 | 3.2750 | 11 |
| 3.3749 | 3.2425 | 12 |
| 3.3462 | 3.2211 | 13 |
| 3.3202 | 3.1941 | 14 |
| 3.2964 | 3.1720 | 15 |
| 3.2749 | 3.1512 | 16 |
| 3.2548 | 3.1322 | 17 |
| 3.2363 | 3.1141 | 18 |
| 3.2188 | 3.0982 | 19 |
| 3.2025 | 3.0818 | 20 |
| 3.1871 | 3.0678 | 21 |
| 3.1724 | 3.0533 | 22 |
| 3.1583 | 3.0376 | 23 |
| 3.1446 | 3.0256 | 24 |
| 3.1318 | 3.0122 | 25 |
| 3.1195 | 3.0016 | 26 |
| 3.1079 | 2.9901 | 27 |
| 3.0968 | 2.9826 | 28 |
| 3.0863 | 2.9711 | 29 |
| 3.0761 | 2.9593 | 30 |
| 3.0665 | 2.9514 | 31 |
| 3.0572 | 2.9432 | 32 |
| 3.0483 | 2.9347 | 33 |
| 3.0396 | 2.9250 | 34 |
| 3.0313 | 2.9160 | 35 |
| 3.0232 | 2.9095 | 36 |
| 3.0153 | 2.9028 | 37 |
| 3.0078 | 2.8949 | 38 |
| 3.0004 | 2.8864 | 39 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
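A generation sketch (not part of the original card), assuming a TensorFlow environment since the repository's tags list TF weights; the prompt is arbitrary.
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo = "bigmorning/distilgpt_oscarth_0040"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The history of natural language processing", return_tensors="tf")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```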
|
Siyong/MT_RN_LM
|
Siyong
| 2022-07-20T03:25:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-20T01:38:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6666
- Wer: 0.6375
- Cer: 0.3170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.0564 | 2.36 | 2000 | 2.3456 | 0.9628 | 0.5549 |
| 0.5071 | 4.73 | 4000 | 2.0652 | 0.9071 | 0.5115 |
| 0.3952 | 7.09 | 6000 | 2.3649 | 0.9108 | 0.4628 |
| 0.3367 | 9.46 | 8000 | 1.7615 | 0.8253 | 0.4348 |
| 0.2765 | 11.82 | 10000 | 1.6151 | 0.7937 | 0.4087 |
| 0.2493 | 14.18 | 12000 | 1.4976 | 0.7881 | 0.3905 |
| 0.2318 | 16.55 | 14000 | 1.6731 | 0.8160 | 0.3925 |
| 0.2074 | 18.91 | 16000 | 1.5822 | 0.7658 | 0.3913 |
| 0.1825 | 21.28 | 18000 | 1.5442 | 0.7361 | 0.3704 |
| 0.1824 | 23.64 | 20000 | 1.5988 | 0.7621 | 0.3711 |
| 0.1699 | 26.0 | 22000 | 1.4261 | 0.7119 | 0.3490 |
| 0.158 | 28.37 | 24000 | 1.7482 | 0.7658 | 0.3648 |
| 0.1385 | 30.73 | 26000 | 1.4103 | 0.6784 | 0.3348 |
| 0.1199 | 33.1 | 28000 | 1.5214 | 0.6636 | 0.3273 |
| 0.116 | 35.46 | 30000 | 1.4288 | 0.7212 | 0.3486 |
| 0.1071 | 37.83 | 32000 | 1.5344 | 0.7138 | 0.3411 |
| 0.1007 | 40.19 | 34000 | 1.4501 | 0.6691 | 0.3237 |
| 0.0943 | 42.55 | 36000 | 1.5367 | 0.6859 | 0.3265 |
| 0.0844 | 44.92 | 38000 | 1.5321 | 0.6599 | 0.3273 |
| 0.0762 | 47.28 | 40000 | 1.6721 | 0.6264 | 0.3142 |
| 0.0778 | 49.65 | 42000 | 1.6666 | 0.6375 | 0.3170 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu113
- Datasets 2.0.0
- Tokenizers 0.12.1
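A minimal transcription sketch (not from the original card), assuming the standard automatic-speech-recognition pipeline; the audio path is a placeholder for any local 16 kHz recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Siyong/MT_RN_LM")

# "audio.wav" is a placeholder; pass any local speech recording
print(asr("audio.wav"))
```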
|
Willaim/Bl00m
|
Willaim
| 2022-07-20T02:53:53Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-07-20T02:32:19Z |
---
license: bigscience-bloom-rail-1.0
---
import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": "Bearer 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"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "Can you please let us know more details about your ",
})
|
commanderstrife/bc2gm_corpus-Bio_ClinicalBERT-finetuned-ner
|
commanderstrife
| 2022-07-20T02:51:04Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:bc2gm_corpus",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-20T02:00:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- bc2gm_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bc2gm_corpus-Bio_ClinicalBERT-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bc2gm_corpus
type: bc2gm_corpus
args: bc2gm_corpus
metrics:
- name: Precision
type: precision
value: 0.7853881278538812
- name: Recall
type: recall
value: 0.8158102766798419
- name: F1
type: f1
value: 0.8003101977510663
- name: Accuracy
type: accuracy
value: 0.9758965601366187
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bc2gm_corpus-Bio_ClinicalBERT-finetuned-ner
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the bc2gm_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1505
- Precision: 0.7854
- Recall: 0.8158
- F1: 0.8003
- Accuracy: 0.9759
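The scores above are computed on the bc2gm_corpus evaluation data; a minimal sketch for pulling the same dataset from the Hub, assuming the public `bc2gm_corpus` loading script:
```python
from datasets import load_dataset

# Gene-mention NER data with train / validation / test splits
bc2gm = load_dataset("bc2gm_corpus")
print(bc2gm["validation"][0])  # tokens and ner_tags for one sentence
```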
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0981 | 1.0 | 782 | 0.0712 | 0.7228 | 0.7948 | 0.7571 | 0.9724 |
| 0.0509 | 2.0 | 1564 | 0.0687 | 0.7472 | 0.8199 | 0.7818 | 0.9746 |
| 0.0121 | 3.0 | 2346 | 0.0740 | 0.7725 | 0.8011 | 0.7866 | 0.9747 |
| 0.0001 | 4.0 | 3128 | 0.1009 | 0.7618 | 0.8251 | 0.7922 | 0.9741 |
| 0.0042 | 5.0 | 3910 | 0.1106 | 0.7757 | 0.8185 | 0.7965 | 0.9754 |
| 0.0015 | 6.0 | 4692 | 0.1182 | 0.7812 | 0.8111 | 0.7958 | 0.9758 |
| 0.0001 | 7.0 | 5474 | 0.1283 | 0.7693 | 0.8275 | 0.7973 | 0.9753 |
| 0.0072 | 8.0 | 6256 | 0.1376 | 0.7863 | 0.8158 | 0.8008 | 0.9762 |
| 0.0045 | 9.0 | 7038 | 0.1468 | 0.7856 | 0.8180 | 0.8015 | 0.9761 |
| 0.0 | 10.0 | 7820 | 0.1505 | 0.7854 | 0.8158 | 0.8003 | 0.9759 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sily/ppo-LunarLander-v2
|
Sily
| 2022-07-20T02:49:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-20T02:48:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 162.88 +/- 38.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
luomingshuang/icefall_asr_tedlium3_transducer_stateless
|
luomingshuang
| 2022-07-20T02:44:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-03T02:56:02Z |
Note: this recipe is trained with the code from https://github.com/k2-fsa/icefall/pull/233 and the SpecAugment code from https://github.com/lhotse-speech/lhotse/pull/604.
# Pre-trained Transducer-Stateless models for the TEDLium3 dataset with icefall.
The model was trained on full [TEDLium3](https://www.openslr.org/51) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
## Training procedure
The main repositories are listed below; the training and decoding scripts will be updated as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation; the latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit mentioned above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/tedlium3/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1,2,3"
./transducer_stateless/train.py \
--world-size 4 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir transducer_stateless/exp \
--max-duration 300
```
## Evaluation results
The decoding results (WER%) on TEDLium3 (dev and test) are listed below; we obtained them by averaging the models from epochs 19 to 29.
The WERs are
| | dev | test | comment |
|------------------------------------|------------|------------|------------------------------------------|
| greedy search | 7.19 | 6.70 | --epoch 29, --avg 11, --max-duration 100 |
| beam search (beam size 4) | 7.02 | 6.36 | --epoch 29, --avg 11, --max-duration 100 |
| modified beam search (beam size 4) | 6.91 | 6.33 | --epoch 29, --avg 11, --max-duration 100 |
|
bigmorning/distilbert_oscarth_0040
|
bigmorning
| 2022-07-20T01:27:25Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-20T01:27:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_oscarth_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_oscarth_0040
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2890
- Validation Loss: 1.2296
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1327 | 2.9983 | 0 |
| 2.7813 | 2.4562 | 1 |
| 2.4194 | 2.2066 | 2 |
| 2.2231 | 2.0562 | 3 |
| 2.0894 | 1.9450 | 4 |
| 1.9905 | 1.8621 | 5 |
| 1.9148 | 1.7941 | 6 |
| 1.8508 | 1.7363 | 7 |
| 1.7976 | 1.6909 | 8 |
| 1.7509 | 1.6488 | 9 |
| 1.7126 | 1.6124 | 10 |
| 1.6764 | 1.5835 | 11 |
| 1.6450 | 1.5521 | 12 |
| 1.6175 | 1.5282 | 13 |
| 1.5919 | 1.5045 | 14 |
| 1.5679 | 1.4833 | 15 |
| 1.5476 | 1.4627 | 16 |
| 1.5271 | 1.4498 | 17 |
| 1.5098 | 1.4270 | 18 |
| 1.4909 | 1.4161 | 19 |
| 1.4760 | 1.3995 | 20 |
| 1.4609 | 1.3864 | 21 |
| 1.4475 | 1.3717 | 22 |
| 1.4333 | 1.3590 | 23 |
| 1.4203 | 1.3478 | 24 |
| 1.4093 | 1.3403 | 25 |
| 1.3980 | 1.3296 | 26 |
| 1.3875 | 1.3176 | 27 |
| 1.3773 | 1.3094 | 28 |
| 1.3674 | 1.3011 | 29 |
| 1.3579 | 1.2920 | 30 |
| 1.3497 | 1.2826 | 31 |
| 1.3400 | 1.2764 | 32 |
| 1.3326 | 1.2694 | 33 |
| 1.3236 | 1.2635 | 34 |
| 1.3169 | 1.2536 | 35 |
| 1.3096 | 1.2477 | 36 |
| 1.3024 | 1.2408 | 37 |
| 1.2957 | 1.2364 | 38 |
| 1.2890 | 1.2296 | 39 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
steven123/Check_Aligned_Teeth
|
steven123
| 2022-07-20T00:59:05Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-20T00:58:54Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Check_Aligned_Teeth
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9473684430122375
---
# Check_Aligned_Teeth
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
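A minimal inference sketch (not part of the original card); the image path is a placeholder, and the two labels match the example classes shown below.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="steven123/Check_Aligned_Teeth")

# "teeth_photo.jpg" is a placeholder path; expected labels: "Aligned Teeth" / "Crooked Teeth"
print(classifier("teeth_photo.jpg"))
```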
## Example Images
#### Aligned Teeth

#### Crooked Teeth

|
frgfm/cspdarknet53_mish
|
frgfm
| 2022-07-20T00:57:54Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1911.11929",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
datasets:
- frgfm/imagenette
---
# CSP-Darknet-53 Mish model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 Mish architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture and replace activations with Mish.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/cspdarknet53_mish").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
author = {Chien{-}Yao Wang and
Hong{-}Yuan Mark Liao and
I{-}Hau Yeh and
Yueh{-}Hua Wu and
Ping{-}Yang Chen and
Jun{-}Wei Hsieh},
title = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
journal = {CoRR},
volume = {abs/1911.11929},
year = {2019},
url = {http://arxiv.org/abs/1911.11929},
eprinttype = {arXiv},
eprint = {1911.11929},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/cspdarknet53
|
frgfm
| 2022-07-20T00:57:40Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1911.11929",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
datasets:
- frgfm/imagenette
---
# CSP-Darknet-53 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The CSP-Darknet-53 architecture was introduced in [this paper](https://arxiv.org/pdf/1911.11929.pdf).
## Model description
The core idea of the author is to change the convolutional stage by adding cross stage partial blocks in the architecture.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/cspdarknet53").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1911-11929,
author = {Chien{-}Yao Wang and
Hong{-}Yuan Mark Liao and
I{-}Hau Yeh and
Yueh{-}Hua Wu and
Ping{-}Yang Chen and
Jun{-}Wei Hsieh},
title = {CSPNet: {A} New Backbone that can Enhance Learning Capability of {CNN}},
journal = {CoRR},
volume = {abs/1911.11929},
year = {2019},
url = {http://arxiv.org/abs/1911.11929},
eprinttype = {arXiv},
eprint = {1911.11929},
timestamp = {Tue, 03 Dec 2019 20:41:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-11929.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/resnet34
|
frgfm
| 2022-07-20T00:57:04Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1512.03385",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ResNet-34 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/resnet34").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/resnet18
|
frgfm
| 2022-07-20T00:56:53Z | 40 | 1 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:1512.03385",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ResNet-18 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf).
## Model description
The core idea of the authors is to ease gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/resnet18").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/repvgg_a1
|
frgfm
| 2022-07-20T00:56:06Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2101.03697",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# RepVGG-A1 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).
## Model description
The core idea of the authors is to distinguish the training architecture (with shortcut connections) from the inference one (a plain feed-forward stack). Thanks to the design of the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations for inference.
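Below is a minimal PyTorch sketch of the training-time block, for illustration only (simplified: strides and the actual branch-fusion arithmetic are omitted, and this is not the exact Holocron implementation):
```python
import torch
from torch import nn


class RepVGGTrainingBlock(nn.Module):
    """Training-time block: three parallel branches whose outputs are summed.

    At inference time, the 3x3, 1x1 and identity branches can be fused
    into a single 3x3 convolution (the reparametrization step).
    """

    def __init__(self, channels: int) -> None:
        super().__init__()
        self.branch_3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch_1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.branch_id = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the three branches, then apply the non-linearity
        return self.act(self.branch_3x3(x) + self.branch_1x1(x) + self.branch_id(x))


block = RepVGGTrainingBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```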
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a1").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
author = {Xiaohan Ding and
Xiangyu Zhang and
Ningning Ma and
Jungong Han and
Guiguang Ding and
Jian Sun},
title = {RepVGG: Making VGG-style ConvNets Great Again},
journal = {CoRR},
volume = {abs/2101.03697},
year = {2021},
url = {https://arxiv.org/abs/2101.03697},
eprinttype = {arXiv},
eprint = {2101.03697},
timestamp = {Tue, 09 Feb 2021 15:29:34 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/repvgg_a0
|
frgfm
| 2022-07-20T00:55:54Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2101.03697",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# RepVGG-A0 model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf).
## Model description
The core idea of the authors is to distinguish the training architecture (with shortcut connections) from the inference one (a plain feed-forward stack). Thanks to the design of the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations for inference.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/repvgg_a0").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2101-03697,
author = {Xiaohan Ding and
Xiangyu Zhang and
Ningning Ma and
Jungong Han and
Guiguang Ding and
Jian Sun},
title = {RepVGG: Making VGG-style ConvNets Great Again},
journal = {CoRR},
volume = {abs/2101.03697},
year = {2021},
url = {https://arxiv.org/abs/2101.03697},
eprinttype = {arXiv},
eprint = {2101.03697},
timestamp = {Tue, 09 Feb 2021 15:29:34 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/rexnet2_0x
|
frgfm
| 2022-07-20T00:55:41Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ReXNet-2.0x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the authors is to add a customized Squeeze-and-Excitation layer in the residual blocks to prevent channel redundancy.
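Below is a generic Squeeze-and-Excitation layer in PyTorch, for illustration only (a simplified sketch, not the exact customized variant used in ReXNet):
```python
import torch
from torch import nn


class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze with global pooling, excite with a small bottleneck MLP."""

    def __init__(self, channels: int, reduction: int = 16) -> None:
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rescale each channel by its learned importance score
        return x * self.fc(self.pool(x))


se = SqueezeExcitation(128)
print(se(torch.randn(1, 128, 28, 28)).shape)  # torch.Size([1, 128, 28, 28])
```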
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet2_0x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/rexnet1_5x
|
frgfm
| 2022-07-20T00:54:55Z | 63 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ReXNet-1.5x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the authors is to add a customized Squeeze-and-Excitation layer in the residual blocks to prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet1_5x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/rexnet1_3x
|
frgfm
| 2022-07-20T00:54:33Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ReXNet-1.3x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the authors is to add a customized Squeeze-and-Excitation layer in the residual blocks to prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet1_3x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
frgfm/rexnet1_0x
|
frgfm
| 2022-07-20T00:53:57Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:frgfm/imagenette",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- frgfm/imagenette
---
# ReXNet-1.0x model
Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the authors is to add a customized Squeeze-and-Excitation layer in the residual blocks to prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron.
### Latest stable release
You can install the latest stable release of the package from [PyPI](https://pypi.org/project/pylocron/) as follows:
```shell
pip install pylocron
```
or using [conda](https://anaconda.org/frgfm/pylocron):
```shell
conda install -c frgfm pylocron
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/frgfm/Holocron.git
pip install -e Holocron/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from holocron.models import model_from_hf_hub
model = model_from_hf_hub("frgfm/rexnet1_0x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
aalogan/bert-ner-nsm1
|
aalogan
| 2022-07-19T22:45:43Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-19T14:00:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aalogan/bert-ner-nsm1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aalogan/bert-ner-nsm1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0366
- Validation Loss: 0.1607
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2694, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4732 | 0.1911 | 0 |
| 0.1551 | 0.1756 | 1 |
| 0.0931 | 0.1747 | 2 |
| 0.0679 | 0.1732 | 3 |
| 0.0477 | 0.1603 | 4 |
| 0.0366 | 0.1607 | 5 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jonaskoenig/topic_classification_03
|
jonaskoenig
| 2022-07-19T20:57:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-19T19:33:22Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: topic_classification_03
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# topic_classification_03
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0459
- Train Sparse Categorical Accuracy: 0.6535
- Validation Loss: 1.1181
- Validation Sparse Categorical Accuracy: 0.6354
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 1.2710 | 0.5838 | 1.1683 | 0.6156 | 0 |
| 1.1546 | 0.6193 | 1.1376 | 0.6259 | 1 |
| 1.1163 | 0.6314 | 1.1247 | 0.6292 | 2 |
| 1.0888 | 0.6400 | 1.1253 | 0.6323 | 3 |
| 1.0662 | 0.6473 | 1.1182 | 0.6344 | 4 |
| 1.0459 | 0.6535 | 1.1181 | 0.6354 | 5 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
t-bank-ai/ruDialoGPT-small
|
t-bank-ai
| 2022-07-19T20:27:35Z | 1,187 | 5 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"conversational",
"text-generation",
"ru",
"arxiv:2001.09977",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-12T14:24:39Z |
---
license: mit
pipeline_tag: text-generation
widget:
- text: "@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@"
example_title: "how r u"
- text: "@@ПЕРВЫЙ@@ что ты делал на выходных? @@ВТОРОЙ@@"
example_title: "wyd"
language:
- ru
tags:
- conversational
---
This generation model is based on [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2). It was trained on a large corpus of dialog data and can be used for building generative conversational agents.
The model was trained with a context size of 3.
On a private validation set we calculated the metrics introduced in [this paper](https://arxiv.org/pdf/2001.09977.pdf):
- Sensibleness: crowdworkers were asked whether the model's response makes sense given the context
- Specificity: crowdworkers were asked whether the model's response is specific to the given context; in other words, we don't want the model to give generic and boring responses
- SSA, the average of the two metrics above (Sensibleness Specificity Average); a short worked example follows the table below
| | sensibleness | specificity | SSA |
|:----------------------------------------------------|---------------:|--------------:|------:|
| [tinkoff-ai/ruDialoGPT-small](https://huggingface.co/tinkoff-ai/ruDialoGPT-small) | 0.64 | 0.5 | 0.57 |
| [tinkoff-ai/ruDialoGPT-medium](https://huggingface.co/tinkoff-ai/ruDialoGPT-medium) | 0.78 | 0.69 | 0.735 |
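As a quick check, SSA is simply the arithmetic mean of the two scores. A minimal sketch using the ruDialoGPT-small row from the table above:
```python
# SSA = (sensibleness + specificity) / 2
sensibleness, specificity = 0.64, 0.50  # ruDialoGPT-small row from the table
ssa = (sensibleness + specificity) / 2
print(round(ssa, 2))  # 0.57
```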
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/ruDialoGPT-small')
model = AutoModelWithLMHead.from_pretrained('tinkoff-ai/ruDialoGPT-small')
inputs = tokenizer('@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@', return_tensors='pt')
generated_token_ids = model.generate(
**inputs,
top_k=10,
top_p=0.95,
num_beams=3,
num_return_sequences=3,
do_sample=True,
no_repeat_ngram_size=2,
temperature=1.2,
repetition_penalty=1.2,
length_penalty=1.0,
eos_token_id=50257,
max_new_tokens=40
)
context_with_response = [tokenizer.decode(sample_token_ids) for sample_token_ids in generated_token_ids]
context_with_response
```
|
QuickSilver007/MLAgents-Pyramids_v2
|
QuickSilver007
| 2022-07-19T19:59:09Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-19T19:59:03Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: QuickSilver007/MLAgents-Pyramids_v2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kamangir/image-classifier
|
kamangir
| 2022-07-19T18:45:03Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"license:cc",
"region:us"
] | null | 2022-07-12T19:36:45Z |
---
license: cc
---
# Image Classifier
`image-classifier` is an extendable TensorFlow image classifier with a Bash CLI and Hugging Face integration. To see the list of `image-classifier` commands, complete the [installation](#Installation) and type in:
```
image_classifier ?
```
## Installation
To install `image-classifier` first [install and configure awesome-bash-cli](https://github.com/kamangir/awesome-bash-cli) then run:
```
abcli huggingface clone image-classifier
```
To see the list of `image-classifier` saved models type in
```
image_classifier list
```
You should see the following items:
1. [fashion-mnist](#fashion-mnist)
1. intel-image-classifier 🚧
1. vegetable-classifier 🚧
## fashion-mnist

`fashion-mnist` is an `image-classifier` trained on [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist).
To retrain `fashion-mnist` type in:
```
abcli select
fashion_mnist train
abcli upload
image_classifier list . browser=1,model=object
```
You should now see the structure of the network (left) and the [content of the model](https://github.com/kamangir/browser) (right).
|  |  |
|---|---|
You can save this model under a new name by typing in:
```
fashion_mnist save new_name_1
```
/ END
|
bigmorning/oscarth_54321
|
bigmorning
| 2022-07-19T16:15:29Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-19T15:49:28Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: oscarth_54321
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# oscarth_54321
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5784
- Validation Loss: 4.5266
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.6206 | 4.5583 | 0 |
| 4.5784 | 4.5266 | 1 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Rocketknight1/bert-base-cased-finetuned-wikitext2
|
Rocketknight1
| 2022-07-19T14:14:15Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/bert-base-cased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/bert-base-cased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.3982
- Validation Loss: 6.2664
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.0679 | 6.4768 | 0 |
| 6.3982 | 6.2664 | 1 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.3.3.dev0
- Tokenizers 0.11.0
|
Tahsin-Mayeesha/t5-end2end-questions-generation
|
Tahsin-Mayeesha
| 2022-07-19T13:52:43Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-19T11:58:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.6143
- eval_runtime: 96.0898
- eval_samples_per_second: 21.511
- eval_steps_per_second: 5.38
- epoch: 2.03
- step: 600
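Below is a minimal inference sketch. The exact task prefix and separators used during fine-tuning are not documented in this card, so the `generate questions:` prefix is only an assumption; the rest relies on the generic `transformers` text2text interface:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Tahsin-Mayeesha/t5-end2end-questions-generation"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# "generate questions:" is an assumed prefix for end-to-end question generation
text = "generate questions: The Eiffel Tower was completed in 1889 and is located in Paris."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```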
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Eleven/bart-large-mnli-finetuned-emotion
|
Eleven
| 2022-07-19T13:17:53Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-18T19:19:13Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-mnli-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-mnli-finetuned-emotion
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
saadob12/t5_C2T_autochart
|
saadob12
| 2022-07-19T13:03:11Z | 18 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2108.06897",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-08T15:50:39Z |
# Training Data
**Autochart:** Zhu, J., Ran, J., Lee, R. K. W., Choo, K., & Li, Z. (2021). AutoChart: A Dataset for Chart-to-Text Generation Task. arXiv preprint arXiv:2108.06897.
**Gitlab Link for the data**: https://gitlab.com/bottle_shop/snlg/chart/autochart
Train split for this model: Train 8000, Validation 1297, Test 1296
# Example use:
Prepend ```C2T: ``` to every input to the model:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("saadob12/t5_C2T_autochart")
model = AutoModelForSeq2SeqLM.from_pretrained("saadob12/t5_C2T_autochart")
data = 'Trade statistics of Qatar with developing economies in North Africa bar_chart Year-Trade with economies of Middle East & North Africa(%)(Merchandise exports,Merchandise imports) x-y1-y2 values 2000 0.591869968616745 3.59339030672154 , 2001 0.53415012207203 3.25371165779341 , 2002 3.07769793440318 1.672796364224 , 2003 0.6932513078579471 1.62522475477827 , 2004 1.17635914189321 1.80540331396412'
prefix = 'C2T: '
tokens = tokenizer.encode(prefix + data, truncation=True, padding='max_length', return_tensors='pt')
generated = model.generate(tokens, num_beams=4, max_length=256)
tgt_text = tokenizer.decode(generated[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
summary = str(tgt_text).strip('[]""')
#Summary: This barchart shows the number of trade statistics of qatar with developing economies in north africa from 2000 through 2004. The unit of measurement in this graph is Trade with economies of Middle East & North Africa(%) as shown on the y-axis. The first group data denotes the change of Merchandise exports. There is a go up and down trend of the number. The peak of the number is found in 2002 and the lowest number is found in 2001. The changes in the number may be related to the conuntry's national policies. The second group data denotes the change of Merchandise imports. There is a go up and down trend of the number. The number in 2000 being the peak, and the lowest number is found in 2003. The changes in the number may be related to the conuntry's national policies.
```
# Limitations
You can use the model to generate summaries of data files.
Works well for general statistics like the following:
| Year | Children born per woman |
|:---:|:---:|
| 2018 | 1.14 |
| 2017 | 1.45 |
| 2016 | 1.49 |
| 2015 | 1.54 |
| 2014 | 1.6 |
| 2013 | 1.65 |
May or may not generate an **okay** summary at best for the following kind of data:
| Model | BLEU score | BLEURT|
|:---:|:---:|:---:|
| t5-small | 25.4 | -0.11 |
| t5-base | 28.2 | 0.12 |
| t5-large | 35.4 | 0.34 |
# Citation
Kindly cite my work. Thank you.
```
@misc{obaid ul islam_2022,
title={saadob12/t5_C2T_autochart Hugging Face},
url={https://huggingface.co/saadob12/t5_C2T_autochart},
journal={Huggingface.co},
author={Obaid ul Islam, Saad},
year={2022}
}
```
|
raisinbl/distilbert-base-uncased-finetuned-squad_2_512_1
|
raisinbl
| 2022-07-19T12:38:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-18T16:03:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad_2_512_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad_2_512_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2681 | 1.0 | 4079 | 1.2434 |
| 1.0223 | 2.0 | 8158 | 1.3153 |
| 0.865 | 3.0 | 12237 | 1.3225 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
spacestar1705/testpyramidsrnd
|
spacestar1705
| 2022-07-19T12:20:07Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-19T12:20:02Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: spacestar1705/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2
|
luomingshuang
| 2022-07-19T11:56:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-05-16T08:24:41Z |
Note: This recipe was trained with the code from this PR https://github.com/k2-fsa/icefall/pull/355 and the SpecAugment code from this PR https://github.com/lhotse-speech/lhotse/pull/604.
# Pre-trained Transducer-Stateless2 models for the Aidatatang_200zh dataset with icefall.
The model was trained on the full [Aidatatang_200zh](https://www.openslr.org/62) dataset with the scripts in [icefall](https://github.com/k2-fsa/icefall), based on the latest version of k2.
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse; the k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse one at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should be fine. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit mentioned above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/aidatatang_200zh/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1"
./pruned_transducer_stateless2/train.py \
--world-size 2 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir pruned_transducer_stateless2/exp \
--lang-dir data/lang_char \
--max-duration 250
```
## Evaluation results
The decoding results (WER%) on Aidatatang_200zh (dev and test) are listed below; we obtained these results by averaging the models from epoch 11 to 29.
The WERs are
| | dev | test | comment |
|------------------------------------|------------|------------|------------------------------------------|
| greedy search | 5.53 | 6.59 | --epoch 29, --avg 19, --max-duration 100 |
| modified beam search (beam size 4) | 5.28 | 6.32 | --epoch 29, --avg 19, --max-duration 100 |
| fast beam search (set as default) | 5.29 | 6.33 | --epoch 29, --avg 19, --max-duration 1500|
|
kabelomalapane/Nso-En_update
|
kabelomalapane
| 2022-07-19T11:40:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-07-19T11:31:18Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Nso-En_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nso-En_update
This model is a fine-tuned version of [kabelomalapane/En-Nso](https://huggingface.co/kabelomalapane/En-Nso) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9219
- Bleu: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| No log | 1.0 | 108 | 2.0785 | 0.0 |
| No log | 2.0 | 216 | 1.9015 | 0.0 |
| No log | 3.0 | 324 | 1.8730 | 0.0 |
| No log | 4.0 | 432 | 1.8626 | 0.0 |
| 2.1461 | 5.0 | 540 | 1.8743 | 0.0 |
| 2.1461 | 6.0 | 648 | 1.8903 | 0.0 |
| 2.1461 | 7.0 | 756 | 1.9018 | 0.0 |
| 2.1461 | 8.0 | 864 | 1.9236 | 0.0 |
| 2.1461 | 9.0 | 972 | 1.9210 | 0.0 |
| 1.2781 | 10.0 | 1080 | 1.9219 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384
|
robingeibel
| 2022-07-19T10:13:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"generated_from_trainer",
"dataset:wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-14T15:11:09Z |
---
tags:
- generated_from_trainer
datasets:
- wikipedia
model-index:
- name: reformer-finetuned-big_patent-wikipedia-arxiv-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-finetuned-big_patent-wikipedia-arxiv-16384
This model is a fine-tuned version of [robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384](https://huggingface.co/robingeibel/reformer-finetuned-big_patent-wikipedia-arxiv-16384) on the wikipedia dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 8.0368 | 1.0 | 3785 | 6.7392 |
| 6.7992 | 2.0 | 7570 | 6.5576 |
| 6.6926 | 3.0 | 11355 | 6.5256 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Malanga/finetuning-sentiment-model-3000-samples
|
Malanga
| 2022-07-19T09:49:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-19T09:30:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3104
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
spacestar1705/dqn-SpaceInvadersNoFrameskip-v4
|
spacestar1705
| 2022-07-19T09:41:56Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-19T09:41:17Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 508.50 +/- 105.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga spacestar1705 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga spacestar1705
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 100),
('train_freq', 4),
('normalize', False)])
```
|
ArthurZ/jukebox-1b-lyrics
|
ArthurZ
| 2022-07-19T09:40:53Z | 17 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jukebox",
"feature-extraction",
"MusicGeneration",
"arxiv:2005.00341",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-05-30T08:11:09Z |
---
tags:
- MusicGeneration
- jukebox
---
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Jukebox
## Overview
The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf)
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever.
The paper proposes a generative music model which can produce minute-long samples conditioned on artist, genre and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
Tips:
This model is very slow for now, and takes 18h to generate a minute-long audio sample.
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/openai/jukebox).
|
AliMMZ/q-Taxi-v3
|
AliMMZ
| 2022-07-19T08:20:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-19T08:00:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AliMMZ/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
hirohiroz/wav2vec2-base-timit-demo-google-colab-tryjpn
|
hirohiroz
| 2022-07-19T08:16:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-14T03:11:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab-tryjpn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab-tryjpn
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1527
- Wer: 1.0
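As a minimal usage sketch (not part of the original card; the audio path is a placeholder), the checkpoint can be queried through the ASR pipeline. Note that the reported WER of 1.0 suggests the transcriptions are not yet usable:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="hirohiroz/wav2vec2-base-timit-demo-google-colab-tryjpn",
)
# "sample.wav" is a placeholder; use a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```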
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 48.3474 | 6.67 | 100 | 68.0887 | 1.0 |
| 7.601 | 13.33 | 200 | 8.3667 | 1.0 |
| 4.9107 | 20.0 | 300 | 5.6991 | 1.0 |
| 4.379 | 26.67 | 400 | 5.1527 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Kayvane/distilbert-base-uncased-wandb-week-3-complaints-classifier-256
|
Kayvane
| 2022-07-19T06:29:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:consumer-finance-complaints",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-19T05:06:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilbert-base-uncased-wandb-week-3-complaints-classifier-256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: consumer-finance-complaints
type: consumer-finance-complaints
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8234544620559604
- name: F1
type: f1
value: 0.8176243580045963
- name: Recall
type: recall
value: 0.8234544620559604
- name: Precision
type: precision
value: 0.8171438106054644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-wandb-week-3-complaints-classifier-256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5453
- Accuracy: 0.8235
- F1: 0.8176
- Recall: 0.8235
- Precision: 0.8171
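A minimal inference sketch (not part of the original card; the example complaint is illustrative and the predicted label names depend on the model config):
```python
from transformers import pipeline

# Classify a consumer-finance complaint into the product category predicted by the model.
classifier = pipeline(
    "text-classification",
    model="Kayvane/distilbert-base-uncased-wandb-week-3-complaints-classifier-256",
)
print(classifier("I was charged an overdraft fee even though my account had sufficient funds."))
```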
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.097565552226687e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 256
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6691 | 0.61 | 1500 | 0.6475 | 0.7962 | 0.7818 | 0.7962 | 0.7875 |
| 0.5361 | 1.22 | 3000 | 0.5794 | 0.8161 | 0.8080 | 0.8161 | 0.8112 |
| 0.4659 | 1.83 | 4500 | 0.5453 | 0.8235 | 0.8176 | 0.8235 | 0.8171 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonaskoenig/topic_classification_01
|
jonaskoenig
| 2022-07-19T06:15:47Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-18T17:58:13Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: topic_classification_01
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# topic_classification_01
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0306
- Train Binary Crossentropy: 0.5578
- Epoch: 9
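A hedged usage sketch (the checkpoint ships TensorFlow weights; the sigmoid read-out is an assumption based on the binary cross-entropy metric above, and the example sentence is arbitrary):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "jonaskoenig/topic_classification_01"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("How do I fine-tune a transformer model?", return_tensors="tf")
logits = model(**inputs).logits
# Binary cross-entropy training suggests independent (multi-label) sigmoid scores.
print(tf.sigmoid(logits).numpy())
```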
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Binary Crossentropy | Epoch |
|:----------:|:-------------------------:|:-----:|
| 0.0397 | 0.7274 | 0 |
| 0.0352 | 0.6392 | 1 |
| 0.0339 | 0.6142 | 2 |
| 0.0330 | 0.5989 | 3 |
| 0.0324 | 0.5882 | 4 |
| 0.0319 | 0.5799 | 5 |
| 0.0315 | 0.5730 | 6 |
| 0.0312 | 0.5672 | 7 |
| 0.0309 | 0.5623 | 8 |
| 0.0306 | 0.5578 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fqw/t5-pegasus-finetuned_test
|
fqw
| 2022-07-19T06:14:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-19T03:32:58Z |
---
tags:
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: t5-pegasus-finetuned_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-pegasus-finetuned_test
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0045
- Sacrebleu: 0.8737
- Rouge 1: 0.0237
- Rouge 2: 0.0
- Rouge L: 0.0232
- Bleu 1: 0.1444
- Bleu 2: 0.0447
- Bleu 3: 0.0175
- Bleu 4: 0.0083
- Meteor: 0.0609
- Gen Len: 15.098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Rouge 1 | Rouge 2 | Rouge L | Bleu 1 | Bleu 2 | Bleu 3 | Bleu 4 | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|:-------:|:-------:|:------:|:------:|:------:|:------:|:------:|:-------:|
| No log | 52.5 | 210 | 5.9818 | 0.9114 | 0.0229 | 0.0 | 0.0225 | 0.1424 | 0.0436 | 0.0183 | 0.0091 | 0.06 | 15.126 |
| No log | 70.0 | 280 | 6.0072 | 0.876 | 0.0233 | 0.0 | 0.0228 | 0.1437 | 0.0452 | 0.0177 | 0.0083 | 0.0607 | 15.088 |
| No log | 87.5 | 350 | 6.0017 | 0.8695 | 0.0229 | 0.0 | 0.0225 | 0.1445 | 0.0443 | 0.0175 | 0.0082 | 0.0609 | 15.12 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dafraile/Clini-dialog-sum-BART
|
dafraile
| 2022-07-19T05:12:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-13T03:49:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: tst-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-summarization
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9975
- Rouge1: 56.239
- Rouge2: 28.9873
- Rougel: 38.5242
- Rougelsum: 53.7902
- Gen Len: 105.2973
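A short usage sketch (not from the original card; the dialogue and generation settings are placeholders):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dafraile/Clini-dialog-sum-BART")
dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a persistent cough and mild fever for about two weeks."
)
# Generation lengths are arbitrary; tune them for your dialogues.
print(summarizer(dialogue, max_length=128, min_length=20)[0]["summary_text"])
```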
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Kayvane/distilroberta-base-wandb-week-3-complaints-classifier-512
|
Kayvane
| 2022-07-19T05:04:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:consumer-finance-complaints",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-19T03:40:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilroberta-base-wandb-week-3-complaints-classifier-512
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: consumer-finance-complaints
type: consumer-finance-complaints
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8038326283064064
- name: F1
type: f1
value: 0.791857014338201
- name: Recall
type: recall
value: 0.8038326283064064
- name: Precision
type: precision
value: 0.7922430702228043
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wandb-week-3-complaints-classifier-512
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the consumer-finance-complaints dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6004
- Accuracy: 0.8038
- F1: 0.7919
- Recall: 0.8038
- Precision: 0.7922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7835312622444155e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 512
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7559 | 0.61 | 1500 | 0.7307 | 0.7733 | 0.7411 | 0.7733 | 0.7286 |
| 0.6361 | 1.22 | 3000 | 0.6559 | 0.7846 | 0.7699 | 0.7846 | 0.7718 |
| 0.5774 | 1.83 | 4500 | 0.6004 | 0.8038 | 0.7919 | 0.8038 | 0.7922 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DsVuin/Flower
|
DsVuin
| 2022-07-19T03:46:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-07-19T03:45:37Z |
Field blue flowers and bright stars ethereal in holy lighting
|
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1
|
gary109
| 2022-07-19T03:23:28Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-08T00:35:14Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5459
- Wer: 0.2463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| 0.3909 | 1.0 | 2309 | 0.5615 | 0.2459 |
| 0.4094 | 2.0 | 4618 | 0.5654 | 0.2439 |
| 0.326 | 3.0 | 6927 | 0.5568 | 0.2470 |
| 0.4577 | 4.0 | 9236 | 0.5795 | 0.2474 |
| 0.3628 | 5.0 | 11545 | 0.5459 | 0.2463 |
| 0.3135 | 6.0 | 13854 | 0.5582 | 0.2473 |
| 0.5058 | 7.0 | 16163 | 0.5677 | 0.2439 |
| 0.3188 | 8.0 | 18472 | 0.5646 | 0.2445 |
| 0.3589 | 9.0 | 20781 | 0.5626 | 0.2479 |
| 0.4021 | 10.0 | 23090 | 0.5722 | 0.2452 |
| 0.4362 | 11.0 | 25399 | 0.5659 | 0.2431 |
| 0.3215 | 12.0 | 27708 | 0.5658 | 0.2445 |
| 0.3646 | 13.0 | 30017 | 0.5785 | 0.2459 |
| 0.3757 | 14.0 | 32326 | 0.5757 | 0.2418 |
| 0.3311 | 15.0 | 34635 | 0.5672 | 0.2455 |
| 0.3709 | 16.0 | 36944 | 0.5669 | 0.2434 |
| 0.3342 | 17.0 | 39253 | 0.5610 | 0.2455 |
| 0.3236 | 18.0 | 41562 | 0.5652 | 0.2436 |
| 0.3566 | 19.0 | 43871 | 0.5773 | 0.2407 |
| 0.2912 | 20.0 | 46180 | 0.5764 | 0.2453 |
| 0.3652 | 21.0 | 48489 | 0.5732 | 0.2423 |
| 0.3785 | 22.0 | 50798 | 0.5696 | 0.2423 |
| 0.3968 | 23.0 | 53107 | 0.5690 | 0.2429 |
| 0.2968 | 24.0 | 55416 | 0.5800 | 0.2427 |
| 0.428 | 25.0 | 57725 | 0.5704 | 0.2441 |
| 0.383 | 26.0 | 60034 | 0.5739 | 0.2450 |
| 0.3694 | 27.0 | 62343 | 0.5791 | 0.2437 |
| 0.3449 | 28.0 | 64652 | 0.5780 | 0.2451 |
| 0.3008 | 29.0 | 66961 | 0.5749 | 0.2418 |
| 0.3939 | 30.0 | 69270 | 0.5737 | 0.2424 |
| 0.3451 | 31.0 | 71579 | 0.5805 | 0.2402 |
| 0.3513 | 32.0 | 73888 | 0.5670 | 0.2379 |
| 0.3866 | 33.0 | 76197 | 0.5706 | 0.2389 |
| 0.3831 | 34.0 | 78506 | 0.5635 | 0.2401 |
| 0.3641 | 35.0 | 80815 | 0.5708 | 0.2405 |
| 0.3345 | 36.0 | 83124 | 0.5699 | 0.2405 |
| 0.2902 | 37.0 | 85433 | 0.5711 | 0.2373 |
| 0.2868 | 38.0 | 87742 | 0.5713 | 0.2389 |
| 0.3232 | 39.0 | 90051 | 0.5702 | 0.2392 |
| 0.3277 | 40.0 | 92360 | 0.5658 | 0.2393 |
| 0.3234 | 41.0 | 94669 | 0.5732 | 0.2412 |
| 0.3625 | 42.0 | 96978 | 0.5740 | 0.2396 |
| 0.4075 | 43.0 | 99287 | 0.5733 | 0.2389 |
| 0.3473 | 44.0 | 101596 | 0.5735 | 0.2394 |
| 0.3157 | 45.0 | 103905 | 0.5721 | 0.2391 |
| 0.3866 | 46.0 | 106214 | 0.5715 | 0.2381 |
| 0.4062 | 47.0 | 108523 | 0.5711 | 0.2380 |
| 0.3871 | 48.0 | 110832 | 0.5716 | 0.2380 |
| 0.2924 | 49.0 | 113141 | 0.5723 | 0.2374 |
| 0.3655 | 50.0 | 115450 | 0.5709 | 0.2379 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
shivaniNK8/t5-small-finetuned-cnn-news
|
shivaniNK8
| 2022-07-19T02:37:27Z | 30 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-07-19T01:48:34Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.7231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8412
- Rouge1: 24.7231
- Rouge2: 12.292
- Rougel: 20.5347
- Rougelsum: 23.4668
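A minimal sketch of how the checkpoint might be queried (the "summarize: " prefix follows the usual T5 convention; the article text and generation settings are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "shivaniNK8/t5-small-finetuned-cnn-news"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

article = "summarize: The city council approved a new transit budget on Tuesday ..."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```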
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00056
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.0318 | 1.0 | 718 | 1.8028 | 24.5415 | 12.0907 | 20.5343 | 23.3386 |
| 1.8307 | 2.0 | 1436 | 1.8028 | 24.0965 | 11.6367 | 20.2078 | 22.8138 |
| 1.6881 | 3.0 | 2154 | 1.8136 | 25.0822 | 12.6509 | 20.9523 | 23.8303 |
| 1.5778 | 4.0 | 2872 | 1.8269 | 24.4271 | 11.8443 | 20.2281 | 23.0941 |
| 1.501 | 5.0 | 3590 | 1.8412 | 24.7231 | 12.292 | 20.5347 | 23.4668 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KaliYuga/lapelpindiffusion
|
KaliYuga
| 2022-07-19T01:50:20Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2022-07-10T04:59:13Z |
---
license: other
---
NOT FOR PUBLIC USE RIGHT NOW
If you have found this model, I'd prefer you not use it at the moment--it's not ready for public release and I'm probably going to be releasing it for real as a patrons-only model. It's just hosted here so I can port it into the test notebook I'm running, since hosting private models doesn't work with Colab!
Thanks, guys!!
|
helpmefindaname/mini-sequence-tagger-conll03
|
helpmefindaname
| 2022-07-19T00:53:03Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2003",
"region:us"
] |
token-classification
| 2022-07-14T23:30:10Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: en
datasets:
- conll2003
widget:
- text: "George Washington went to Washington"
---
This is a very small model I use for testing my [ner eval dashboard](https://github.com/helpmefindaname/ner-eval-dashboard)
F1-Score: **48.73** (CoNLL-03)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on the Hugging Face minimal testing embeddings (`hf-internal-testing/tiny-random-bert`)
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("helpmefindaname/mini-sequence-tagger-conll03")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*".
---
### Training: Script to train this model
The following command was used to train this model, where `examples\ner\run_ner.py` refers to [this script](https://github.com/flairNLP/flair/blob/master/examples/ner/run_ner.py):
```
python examples\ner\run_ner.py --model_name_or_path hf-internal-testing/tiny-random-bert --dataset_name CONLL_03 --learning_rate 0.002 --mini_batch_chunk_size 1024 --batch_size 64 --num_epochs 100
```
---
|
Kayvane/distilroberta-base-wandb-week-3-complaints-classifier-1024
|
Kayvane
| 2022-07-19T00:52:23Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:consumer-finance-complaints",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-18T17:43:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilroberta-base-wandb-week-3-complaints-classifier-1024
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: consumer-finance-complaints
type: consumer-finance-complaints
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8279904184292339
- name: F1
type: f1
value: 0.8236604095677945
- name: Recall
type: recall
value: 0.8279904184292339
- name: Precision
type: precision
value: 0.8235526237070518
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wandb-week-3-complaints-classifier-1024
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the consumer-finance-complaints dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5351
- Accuracy: 0.8280
- F1: 0.8237
- Recall: 0.8280
- Precision: 0.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.027176214786854e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1024
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7756 | 0.61 | 1500 | 0.7411 | 0.7647 | 0.7375 | 0.7647 | 0.7606 |
| 0.5804 | 1.22 | 3000 | 0.6140 | 0.8088 | 0.8052 | 0.8088 | 0.8077 |
| 0.5008 | 1.83 | 4500 | 0.5351 | 0.8280 | 0.8237 | 0.8280 | 0.8236 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/roberta-base-spanish-squades-becas1
|
Evelyn18
| 2022-07-18T23:21:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-18T23:14:18Z |
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becas1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becas1
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4402
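A usage sketch (not part of the original card; the question and context are invented examples, not taken from becasv2):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/roberta-base-spanish-squades-becas1")
result = qa(
    question="¿Quién puede solicitar la beca?",
    context="La beca está dirigida a estudiantes de pregrado con un promedio mínimo de 8.5.",
)
print(result["answer"], result["score"])
```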
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 1.8851 |
| No log | 2.0 | 12 | 1.7681 |
| No log | 3.0 | 18 | 2.0453 |
| No log | 4.0 | 24 | 2.2795 |
| No log | 5.0 | 30 | 2.4024 |
| No log | 6.0 | 36 | 2.4402 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50
|
ronanki
| 2022-07-18T22:16:18Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-07-18T22:16:09Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/ml_use_512_MNR_10-2022-07-17_14-22-50)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 22 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 22,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
rahuldebdas79/finetuning-sentiment-model-3000-samples
|
rahuldebdas79
| 2022-07-18T18:40:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-08T09:05:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8684210526315789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3157
- Accuracy: 0.8667
- F1: 0.8684
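A minimal sketch without the pipeline API (the review text is arbitrary, and the label order depends on the model's `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "rahuldebdas79/finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A surprisingly moving film with great performances.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class probabilities; map indices via model.config.id2label
```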
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ChuVN/bart-base-finetuned-squad2
|
ChuVN
| 2022-07-18T17:00:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-18T04:13:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bart-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9981 | 1.0 | 16319 | 0.9607 |
| 0.7521 | 2.0 | 32638 | 1.0446 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
elliotthwang/mt5-small-finetuned-tradition-zh
|
elliotthwang
| 2022-07-18T16:44:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-29T13:09:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-tradition-zh
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: chinese_traditional
metrics:
- name: Rouge1
type: rouge
value: 5.7806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-tradition-zh
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9218
- Rouge1: 5.7806
- Rouge2: 1.266
- Rougel: 5.761
- Rougelsum: 5.7833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 4.542 | 1.0 | 2336 | 3.1979 | 4.8334 | 1.025 | 4.8142 | 4.8326 |
| 3.7542 | 2.0 | 4672 | 3.0662 | 5.2155 | 1.0978 | 5.2025 | 5.2158 |
| 3.5706 | 3.0 | 7008 | 3.0070 | 5.5471 | 1.3397 | 5.5386 | 5.5391 |
| 3.4668 | 4.0 | 9344 | 2.9537 | 5.5865 | 1.1558 | 5.5816 | 5.5964 |
| 3.4082 | 5.0 | 11680 | 2.9391 | 5.8061 | 1.3462 | 5.7944 | 5.812 |
| 3.375 | 6.0 | 14016 | 2.9218 | 5.7806 | 1.266 | 5.761 | 5.7833 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bothrajat/CartPole
|
bothrajat
| 2022-07-18T16:19:35Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-18T16:19:20Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
AliMMZ/q-FrozenLake-v1-4x4-noSlippery
|
AliMMZ
| 2022-07-18T16:07:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-18T16:07:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AliMMZ/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Evelyn18/roberta-base-spanish-squades-modelo-robertav0
|
Evelyn18
| 2022-07-18T16:01:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-18T15:52:15Z |
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-modelo-robertav0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-modelo-robertav0
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 2.1175 |
| No log | 2.0 | 12 | 1.7427 |
| No log | 3.0 | 18 | 2.0810 |
| No log | 4.0 | 24 | 2.3820 |
| No log | 5.0 | 30 | 2.5007 |
| No log | 6.0 | 36 | 2.6782 |
| No log | 7.0 | 42 | 2.7578 |
| No log | 8.0 | 48 | 2.7703 |
| No log | 9.0 | 54 | 2.7654 |
| No log | 10.0 | 60 | 2.7628 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
silviacamplani/distilbert-uncase-finetuned-ai-ner
|
silviacamplani
| 2022-07-18T15:56:55Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-08T09:55:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-uncase-finetuned-ai-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-uncase-finetuned-ai-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5704
- Validation Loss: 2.5380
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2918 | 3.0479 | 0 |
| 2.8526 | 2.6902 | 1 |
| 2.5704 | 2.5380 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
domenicrosati/pegasus-xsum-finetuned-paws-parasci
|
domenicrosati
| 2022-07-18T15:35:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"paraphrasing",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-18T12:24:37Z |
---
tags:
- paraphrasing
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-xsum-finetuned-paws-parasci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-finetuned-paws-parasci
This model is a fine-tuned version of [domenicrosati/pegasus-xsum-finetuned-paws](https://huggingface.co/domenicrosati/pegasus-xsum-finetuned-paws) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2256
- Rouge1: 61.8854
- Rouge2: 43.1061
- Rougel: 57.421
- Rougelsum: 57.4417
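An illustrative sketch (the input sentence and decoding settings are placeholders, not from the training setup):
```python
from transformers import pipeline

paraphraser = pipeline(
    "text2text-generation",
    model="domenicrosati/pegasus-xsum-finetuned-paws-parasci",
)
outputs = paraphraser(
    "The experiment was repeated three times to ensure reproducibility.",
    num_beams=5,
    num_return_sequences=3,
)
print([o["generated_text"] for o in outputs])
```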
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.05 | 1000 | 3.8024 | 49.471 | 24.8024 | 43.4857 | 43.5552 |
| No log | 0.09 | 2000 | 3.6533 | 49.1046 | 24.4038 | 43.0189 | 43.002 |
| No log | 0.14 | 3000 | 3.5867 | 49.5026 | 24.748 | 43.3059 | 43.2923 |
| No log | 0.19 | 4000 | 3.5613 | 49.4319 | 24.5444 | 43.2225 | 43.1965 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Kayvane/distilbert-base-uncased-wandb-week-3-complaints-classifier-1500
|
Kayvane
| 2022-07-18T15:32:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:consumer-finance-complaints",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-18T08:15:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- consumer-finance-complaints
model-index:
- name: distilbert-base-uncased-wandb-week-3-complaints-classifier-1500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-wandb-week-3-complaints-classifier-1500
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
OATML-Markslab/Tranception_Large
|
OATML-Markslab
| 2022-07-18T15:25:35Z | 10 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tranception",
"fill-mask",
"arxiv:2205.13760",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-14T08:54:44Z |
# Tranception model
This Hugging Face Hub repo contains the model checkpoint for the Tranception model as described in our paper ["Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval"](https://arxiv.org/abs/2205.13760). The official GitHub repository can be accessed [here](https://github.com/OATML-Markslab/Tranception). This project is a joint collaboration between the [Marks lab](https://www.deboramarkslab.com/) and the [OATML group](https://oatml.cs.ox.ac.uk/).
## Abstract
The ability to accurately model the fitness landscape of protein sequences is critical to a wide range of applications, from quantifying the effects of human variants on disease likelihood, to predicting immune-escape mutations in viruses and designing novel biotherapeutic proteins. Deep generative models of protein sequences trained on multiple sequence alignments have been the most successful approaches so far to address these tasks. The performance of these methods is however contingent on the availability of sufficiently deep and diverse alignments for reliable training. Their potential scope is thus limited by the fact many protein families are hard, if not impossible, to align. Large language models trained on massive quantities of non-aligned protein sequences from diverse families address these problems and show potential to eventually bridge the performance gap. We introduce Tranception, a novel transformer architecture leveraging autoregressive predictions and retrieval of homologous sequences at inference to achieve state-of-the-art fitness prediction performance. Given its markedly higher performance on multiple mutants, robustness to shallow alignments and ability to score indels, our approach offers significant gain of scope over existing approaches. To enable more rigorous model testing across a broader range of protein families, we develop ProteinGym -- an extensive set of multiplexed assays of variant effects, substantially increasing both the number and diversity of assays compared to existing benchmarks.
## License
This project is available under the MIT license.
## Reference
If you use Tranception or other files provided through our GitHub repository, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
Pre-print: https://arxiv.org/abs/2205.13760
GitHub: https://github.com/OATML-Markslab/Tranception
|
yixi/bert-finetuned-ner
|
yixi
| 2022-07-18T13:42:24Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-17T23:09:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.934260639178672
- name: Recall
type: recall
value: 0.9495119488387749
- name: F1
type: f1
value: 0.9418245555462816
- name: Accuracy
type: accuracy
value: 0.9868281627126626
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0573
- Precision: 0.9343
- Recall: 0.9495
- F1: 0.9418
- Accuracy: 0.9868
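A minimal usage sketch (the example sentence is arbitrary; `aggregation_strategy="simple"` merges sub-word predictions into entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yixi/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```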
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0854 | 1.0 | 1756 | 0.0639 | 0.9148 | 0.9329 | 0.9238 | 0.9822 |
| 0.0403 | 2.0 | 3512 | 0.0542 | 0.9370 | 0.9512 | 0.9440 | 0.9866 |
| 0.0204 | 3.0 | 5268 | 0.0573 | 0.9343 | 0.9495 | 0.9418 | 0.9868 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
svenstahlmann/finetuned-distilbert-needmining
|
svenstahlmann
| 2022-07-18T13:15:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"needmining",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-18T12:50:37Z |
---
language: en
tags:
- distilbert
- needmining
license: apache-2.0
metric:
- f1
---
# Finetuned-Distilbert-needmining (uncased)
This model is a finetuned version of the [Distilbert base model](https://huggingface.co/distilbert-base-uncased). It was
trained to predict need-containing sentences from Amazon product reviews.
## Model description
This model is part of ongoing research; more information will be added after the research is published.
## Intended uses & limitations
You can use this model to identify sentences that contain customer needs in user-generated content. This can act as a filtering process to remove uninformative content for market research.
### How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="svenstahlmann/finetuned-distilbert-needmining")
>>> classifier("the plasic feels super cheap.")
[{'label': 'contains need', 'score': 0.9397542476654053}]
```
### Limitations and bias
We are not aware of any bias in the training data.
## Training data
The training was done on a dataset of 6400 sentences. The sentences were taken from Amazon product reviews and coded according to whether they express customer needs.
## Training procedure
For the training, we used [Population Based Training (PBT)](https://www.deepmind.com/blog/population-based-training-of-neural-networks) and optimized for f1 score on a validation set of 1600 sentences.
### Preprocessing
The preprocessing follows the [Distilbert base model](https://huggingface.co/distilbert-base-uncased).
### Pretraining
The model was trained on a Titan RTX GPU for 1 hour.
## Evaluation results
Results on the validation set:
| F1 |
|:----:|
| 76.0 |
### BibTeX entry and citation info
coming soon
|
MMVos/distilbert-base-uncased-finetuned-squad
|
MMVos
| 2022-07-18T12:16:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-18T09:52:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1814 | 1.0 | 8235 | 1.2488 |
| 0.9078 | 2.0 | 16470 | 1.3127 |
| 0.7439 | 3.0 | 24705 | 1.4214 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pritoms/opt-350m-finetuned-stack
|
pritoms
| 2022-07-18T11:14:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-18T10:53:56Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-350m-finetuned-stack
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-finetuned-stack
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
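A hedged usage sketch (the prompt format and generation length are assumptions, since the fine-tuning data is not documented here):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/opt-350m-finetuned-stack")
prompt = "Question: How do I reverse a list in Python?\nAnswer:"
print(generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"])
```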
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
julmarti/ppo-LunarLander-v2
|
julmarti
| 2022-07-18T11:06:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-18T11:06:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 246.73 +/- 23.48
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename below is assumed; adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub("julmarti/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dpovedano/distilbert-base-uncased-finetuned-ner
|
dpovedano
| 2022-07-18T10:13:45Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-18T10:05:44Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dpovedano/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpovedano/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0285
- Validation Loss: 0.0612
- Train Precision: 0.9222
- Train Recall: 0.9358
- Train F1: 0.9289
- Train Accuracy: 0.9834
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.0289 | 0.0612 | 0.9222 | 0.9358 | 0.9289 | 0.9834 | 0 |
| 0.0284 | 0.0612 | 0.9222 | 0.9358 | 0.9289 | 0.9834 | 1 |
| 0.0285 | 0.0612 | 0.9222 | 0.9358 | 0.9289 | 0.9834 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Livingwithmachines/bert_1875_1890
|
Livingwithmachines
| 2022-07-18T09:37:54Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-18T09:35:12Z |
# Neural Language Models for Nineteenth-Century English: bert_1875_1890
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1875 and 1890, comprising ~1.3 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
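A minimal sketch of querying the model with the `fill-mask` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Livingwithmachines/bert_1875_1890")

# BERT-style checkpoints use the [MASK] token.
for prediction in fill_mask("The [MASK] engine transformed the factories of Manchester."):
    print(prediction["token_str"], round(prediction["score"], 3))
```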
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
Livingwithmachines/bert_1760_1900
|
Livingwithmachines
| 2022-07-18T09:30:32Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-18T09:28:01Z |
# Neural Language Models for Nineteenth-Century English: bert_1760_1900
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1760 and 1900, comprising ~5.1 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
Livingwithmachines/bert_1760_1850
|
Livingwithmachines
| 2022-07-18T09:27:11Z | 67 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-18T09:22:18Z |
# Neural Language Models for Nineteenth-Century English: bert_1760_1850
## Introduction
BERT model trained on a large historical dataset of books in English, published between 1760 and 1850, comprising ~1.3 billion tokens.
- Data paper: http://doi.org/10.5334/johd.48
- Github repository: https://github.com/Living-with-machines/histLM
## License
The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode.
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
## Dataset creators
Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
|
rsuwaileh/IDRISI-LMR-HD-TB-partition
|
rsuwaileh
| 2022-07-18T09:17:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-11T20:32:05Z |
This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained using Hurricane Dorian 2019 event (only the training data is used for training) from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-based LMR mode and using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition/)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
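A hedged usage sketch with the `token-classification` pipeline; since the model was trained on BILOU tags, the built-in aggregation may need adjustment, so treat this only as a starting point.
```python
from transformers import pipeline

lmr = pipeline(
    "token-classification",
    model="rsuwaileh/IDRISI-LMR-HD-TB-partition",
    aggregation_strategy="simple",  # may need tweaking for the BILOU scheme
)

for entity in lmr("Flooding reported in Freeport and Marsh Harbour after Hurricane Dorian."):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```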
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
rsuwaileh/IDRISI-LMR-HD-TL
|
rsuwaileh
| 2022-07-18T09:16:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-11T20:30:24Z |
This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained using Hurricane Dorian 2019 event (training, development, and test data are used for training) from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-less LMR mode and using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
rsuwaileh/IDRISI-LMR-HD-TL-partition
|
rsuwaileh
| 2022-07-18T09:16:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-11T20:30:51Z |
This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained using Hurricane Dorian 2019 event (training data is used for training) from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-less LMR mode and using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Jimmie/identify-this-insect
|
Jimmie
| 2022-07-18T07:17:00Z | 0 | 3 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-07-17T14:06:07Z |
---
tags:
- fastai
---
# Identify This Insect
## Model description
This is a model used to differentiate between three types of insects:
- Millipede
- Centipede
- Caterpillar
It was created as an exercise in building a deep learning project end to end.
The model is a pretrained `convnext_tiny_in22k` from the [timm library](https://github.com/rwightman/pytorch-image-models) fine-tuned on the new dataset.
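A minimal loading sketch via the `huggingface_hub` fastai integration (assumes `fastai` is installed and that the exported learner is stored in this repo; the image path is a placeholder):
```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("Jimmie/identify-this-insect")

# Predict the class of a local image (placeholder path).
label, label_idx, probabilities = learner.predict("my_insect.jpg")
print(label, float(probabilities[label_idx]))
```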
## Intended uses & limitations
This model was trained on roughly 150 pictures of each category and performed well. However, it was not rigorously tested, so it may perform badly on some edge-case images and may contain bias. For example, during training I noticed that most images of caterpillars were next to leaves and vegetation, so the model may have learned to associate that environment with a caterpillar.
If you notice any weird behavior, leave a comment on the `Community Tab`.
## Training and evaluation data
I scraped the internet for pictures of the three categories to train this model, using DuckDuckGo image search.
To learn how the model was trained, read [this notebook](https://github.com/jimmiemunyi/deeplearning-experiments/blob/main/notebooks/Centipede_vs_Millipede_vs_Caterpillar.ipynb).
|
namwoo/distilbert-base-uncased-finetuned-ner
|
namwoo
| 2022-07-18T00:38:09Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-18T00:35:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: namwoo/distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# namwoo/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0339
- Validation Loss: 0.0623
- Train Precision: 0.9239
- Train Recall: 0.9335
- Train F1: 0.9287
- Train Accuracy: 0.9829
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1982 | 0.0715 | 0.9040 | 0.9218 | 0.9128 | 0.9799 | 0 |
| 0.0537 | 0.0618 | 0.9202 | 0.9305 | 0.9254 | 0.9827 | 1 |
| 0.0339 | 0.0623 | 0.9239 | 0.9335 | 0.9287 | 0.9829 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pyronear/mobilenet_v3_small
|
pyronear
| 2022-07-17T23:48:39Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:pyronear/openfire",
"arxiv:1905.02244",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-13T23:53:41Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# MobileNet V3 - Small model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The MobileNet V3 architecture was introduced in [this paper](https://arxiv.org/pdf/1905.02244.pdf).
## Model description
The core idea of the author is to simplify the final stage, while using SiLU as activations and making Squeeze-and-Excite blocks larger.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/mobilenet_v3_small").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-1905-02244,
author = {Andrew Howard and
Mark Sandler and
Grace Chu and
Liang{-}Chieh Chen and
Bo Chen and
Mingxing Tan and
Weijun Wang and
Yukun Zhu and
Ruoming Pang and
Vijay Vasudevan and
Quoc V. Le and
Hartwig Adam},
title = {Searching for MobileNetV3},
journal = {CoRR},
volume = {abs/1905.02244},
year = {2019},
url = {http://arxiv.org/abs/1905.02244},
eprinttype = {arXiv},
eprint = {1905.02244},
timestamp = {Thu, 27 May 2021 16:20:51 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
```
|
pyronear/resnet34
|
pyronear
| 2022-07-17T23:48:22Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:pyronear/openfire",
"arxiv:1512.03385",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-17T21:07:12Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ResNet-34 model
Pretrained on a dataset for wildfire binary classification (soon to be shared).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/resnet34").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
```
|
pyronear/resnet18
|
pyronear
| 2022-07-17T23:48:06Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:pyronear/openfire",
"arxiv:1512.03385",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-17T21:06:58Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ResNet-18 model
Pretrained on a dataset for wildfire binary classification (soon to be shared).
## Model description
The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/resnet18").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
```
|
alanwang8/default-longformer-base-4096-finetuned-cola
|
alanwang8
| 2022-07-17T23:19:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-11T17:58:47Z |
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: longformer-base-4096-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-finetuned-cola
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7005
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.7005 | 0.0 |
| 0.6995 | 2.0 | 536 | 0.6960 | -0.0043 |
| 0.6995 | 3.0 | 804 | 0.6976 | -0.0057 |
| 0.6962 | 4.0 | 1072 | 0.6983 | -0.0123 |
| 0.6962 | 5.0 | 1340 | 0.6977 | -0.0529 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
crdealme/q-FrozenLake-v1-4x4-noSlippery
|
crdealme
| 2022-07-17T18:09:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-17T18:09:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are assumed to be helpers from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="crdealme/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese
|
Edresson
| 2022-07-17T17:39:10Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 33.96
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# `chars_to_ignore_regex` is assumed to be defined as in the original evaluation script.
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
# `map_to_pred` and the `wer` metric are assumed to be defined as in the paper's evaluation script.
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian
|
Edresson
| 2022-07-17T17:37:45Z | 20 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"Russian-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- Russian-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 19.46
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned in Russian using the Common Voice 7.0, MAILABS plus data augmentation
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using Common Voice 7.0 and M-AILABS, plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-Common_Voice_plus_TTS-Dataset_plus_Data_Augmentation-russian")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# `chars_to_ignore_regex` is assumed to be defined as in the original evaluation script.
dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
# `map_to_pred` and the `wer` metric are assumed to be defined as in the paper's evaluation script.
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-portuguese
|
Edresson
| 2022-07-17T17:37:08Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"arxiv:2204.00618",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-17T17:18:45Z |
---
language: pt
datasets:
- Common Voice
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test Common Voice 7.0 WER
type: wer
value: 63.90
---
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset (TTS-Portuguese Corpus).
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# `chars_to_ignore_regex` is assumed to be defined as in the original evaluation script.
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
```python
# `map_to_pred` and the `wer` metric are assumed to be defined as in the paper's evaluation script.
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
domenicrosati/pegasus-xsum-finetuned-paws
|
domenicrosati
| 2022-07-17T17:20:35Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"paraphrasing",
"generated_from_trainer",
"dataset:paws",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-17T16:58:16Z |
---
tags:
- paraphrasing
- generated_from_trainer
datasets:
- paws
metrics:
- rouge
model-index:
- name: pegasus-xsum-finetuned-paws
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: paws
type: paws
args: labeled_final
metrics:
- name: Rouge1
type: rouge
value: 92.4371
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-finetuned-paws
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the paws dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1199
- Rouge1: 92.4371
- Rouge2: 75.4061
- Rougel: 84.1519
- Rougelsum: 84.1958
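A short paraphrasing sketch with the `text2text-generation` pipeline (repo id from this card; generation settings are illustrative):
```python
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="domenicrosati/pegasus-xsum-finetuned-paws")

outputs = paraphraser(
    "The quick brown fox jumps over the lazy dog.",
    num_beams=5,
    num_return_sequences=3,
    max_length=60,
)
for candidate in outputs:
    print(candidate["generated_text"])
```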
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.1481 | 1.46 | 1000 | 2.0112 | 93.7727 | 73.3021 | 84.2963 | 84.2506 |
| 2.0113 | 2.93 | 2000 | 2.0579 | 93.813 | 73.4119 | 84.3674 | 84.2693 |
| 2.054 | 4.39 | 3000 | 2.0890 | 93.3926 | 73.3727 | 84.2814 | 84.1649 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
lkm2835/distilbert-imdb
|
lkm2835
| 2022-07-17T14:47:59Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-28T04:29:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
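A minimal inference sketch (repo id from this card; the labels may appear as the generic `LABEL_0`/`LABEL_1` if the config does not map them to negative/positive):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lkm2835/distilbert-imdb")

print(classifier("A beautifully shot film with a script that never quite lands."))
```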
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 391 | 0.1849 | 0.9281 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ranrinat/distilbert-base-uncased-finetuned-emotion
|
ranrinat
| 2022-07-17T14:28:45Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-17T12:46:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246080819022496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.2994 | 0.9095 | 0.9072 |
| 0.2424 | 2.0 | 500 | 0.2158 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|