modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 06:30:45) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 06:30:39) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
thammarat-th/distilbert-base-uncased-finetuned-imdb
|
thammarat-th
| 2022-08-31T04:46:34Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:01:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2591
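For a quick check, the checkpoint can be loaded with the `fill-mask` pipeline (a minimal sketch, not part of the original card; the example sentence is just an illustration):
```python
from transformers import pipeline

# load the fine-tuned masked language model
fill_mask = pipeline(
    "fill-mask",
    model="thammarat-th/distilbert-base-uncased-finetuned-imdb",
)

# any sentence containing a [MASK] token works
print(fill_mask("This movie was a [MASK] experience."))
```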
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4216 | 1.0 | 782 | 2.2803 |
| 2.3719 | 2.0 | 1564 | 2.2577 |
| 2.3407 | 3.0 | 2346 | 2.2320 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ayameRushia/wav2vec2-large-xls-r-300m-el
|
ayameRushia
| 2022-08-31T04:43:27Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"el",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- el
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- robust-speech-event
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-el
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: el
metrics:
- name: Test WER using LM
type: wer
value: 20.9
- name: Test CER using LM
type: cer
value: 6.0466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-el
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3218
- Wer: 0.3095
## Training and evaluation data
Evaluation is conducted in a notebook; see "notebook_evaluation_wav2vec2_el.ipynb" in this repo.
Test results without LM:
- WER = 31.1294 %
- CER = 7.9509 %
Test results with LM:
- WER = 20.7340 %
- CER = 6.0466 %
How to use eval.py:
```
huggingface-cli login  # log in to Hugging Face to get an auth token for accessing Common Voice v8
# running with LM
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test
# running without LM (greedy decoding)
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-el --dataset mozilla-foundation/common_voice_8_0 --config el --split test --greedy
```
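For a quick transcription test without eval.py, the checkpoint can also be loaded with the `automatic-speech-recognition` pipeline (a minimal sketch; `sample.wav` is a placeholder for any 16 kHz Greek recording, and decoding with the bundled LM may additionally require `pyctcdecode` and `kenlm`):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ayameRushia/wav2vec2-large-xls-r-300m-el",
)

# "sample.wav" is a placeholder path; use any 16 kHz Greek audio file
print(asr("sample.wav")["text"])
```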
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 80.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.3683 | 8.77 | 500 | 3.1280 | 1.0 |
| 1.9915 | 17.54 | 1000 | 0.6600 | 0.6444 |
| 0.6565 | 26.32 | 1500 | 0.4208 | 0.4486 |
| 0.4484 | 35.09 | 2000 | 0.3885 | 0.4006 |
| 0.3573 | 43.86 | 2500 | 0.3548 | 0.3626 |
| 0.3063 | 52.63 | 3000 | 0.3375 | 0.3430 |
| 0.2751 | 61.4 | 3500 | 0.3359 | 0.3241 |
| 0.2511 | 70.18 | 4000 | 0.3222 | 0.3108 |
| 0.2361 | 78.95 | 4500 | 0.3205 | 0.3084 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ayameRushia/wav2vec2-large-xls-r-300m-mn
|
ayameRushia
| 2022-08-31T04:43:06Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"mn",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- mn
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-mn
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: mn
metrics:
- name: Test WER using LM
type: wer
value: 31.3919
- name: Test CER using LM
type: cer
value: 10.2565
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: mn
metrics:
- name: Test WER
type: wer
value: 65.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: mn
metrics:
- name: Test WER
type: wer
value: 63.09
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mn
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5502
- Wer: 0.4042
## Training and evaluation data
Evaluation is conducted in a notebook; see "notebook_evaluation_wav2vec2_mn.ipynb" in this repo.
Test results without LM:
- WER = 58.2171 %
- CER = 16.0670 %
Test results with LM:
- WER = 31.3919 %
- CER = 10.2565 %
How to use eval.py:
```
huggingface-cli login  # log in to Hugging Face to get an auth token for accessing Common Voice v8
# running with LM
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test
# running without LM (greedy decoding)
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test --greedy
```
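For a quick local test without eval.py, the model can also be loaded directly (a minimal sketch using greedy, no-LM decoding; `sample.wav` is a placeholder and `librosa` is assumed to be installed):
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ayameRushia/wav2vec2-large-xls-r-300m-mn"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; any 16 kHz Mongolian recording works
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy decoding (corresponds to the "without LM" numbers above)
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```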
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.35 | 400 | 0.9380 | 0.7902 |
| 3.2674 | 12.7 | 800 | 0.5794 | 0.5309 |
| 0.7531 | 19.05 | 1200 | 0.5749 | 0.4815 |
| 0.5382 | 25.4 | 1600 | 0.5530 | 0.4447 |
| 0.4293 | 31.75 | 2000 | 0.5709 | 0.4237 |
| 0.4293 | 38.1 | 2400 | 0.5476 | 0.4059 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Prang9/distilbert-base-uncased-finetuned-imdb
|
Prang9
| 2022-08-31T04:37:28Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:30:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Pawaret717/distilbert-base-uncased-finetuned-imdb
|
Pawaret717
| 2022-08-31T04:15:16Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:04:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
earthanan/distilbert-base-uncased-finetuned-imdb
|
earthanan
| 2022-08-31T04:13:43Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:05:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mooface/xlm-roberta-base-finetuned-panx-de
|
mooface
| 2022-08-31T02:07:15Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-31T01:43:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
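For a quick check, the tagger can be loaded with the `token-classification` pipeline (a minimal sketch, not part of the original card; the German sentence is just an illustrative input):
```python
from transformers import pipeline

# group sub-word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="mooface/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```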
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DylanJHJ/monot5m-large-msmarco-100k
|
DylanJHJ
| 2022-08-31T01:20:45Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T02:09:46Z |
Check our SIGIR 2021 short paper: https://dl.acm.org/doi/10.1145/3404835.3463048
This checkpoint is a variant of monoT5 (a T5 pointwise re-ranking model).
Specifically, we fuse the "P2Q (i.e. doc2query)" and "Rank (i.e. passage ranking)" tasks to learn both a **discriminative** view (Rank) and a **generative** view (P2Q).
We found that, at a specific **mixing ratio** of these two tasks, passage re-ranking effectiveness improves to be on par with monoT5-3B models.
Hence, you can perform both tasks with this checkpoint using the following input formats:
- P2Q: Document: *\<here is a document or a passage\>* Translate Document to Query:
- Rank: Query: *\<here is a query\>* Document: *\<here is a document or a passage\>* Relevant:
The corresponding outputs look like:
- P2Q: *\<a relevant query for the given text\>*
- Rank: *true* or *false*
```
Note that we usually use the logit values of the *true*/*false* tokens from the T5 re-ranker as the query-passage relevance scores.
Note that the above tokens are case-sensitive.
```
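As a hedged illustration (not part of the original card), the Rank prompt can be scored by comparing the first-step logits of the *true*/*false* tokens; the query and passage below are made-up examples:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "DylanJHJ/monot5m-large-msmarco-100k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun."
prompt = f"Query: {query} Document: {passage} Relevant:"

inputs = tokenizer(prompt, return_tensors="pt")
# run one decoder step and read the logits of the first generated token
decoder_input_ids = torch.full(
    (1, 1), model.config.decoder_start_token_id, dtype=torch.long
)
with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]
# softmax over the two candidate tokens gives a relevance score in [0, 1]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.4f}")
```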
|
npc-engine/t5-small-mse-summarization
|
npc-engine
| 2022-08-30T23:43:58Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T21:24:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-mse-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mse-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1108
- Rouge1: 43.1145
- Rouge2: 23.2262
- Rougel: 37.218
- Rougelsum: 41.0897
- Bleurt: -0.8051
- Gen Len: 18.549
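For a quick check, the checkpoint can be loaded with the `summarization` pipeline (a minimal sketch, not part of the original card; replace the placeholder text with your own input):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="npc-engine/t5-small-mse-summarization",
)

# placeholder input; the model was fine-tuned from t5-small, so keep inputs short
text = "Replace this placeholder with the passage you want to summarize."
print(summarizer(text, max_length=48, min_length=8)[0]["summary_text"])
```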
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleurt | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|
| 1.5207 | 1.0 | 267 | 1.2922 | 38.8738 | 19.1958 | 32.8458 | 36.9993 | -0.9061 | 18.668 |
| 1.363 | 2.0 | 534 | 1.2340 | 39.8466 | 20.0452 | 33.9101 | 37.7708 | -0.8925 | 18.657 |
| 1.3062 | 3.0 | 801 | 1.2057 | 40.5536 | 20.8249 | 34.5221 | 38.4648 | -0.8625 | 18.602 |
| 1.272 | 4.0 | 1068 | 1.1782 | 41.0078 | 21.2186 | 35.0101 | 38.9186 | -0.8595 | 18.602 |
| 1.2312 | 5.0 | 1335 | 1.1688 | 41.521 | 21.7934 | 35.704 | 39.4718 | -0.842 | 18.486 |
| 1.2052 | 6.0 | 1602 | 1.1557 | 42.1037 | 22.4291 | 36.3554 | 40.1124 | -0.8432 | 18.533 |
| 1.1842 | 7.0 | 1869 | 1.1440 | 42.4438 | 22.6456 | 36.5729 | 40.3134 | -0.8288 | 18.553 |
| 1.1643 | 8.0 | 2136 | 1.1408 | 42.245 | 22.4859 | 36.3637 | 40.2193 | -0.8284 | 18.622 |
| 1.1495 | 9.0 | 2403 | 1.1320 | 42.5362 | 22.5034 | 36.5092 | 40.4552 | -0.8211 | 18.57 |
| 1.1368 | 10.0 | 2670 | 1.1301 | 42.5159 | 22.462 | 36.4646 | 40.3968 | -0.819 | 18.538 |
| 1.1203 | 11.0 | 2937 | 1.1243 | 42.2803 | 22.5963 | 36.3454 | 40.2987 | -0.8242 | 18.522 |
| 1.1116 | 12.0 | 3204 | 1.1197 | 42.8078 | 22.8409 | 36.7344 | 40.8186 | -0.821 | 18.565 |
| 1.099 | 13.0 | 3471 | 1.1193 | 42.7423 | 22.9397 | 36.7894 | 40.7298 | -0.8125 | 18.552 |
| 1.0976 | 14.0 | 3738 | 1.1176 | 42.9002 | 23.2394 | 37.0215 | 40.9211 | -0.8156 | 18.568 |
| 1.0816 | 15.0 | 4005 | 1.1133 | 43.0007 | 23.3093 | 37.2037 | 40.9719 | -0.8059 | 18.519 |
| 1.084 | 16.0 | 4272 | 1.1146 | 42.9053 | 23.2391 | 37.0542 | 40.8826 | -0.8104 | 18.533 |
| 1.0755 | 17.0 | 4539 | 1.1124 | 43.0429 | 23.2773 | 37.1389 | 41.0755 | -0.8086 | 18.544 |
| 1.0748 | 18.0 | 4806 | 1.1121 | 43.2243 | 23.4179 | 37.2039 | 41.143 | -0.8048 | 18.548 |
| 1.072 | 19.0 | 5073 | 1.1106 | 43.1776 | 23.3061 | 37.3105 | 41.1392 | -0.8039 | 18.549 |
| 1.0671 | 20.0 | 5340 | 1.1108 | 43.1145 | 23.2262 | 37.218 | 41.0897 | -0.8051 | 18.549 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
microsoft/bloom-deepspeed-inference-int8
|
microsoft
| 2022-08-30T23:01:17Z | 7 | 28 |
transformers
|
[
"transformers",
"bloom",
"feature-extraction",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-08-18T18:26:43Z |
---
license: bigscience-bloom-rail-1.0
---
This is a custom INT8 version of the original [BLOOM weights](https://huggingface.co/bigscience/bloom) to make it fast to use with the [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) engine which uses Tensor Parallelism. In this repo the tensors are split into 8 shards to target 8 GPUs.
The full BLOOM documentation is [here](https://huggingface.co/bigscience/bloom).
To use the weights in this repo, you can adapt the scripts found [here](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/scripts/inference) to your needs (XXX: they are going to migrate soon to the HF Transformers code base, so the link will need to be updated once moved).
|
ruse40folly/distilbert-base-uncased-finetuned-emotion
|
ruse40folly
| 2022-08-30T22:15:45Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T21:58:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235310384339321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.9235
- F1: 0.9235
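For a quick check, the classifier can be loaded with the `text-classification` pipeline (a minimal sketch, not part of the original card; the input sentence is just an illustration):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ruse40folly/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you this weekend!"))
```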
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8521 | 1.0 | 250 | 0.3251 | 0.9085 | 0.9063 |
| 0.2489 | 2.0 | 500 | 0.2236 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/joped
|
huggingtweets
| 2022-08-30T21:55:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-30T21:55:18Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/916403716210569216/C0_SAn42_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">joped</div>
<div style="text-align: center; font-size: 14px;">@joped</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from joped.
| Data | joped |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 505 |
| Short tweets | 117 |
| Tweets kept | 2594 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/116whcxp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joped's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24oibz3y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24oibz3y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/joped')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nawage/ddpm-butterflies-128
|
nawage
| 2022-08-30T20:43:21Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-30T19:29:51Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
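A minimal sketch of such a snippet (an assumption based on the standard `DDPMPipeline` loading path, not verified against this repo):
```python
from diffusers import DDPMPipeline

# load the unconditional DDPM and sample one 128x128 butterfly image
pipeline = DDPMPipeline.from_pretrained("nawage/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```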
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/nawage/ddpm-butterflies-128/tensorboard?#scalars)
|
RussianNLP/ruRoBERTa-large-rucola
|
RussianNLP
| 2022-08-30T20:23:10Z | 586 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T19:54:51Z |
---
language: ru
license: apache-2.0
tags:
- transformers
thumbnail: "https://github.com/RussianNLP/RuCoLA/blob/main/logo.png"
widget:
- text: "Он решил ту или иную сложную задачу."
---
This is a finetuned version of [RuRoBERTa-large](https://huggingface.co/sberbank-ai/ruRoberta-large) for the task of linguistic acceptability classification on the [RuCoLA](https://rucola-benchmark.com/) benchmark.
The hyperparameters used for finetuning are as follows:
* 5 training epochs (with early stopping based on validation MCC)
* Peak learning rate: 1e-5, linear warmup for 10% of total training time
* Weight decay: 1e-4
* Batch size: 32
* Random seed: 5
* Optimizer: [torch.optim.AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)
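For a quick check, the classifier can be loaded with the `text-classification` pipeline (a minimal sketch; the example sentence is the widget text from this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="RussianNLP/ruRoBERTa-large-rucola",
)

# widget example from this card ("He solved this or that difficult problem.")
print(classifier("Он решил ту или иную сложную задачу."))
```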
|
Asma-Kehila/finetuning-sentiment-model-3000-samples
|
Asma-Kehila
| 2022-08-30T19:57:49Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-09T13:16:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3175
- Accuracy: 0.8733
- F1: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
vendorabc/modeltest
|
vendorabc
| 2022-08-30T19:01:03Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
] |
tabular-classification
| 2022-08-30T19:00:59Z |
---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
area error:
- 30.29
- 96.05
- 48.31
compactness error:
- 0.01911
- 0.01652
- 0.01484
concave points error:
- 0.01037
- 0.0137
- 0.01093
concavity error:
- 0.02701
- 0.02269
- 0.02813
fractal dimension error:
- 0.003586
- 0.001698
- 0.002461
mean area:
- 481.9
- 1130.0
- 748.9
mean compactness:
- 0.1058
- 0.1029
- 0.1223
mean concave points:
- 0.03821
- 0.07951
- 0.08087
mean concavity:
- 0.08005
- 0.108
- 0.1466
mean fractal dimension:
- 0.06373
- 0.05461
- 0.05796
mean perimeter:
- 81.09
- 123.6
- 101.7
mean radius:
- 12.47
- 18.94
- 15.46
mean smoothness:
- 0.09965
- 0.09009
- 0.1092
mean symmetry:
- 0.1925
- 0.1582
- 0.1931
mean texture:
- 18.6
- 21.31
- 19.48
perimeter error:
- 2.497
- 5.486
- 3.094
radius error:
- 0.3961
- 0.7888
- 0.4743
smoothness error:
- 0.006953
- 0.004444
- 0.00624
symmetry error:
- 0.01782
- 0.01386
- 0.01397
texture error:
- 1.044
- 0.7975
- 0.7859
worst area:
- 677.9
- 1866.0
- 1156.0
worst compactness:
- 0.2378
- 0.2336
- 0.2394
worst concave points:
- 0.1015
- 0.1789
- 0.1514
worst concavity:
- 0.2671
- 0.2687
- 0.3791
worst fractal dimension:
- 0.0875
- 0.06589
- 0.08019
worst perimeter:
- 96.05
- 165.9
- 124.9
worst radius:
- 14.97
- 24.86
- 19.26
worst smoothness:
- 0.1426
- 0.1193
- 0.1546
worst symmetry:
- 0.3014
- 0.2551
- 0.2837
worst texture:
- 24.64
- 26.58
- 26.0
---
# Model description
This is a HistGradientBoostingClassifier model trained on the breast cancer dataset. It is trained with halving grid search cross-validation, with parameter grids on max_leaf_nodes and max_depth.
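A minimal sketch of that setup, assuming the scikit-learn breast cancer dataset (the exact train/test split behind the reported scores is not specified here):
```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
# the split ratio is an assumption chosen to match the reported test support of 171
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# halving grid search over max_leaf_nodes and max_depth, as described above
search = HalvingGridSearchCV(
    estimator=HistGradientBoostingClassifier(),
    param_grid={"max_leaf_nodes": [5, 10, 15], "max_depth": [2, 5, 10]},
    n_jobs=-1,
    random_state=42,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```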
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------------------|----------------------------------------------------------|
| aggressive_elimination | False |
| cv | 5 |
| error_score | nan |
| estimator__categorical_features | |
| estimator__early_stopping | auto |
| estimator__l2_regularization | 0.0 |
| estimator__learning_rate | 0.1 |
| estimator__loss | auto |
| estimator__max_bins | 255 |
| estimator__max_depth | |
| estimator__max_iter | 100 |
| estimator__max_leaf_nodes | 31 |
| estimator__min_samples_leaf | 20 |
| estimator__monotonic_cst | |
| estimator__n_iter_no_change | 10 |
| estimator__random_state | |
| estimator__scoring | loss |
| estimator__tol | 1e-07 |
| estimator__validation_fraction | 0.1 |
| estimator__verbose | 0 |
| estimator__warm_start | False |
| estimator | HistGradientBoostingClassifier() |
| factor | 3 |
| max_resources | auto |
| min_resources | exhaust |
| n_jobs | -1 |
| param_grid | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]} |
| random_state | 42 |
| refit | True |
| resource | n_samples |
| return_train_score | True |
| scoring | |
| verbose | 0 |
</details>
### Model Plot
The model plot is below.
The interactive model plot (an HTML/CSS widget) does not render here; its text fallback is the fitted search object: `HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1, param_grid={'max_depth': [2, 5, 10], 'max_leaf_nodes': [5, 10, 15]}, random_state=42)`.
## Evaluation Results
You can find the details of the evaluation process and the evaluation results below.
| Metric | Value |
|----------|----------|
| accuracy | 0.959064 |
| f1 score | 0.959064 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle

# pkl_filename should point to the model pickle downloaded from this repo
with open(pkl_filename, "rb") as file:
    clf = pickle.load(file)
```
</details>
# Model Card Authors
This model card is written by the following authors:
skops_user
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
# Additional Content
## Confusion matrix

## Hyperparameter search results
<details>
<summary> Click to expand </summary>
| iter | n_resources | mean_fit_time | std_fit_time | mean_score_time | std_score_time | param_max_depth | param_max_leaf_nodes | params | split0_test_score | split1_test_score | split2_test_score | split3_test_score | split4_test_score | mean_test_score | std_test_score | rank_test_score | split0_train_score | split1_train_score | split2_train_score | split3_train_score | split4_train_score | mean_train_score | std_train_score |
|--------|---------------|-----------------|----------------|-------------------|------------------|-------------------|------------------------|-----------------------------------------|---------------------|---------------------|---------------------|---------------------|---------------------|-------------------|------------------|-------------------|----------------------|----------------------|----------------------|----------------------|----------------------|--------------------|-------------------|
| 0 | 44 | 0.0498069 | 0.0107112 | 0.0121156 | 0.0061838 | 2 | 5 | {'max_depth': 2, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0492636 | 0.0187271 | 0.00738611 | 0.00245441 | 2 | 10 | {'max_depth': 2, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0572055 | 0.0153176 | 0.0111395 | 0.0010297 | 2 | 15 | {'max_depth': 2, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0498482 | 0.0177091 | 0.00857358 | 0.00415935 | 5 | 5 | {'max_depth': 5, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0500658 | 0.00992094 | 0.00998321 | 0.00527031 | 5 | 10 | {'max_depth': 5, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0525903 | 0.0151616 | 0.00874681 | 0.00462998 | 5 | 15 | {'max_depth': 5, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0512018 | 0.0130152 | 0.00881834 | 0.00500514 | 10 | 5 | {'max_depth': 10, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0566921 | 0.0186051 | 0.00513492 | 0.000498488 | 10 | 10 | {'max_depth': 10, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.060587 | 0.04041 | 0.00987453 | 0.00529624 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 1 | 132 | 0.232459 | 0.0479878 | 0.0145514 | 0.00856422 | 10 | 5 | {'max_depth': 10, 'max_leaf_nodes': 5} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 132 | 0.272297 | 0.0228833 | 0.011561 | 0.0068272 | 10 | 10 | {'max_depth': 10, 'max_leaf_nodes': 10} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 132 | 0.239161 | 0.0330412 | 0.0116591 | 0.003554 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 2 | 396 | 0.920334 | 0.18198 | 0.0166654 | 0.00776263 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.962025 | 0.911392 | 0.987342 | 0.974359 | 0.935897 | 0.954203 | 0.0273257 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
</details>
## Classification report
<details>
<summary> Click to expand </summary>
| index | precision | recall | f1-score | support |
|--------------|-------------|----------|------------|-----------|
| malignant | 0.951613 | 0.936508 | 0.944 | 63 |
| benign | 0.963303 | 0.972222 | 0.967742 | 108 |
| macro avg | 0.957458 | 0.954365 | 0.955871 | 171 |
| weighted avg | 0.958996 | 0.959064 | 0.958995 | 171 |
</details>
|
agustina/museo
|
agustina
| 2022-08-30T18:25:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-30T18:24:37Z |
A modern butterfly and insect museum, with white, illuminated furniture.
|
VioletaMG/ddpm-butterflies-128_50epochs
|
VioletaMG
| 2022-08-30T18:09:04Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-30T17:38:43Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128_50epochs
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/VioletaMG/ddpm-butterflies-128_50epochs/tensorboard?#scalars)
|
epsil/Health_Psychology_Analysis
|
epsil
| 2022-08-30T17:49:10Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-08-30T15:49:29Z |
### TO BE ADDED
widget:
- text: "I am going through lot of stress"
|
Laksitha/autotrain-enhanced-tosdr-summariser-1339851272
|
Laksitha
| 2022-08-30T16:40:01Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Laksitha/autotrain-data-enhanced-tosdr-summariser",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-08-30T16:38:01Z |
---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Laksitha/autotrain-data-enhanced-tosdr-summariser
co2_eq_emissions:
emissions: 0.011960118277424782
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1339851272
- CO2 Emissions (in grams): 0.0120
## Validation Metrics
- Loss: 2.416
- Rouge1: 34.945
- Rouge2: 12.533
- RougeL: 19.876
- RougeLsum: 31.821
- Gen Len: 92.917
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Laksitha/autotrain-enhanced-tosdr-summariser-1339851272
```
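Alternatively, a minimal local sketch with the 🤗 Transformers `summarization` pipeline (the input below is a placeholder; substitute the terms-of-service text you want summarized):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Laksitha/autotrain-enhanced-tosdr-summariser-1339851272",
)

text = "Replace this placeholder with the terms-of-service text to summarize."
print(summarizer(text)[0]["summary_text"])
```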
|
yasuaki0406/distilbert-base-uncased-finetuned-emotion
|
yasuaki0406
| 2022-08-30T16:01:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T15:51:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9244242594868723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.9245
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8144 | 1.0 | 250 | 0.3129 | 0.9055 | 0.9027 |
| 0.2457 | 2.0 | 500 | 0.2123 | 0.9245 | 0.9244 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
maxpe/twitter-roberta-base-jun2022_sem_eval_2018_task_1
|
maxpe
| 2022-08-30T15:33:52Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"doi:10.57967/hf/0033",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T14:26:53Z |
# Twitter-roBERTa-base-jun2022_sem_eval_2018_task1
This model was trained on ~7000 tweets in English annotated for 11 emotion categories in [SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification](https://competitions.codalab.org/competitions/17751) (also available on the [Hugging Face Dataset Hub](https://huggingface.co/datasets/sem_eval_2018_task_1)).
The underlying model is a RoBERTa-base model trained on 132.26M tweets until the end of June 2022. For more details, check out the [model page](https://huggingface.co/cardiffnlp/twitter-roberta-base-jun2022).
To quickly test it locally, use a pipeline:
```python
from transformers import pipeline
pipe = pipeline("text-classification",model="maxpe/twitter-roberta-base-jun2022_sem_eval_2018_task_1")
pipe("I couldn't see any seafood for a year after I went to that restaurant that they send all the tourists to!",top_k=11)
```
|
muhtasham/bert-small-finer-longer
|
muhtasham
| 2022-08-30T14:26:44Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-29T12:21:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finer-longer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finer-longer
This model is a fine-tuned version of [muhtasham/bert-small-finer](https://huggingface.co/muhtasham/bert-small-finer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.49 | 500 | 1.6683 |
| 1.5941 | 0.97 | 1000 | 1.6569 |
| 1.5941 | 1.46 | 1500 | 1.6436 |
| 1.5605 | 1.94 | 2000 | 1.6173 |
| 1.5605 | 2.43 | 2500 | 1.6073 |
| 1.5297 | 2.91 | 3000 | 1.6001 |
| 1.5297 | 3.4 | 3500 | 1.5815 |
| 1.5022 | 3.89 | 4000 | 1.5756 |
| 1.5022 | 4.37 | 4500 | 1.5568 |
| 1.4753 | 4.86 | 5000 | 1.5458 |
| 1.4753 | 5.34 | 5500 | 1.5399 |
| 1.4537 | 5.83 | 6000 | 1.5273 |
| 1.4537 | 6.32 | 6500 | 1.5192 |
| 1.433 | 6.8 | 7000 | 1.5099 |
| 1.433 | 7.29 | 7500 | 1.5083 |
| 1.4169 | 7.77 | 8000 | 1.4957 |
| 1.4169 | 8.26 | 8500 | 1.4914 |
| 1.3982 | 8.75 | 9000 | 1.4859 |
| 1.3982 | 9.23 | 9500 | 1.4697 |
| 1.3877 | 9.72 | 10000 | 1.4711 |
| 1.3877 | 10.2 | 10500 | 1.4608 |
| 1.3729 | 10.69 | 11000 | 1.4583 |
| 1.3729 | 11.18 | 11500 | 1.4513 |
| 1.3627 | 11.66 | 12000 | 1.4498 |
| 1.3627 | 12.15 | 12500 | 1.4396 |
| 1.357 | 12.63 | 13000 | 1.4415 |
| 1.357 | 13.12 | 13500 | 1.4347 |
| 1.3484 | 13.61 | 14000 | 1.4316 |
| 1.3484 | 14.09 | 14500 | 1.4319 |
| 1.3442 | 14.58 | 15000 | 1.4268 |
| 1.3442 | 15.06 | 15500 | 1.4293 |
| 1.3387 | 15.55 | 16000 | 1.4217 |
| 1.3387 | 16.03 | 16500 | 1.4241 |
| 1.3358 | 16.52 | 17000 | 1.4250 |
| 1.3358 | 17.01 | 17500 | 1.4196 |
| 1.3344 | 17.49 | 18000 | 1.4193 |
| 1.3344 | 17.98 | 18500 | 1.4200 |
| 1.3274 | 18.46 | 19000 | 1.4250 |
| 1.3274 | 18.95 | 19500 | 1.4168 |
| 1.3348 | 19.44 | 20000 | 1.4164 |
| 1.3348 | 19.92 | 20500 | 1.4264 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
turhancan97/a2c-AntBulletEnv-v0
|
turhancan97
| 2022-08-30T13:21:48Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T13:20:42Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1619.40 +/- 156.98
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the files in this repository):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="turhancan97/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
adelgalu/wav2vec2-base-klay-demo-google-colab
|
adelgalu
| 2022-08-30T12:48:07Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-30T11:13:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-klay-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-klay-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0060
- Wer: 0.1791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 15.0 | 300 | 2.4020 | 0.9889 |
| 2.4596 | 30.0 | 600 | 1.3773 | 0.9833 |
| 2.4596 | 45.0 | 900 | 0.5241 | 0.7253 |
| 1.1148 | 60.0 | 1200 | 0.2260 | 0.4472 |
| 0.3637 | 75.0 | 1500 | 0.1474 | 0.3682 |
| 0.3637 | 90.0 | 1800 | 0.0742 | 0.2848 |
| 0.1874 | 105.0 | 2100 | 0.0563 | 0.2681 |
| 0.1874 | 120.0 | 2400 | 0.0535 | 0.2436 |
| 0.1273 | 135.0 | 2700 | 0.0335 | 0.2258 |
| 0.0914 | 150.0 | 3000 | 0.0336 | 0.2214 |
| 0.0914 | 165.0 | 3300 | 0.0323 | 0.2136 |
| 0.0733 | 180.0 | 3600 | 0.0225 | 0.2069 |
| 0.0733 | 195.0 | 3900 | 0.0953 | 0.2314 |
| 0.0678 | 210.0 | 4200 | 0.0122 | 0.1902 |
| 0.0428 | 225.0 | 4500 | 0.0104 | 0.1869 |
| 0.0428 | 240.0 | 4800 | 0.0120 | 0.1791 |
| 0.0291 | 255.0 | 5100 | 0.0110 | 0.1835 |
| 0.0291 | 270.0 | 5400 | 0.0062 | 0.1802 |
| 0.0235 | 285.0 | 5700 | 0.0061 | 0.1802 |
| 0.0186 | 300.0 | 6000 | 0.0060 | 0.1791 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
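For a quick test, the checkpoint can be loaded with the automatic-speech-recognition pipeline (a usage sketch, not part of the original card; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="adelgalu/wav2vec2-base-klay-demo-google-colab")
print(asr("path/to/audio.wav"))  # placeholder path to a local audio file
```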
|
jcmc/reinforce-Pixelcopter
|
jcmc
| 2022-08-30T12:36:49Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T12:07:22Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.80 +/- 7.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
huggingbase/xlm-roberta-base-finetuned-panx-all
|
huggingbase
| 2022-08-30T12:29:00Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T11:59:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
jcmc/reinforce-carpole-op
|
jcmc
| 2022-08-30T11:58:30Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T11:56:44Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-carpole-op
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 81.40 +/- 25.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2
|
abdoutony207
| 2022-08-30T11:49:54Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:un_multi",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T10:50:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- un_multi
metrics:
- bleu
model-index:
- name: m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: un_multi
type: un_multi
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 40.8245
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Bleu: 40.8245
- Meteor: 0.4272
- Gen Len: 41.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 5.1584 | 0.5 | 100 | 3.2518 | 30.3723 | 0.3633 | 41.5 |
| 2.1351 | 1.0 | 200 | 0.9929 | 32.9915 | 0.3833 | 41.8225 |
| 0.568 | 1.5 | 300 | 0.4312 | 33.705 | 0.3896 | 42.6225 |
| 0.3749 | 2.0 | 400 | 0.3697 | 36.9316 | 0.4084 | 40.57 |
| 0.2376 | 2.5 | 500 | 0.3587 | 37.6782 | 0.4124 | 41.99 |
| 0.2435 | 3.0 | 600 | 0.3529 | 37.9931 | 0.4128 | 42.02 |
| 0.1706 | 3.5 | 700 | 0.3531 | 39.9972 | 0.4252 | 41.8025 |
| 0.165 | 4.0 | 800 | 0.3514 | 39.3155 | 0.42 | 41.0275 |
| 0.1273 | 4.5 | 900 | 0.3606 | 40.0765 | 0.4234 | 41.6175 |
| 0.1307 | 5.0 | 1000 | 0.3550 | 40.4468 | 0.428 | 41.72 |
| 0.0926 | 5.5 | 1100 | 0.3603 | 40.5454 | 0.4307 | 41.765 |
| 0.1096 | 6.0 | 1200 | 0.3613 | 40.5691 | 0.4298 | 42.31 |
| 0.0826 | 6.5 | 1300 | 0.3642 | 40.8245 | 0.4272 | 41.8075 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
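A translation sketch for this English→Arabic fine-tune (usage is not documented in the card; the generation call follows the standard M2M100 pattern and is an assumption here):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("The committee adopted the resolution without a vote.", return_tensors="pt")
# Force Arabic as the target language
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("ar"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```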
|
huggingbase/xlm-roberta-base-finetuned-panx-it
|
huggingbase
| 2022-08-30T11:42:25Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T11:24:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8124233755619126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingbase/xlm-roberta-base-finetuned-panx-de-fr
|
huggingbase
| 2022-08-30T11:03:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T10:35:53Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Applemoon/bert-finetuned-ner
|
Applemoon
| 2022-08-30T10:49:29Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T10:02:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9512644448166137
- name: Recall
type: recall
value: 0.9559071019858634
- name: F1
type: f1
value: 0.9535801225551919
- name: Accuracy
type: accuracy
value: 0.9921732019781161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0399
- Precision: 0.9513
- Recall: 0.9559
- F1: 0.9536
- Accuracy: 0.9922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0548 | 1.0 | 1756 | 0.0438 | 0.9368 | 0.9411 | 0.9390 | 0.9900 |
| 0.021 | 2.0 | 3512 | 0.0395 | 0.9446 | 0.9519 | 0.9482 | 0.9914 |
| 0.0108 | 3.0 | 5268 | 0.0399 | 0.9513 | 0.9559 | 0.9536 | 0.9922 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
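For a quick check, the model can be used with the token-classification pipeline (a usage sketch, not part of the original card):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="Applemoon/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))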
|
huggingbase/xlm-roberta-base-finetuned-panx-de
|
huggingbase
| 2022-08-30T10:28:31Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T10:03:25Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
igpaub/q-FrozenLake-v1-8x8-noSlippery
|
igpaub
| 2022-08-30T09:15:32Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T09:15:24Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are the helper functions defined in the course notebook
model = load_from_hub(repo_id="igpaub/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
cynthiachan/finetuned-electra-base-10pct
|
cynthiachan
| 2022-08-30T08:01:17Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:cynthiachan/FeedRef_10pct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T07:58:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cynthiachan/FeedRef_10pct
model-index:
- name: training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the cynthiachan/FeedRef_10pct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1520
- Attackid Precision: 1.0
- Attackid Recall: 1.0
- Attackid F1: 1.0
- Attackid Number: 6
- Cve Precision: 1.0
- Cve Recall: 1.0
- Cve F1: 1.0
- Cve Number: 11
- Defenderthreat Precision: 0.0
- Defenderthreat Recall: 0.0
- Defenderthreat F1: 0.0
- Defenderthreat Number: 2
- Domain Precision: 0.6154
- Domain Recall: 0.6957
- Domain F1: 0.6531
- Domain Number: 23
- Email Precision: 0.5
- Email Recall: 0.6667
- Email F1: 0.5714
- Email Number: 3
- Filepath Precision: 0.7010
- Filepath Recall: 0.8242
- Filepath F1: 0.7577
- Filepath Number: 165
- Hostname Precision: 0.9231
- Hostname Recall: 1.0
- Hostname F1: 0.9600
- Hostname Number: 12
- Ipv4 Precision: 0.7143
- Ipv4 Recall: 0.8333
- Ipv4 F1: 0.7692
- Ipv4 Number: 12
- Md5 Precision: 0.6528
- Md5 Recall: 0.9038
- Md5 F1: 0.7581
- Md5 Number: 52
- Sha1 Precision: 0.0
- Sha1 Recall: 0.0
- Sha1 F1: 0.0
- Sha1 Number: 7
- Sha256 Precision: 0.7692
- Sha256 Recall: 0.9091
- Sha256 F1: 0.8333
- Sha256 Number: 44
- Uri Precision: 0.0
- Uri Recall: 0.0
- Uri F1: 0.0
- Uri Number: 1
- Overall Precision: 0.6897
- Overall Recall: 0.8284
- Overall F1: 0.7527
- Overall Accuracy: 0.9589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Attackid Precision | Attackid Recall | Attackid F1 | Attackid Number | Cve Precision | Cve Recall | Cve F1 | Cve Number | Defenderthreat Precision | Defenderthreat Recall | Defenderthreat F1 | Defenderthreat Number | Domain Precision | Domain Recall | Domain F1 | Domain Number | Email Precision | Email Recall | Email F1 | Email Number | Filepath Precision | Filepath Recall | Filepath F1 | Filepath Number | Hostname Precision | Hostname Recall | Hostname F1 | Hostname Number | Ipv4 Precision | Ipv4 Recall | Ipv4 F1 | Ipv4 Number | Md5 Precision | Md5 Recall | Md5 F1 | Md5 Number | Sha1 Precision | Sha1 Recall | Sha1 F1 | Sha1 Number | Sha256 Precision | Sha256 Recall | Sha256 F1 | Sha256 Number | Uri Precision | Uri Recall | Uri F1 | Uri Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------:|:----------:|:------:|:----------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:--------------:|:-----------:|:-------:|:-----------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.5093 | 0.37 | 500 | 0.3512 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.2024 | 0.5091 | 0.2897 | 165 | 0.0 | 0.0 | 0.0 | 12 | 0.0 | 0.0 | 0.0 | 12 | 0.1724 | 0.4808 | 0.2538 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.3797 | 0.6818 | 0.4878 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.1844 | 0.4112 | 0.2546 | 0.9163 |
| 0.2742 | 0.75 | 1000 | 0.2719 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.4444 | 0.5217 | 0.48 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.4211 | 0.5333 | 0.4706 | 165 | 0.1111 | 0.25 | 0.1538 | 12 | 0.5 | 0.8333 | 0.625 | 12 | 0.6290 | 0.75 | 0.6842 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.4444 | 0.8182 | 0.5760 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.4322 | 0.5562 | 0.4864 | 0.9340 |
| 0.2072 | 1.12 | 1500 | 0.2008 | 0.0 | 0.0 | 0.0 | 6 | 0.2308 | 0.2727 | 0.2500 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.6842 | 0.5652 | 0.6190 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.4885 | 0.7758 | 0.5995 | 165 | 0.7857 | 0.9167 | 0.8462 | 12 | 0.75 | 0.75 | 0.75 | 12 | 0.6026 | 0.9038 | 0.7231 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.5970 | 0.9091 | 0.7207 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.5363 | 0.7426 | 0.6228 | 0.9484 |
| 0.1861 | 1.5 | 2000 | 0.2101 | 0.0 | 0.0 | 0.0 | 6 | 0.9091 | 0.9091 | 0.9091 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.5926 | 0.6957 | 0.6400 | 23 | 0.5 | 0.3333 | 0.4 | 3 | 0.6345 | 0.7576 | 0.6906 | 165 | 0.7333 | 0.9167 | 0.8148 | 12 | 0.8182 | 0.75 | 0.7826 | 12 | 0.6618 | 0.8654 | 0.75 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.525 | 0.9545 | 0.6774 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.6181 | 0.7663 | 0.6843 | 0.9495 |
| 0.1888 | 1.87 | 2500 | 0.1689 | 1.0 | 1.0 | 1.0 | 6 | 0.8182 | 0.8182 | 0.8182 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.6818 | 0.6522 | 0.6667 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.5806 | 0.7636 | 0.6597 | 165 | 0.8462 | 0.9167 | 0.8800 | 12 | 0.8182 | 0.75 | 0.7826 | 12 | 0.6486 | 0.9231 | 0.7619 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6667 | 0.8636 | 0.7525 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.6329 | 0.7751 | 0.6968 | 0.9487 |
| 0.1409 | 2.25 | 3000 | 0.1520 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.6154 | 0.6957 | 0.6531 | 23 | 0.5 | 0.6667 | 0.5714 | 3 | 0.7010 | 0.8242 | 0.7577 | 165 | 0.9231 | 1.0 | 0.9600 | 12 | 0.7143 | 0.8333 | 0.7692 | 12 | 0.6528 | 0.9038 | 0.7581 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7692 | 0.9091 | 0.8333 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.6897 | 0.8284 | 0.7527 | 0.9589 |
| 0.1248 | 2.62 | 3500 | 0.1716 | 0.8571 | 1.0 | 0.9231 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.84 | 0.9130 | 0.8750 | 23 | 0.6667 | 0.6667 | 0.6667 | 3 | 0.8155 | 0.8303 | 0.8228 | 165 | 0.8571 | 1.0 | 0.9231 | 12 | 0.75 | 1.0 | 0.8571 | 12 | 0.7031 | 0.8654 | 0.7759 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7593 | 0.9318 | 0.8367 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7928 | 0.8491 | 0.82 | 0.9583 |
| 0.1073 | 3.0 | 4000 | 0.1532 | 0.8571 | 1.0 | 0.9231 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.84 | 0.9130 | 0.8750 | 23 | 0.6667 | 0.6667 | 0.6667 | 3 | 0.7705 | 0.8545 | 0.8103 | 165 | 0.8571 | 1.0 | 0.9231 | 12 | 0.7059 | 1.0 | 0.8276 | 12 | 0.7313 | 0.9423 | 0.8235 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7241 | 0.9545 | 0.8235 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7688 | 0.8757 | 0.8188 | 0.9618 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
HIT-TMG/GlyphBERT
|
HIT-TMG
| 2022-08-30T07:15:12Z | 7 | 5 |
transformers
|
[
"transformers",
"bert",
"fill-mask",
"bert-base-chinese",
"zh",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-24T03:05:34Z |
---
language:
- zh
tags:
- bert-base-chinese
license: afl-3.0
---
This page provides the PyTorch implementation of GlyphBERT by the HITsz-TMG research group.

GlyphBERT is a Chinese pre-trained model that incorporates character glyph features. It renders the input characters as images, arranges them as multi-channel positional feature maps, and uses a two-layer residual convolutional module to extract glyph image features during training.
Experimental results show that fusing Chinese glyph features clearly improves the pre-trained model: GlyphBERT outperforms BERT on multiple downstream tasks and transfers well.
For more details on using it, see the [GitHub repo](https://github.com/HITsz-TMG/GlyphBERT).
|
philschmid/custom-handler-distilbert
|
philschmid
| 2022-08-30T06:58:57Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T06:52:47Z |
---
pipeline_tag: text-classification
---
|
cynthiachan/finetuned-roberta-base-10pct
|
cynthiachan
| 2022-08-30T06:49:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cynthiachan/FeedRef_10pct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T03:56:32Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- cynthiachan/FeedRef_10pct
model-index:
- name: training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the cynthiachan/FeedRef_10pct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1033
- Attackid Precision: 1.0
- Attackid Recall: 1.0
- Attackid F1: 1.0
- Attackid Number: 6
- Cve Precision: 1.0
- Cve Recall: 1.0
- Cve F1: 1.0
- Cve Number: 11
- Defenderthreat Precision: 0.0
- Defenderthreat Recall: 0.0
- Defenderthreat F1: 0.0
- Defenderthreat Number: 2
- Domain Precision: 0.8636
- Domain Recall: 0.8261
- Domain F1: 0.8444
- Domain Number: 23
- Email Precision: 1.0
- Email Recall: 1.0
- Email F1: 1.0
- Email Number: 3
- Filepath Precision: 0.8108
- Filepath Recall: 0.9091
- Filepath F1: 0.8571
- Filepath Number: 165
- Hostname Precision: 0.9231
- Hostname Recall: 1.0
- Hostname F1: 0.9600
- Hostname Number: 12
- Ipv4 Precision: 0.9167
- Ipv4 Recall: 0.9167
- Ipv4 F1: 0.9167
- Ipv4 Number: 12
- Md5 Precision: 0.875
- Md5 Recall: 0.9423
- Md5 F1: 0.9074
- Md5 Number: 52
- Sha1 Precision: 0.75
- Sha1 Recall: 0.8571
- Sha1 F1: 0.8000
- Sha1 Number: 7
- Sha256 Precision: 0.8
- Sha256 Recall: 1.0
- Sha256 F1: 0.8889
- Sha256 Number: 44
- Uri Precision: 0.0
- Uri Recall: 0.0
- Uri F1: 0.0
- Uri Number: 1
- Overall Precision: 0.8383
- Overall Recall: 0.9201
- Overall F1: 0.8773
- Overall Accuracy: 0.9816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Attackid Precision | Attackid Recall | Attackid F1 | Attackid Number | Cve Precision | Cve Recall | Cve F1 | Cve Number | Defenderthreat Precision | Defenderthreat Recall | Defenderthreat F1 | Defenderthreat Number | Domain Precision | Domain Recall | Domain F1 | Domain Number | Email Precision | Email Recall | Email F1 | Email Number | Filepath Precision | Filepath Recall | Filepath F1 | Filepath Number | Hostname Precision | Hostname Recall | Hostname F1 | Hostname Number | Ipv4 Precision | Ipv4 Recall | Ipv4 F1 | Ipv4 Number | Md5 Precision | Md5 Recall | Md5 F1 | Md5 Number | Sha1 Precision | Sha1 Recall | Sha1 F1 | Sha1 Number | Sha256 Precision | Sha256 Recall | Sha256 F1 | Sha256 Number | Uri Precision | Uri Recall | Uri F1 | Uri Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------:|:----------:|:------:|:----------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:--------------:|:-----------:|:-------:|:-----------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4353 | 0.37 | 500 | 0.3525 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.3984 | 0.6182 | 0.4846 | 165 | 0.0714 | 0.3333 | 0.1176 | 12 | 0.0 | 0.0 | 0.0 | 12 | 0.8936 | 0.8077 | 0.8485 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.4937 | 0.8864 | 0.6341 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.4156 | 0.5533 | 0.4746 | 0.9459 |
| 0.2089 | 0.75 | 1000 | 0.1812 | 0.0 | 0.0 | 0.0 | 6 | 0.9 | 0.8182 | 0.8571 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.15 | 0.2609 | 0.1905 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.6432 | 0.7758 | 0.7033 | 165 | 0.0 | 0.0 | 0.0 | 12 | 0.6471 | 0.9167 | 0.7586 | 12 | 0.7143 | 0.8654 | 0.7826 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.5286 | 0.8409 | 0.6491 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.5315 | 0.6982 | 0.6036 | 0.9626 |
| 0.1453 | 1.12 | 1500 | 0.1374 | 0.75 | 0.5 | 0.6 | 6 | 0.9167 | 1.0 | 0.9565 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.5135 | 0.8261 | 0.6333 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.6863 | 0.8485 | 0.7588 | 165 | 0.7 | 0.5833 | 0.6364 | 12 | 0.6667 | 0.6667 | 0.6667 | 12 | 0.8167 | 0.9423 | 0.8750 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8333 | 0.9091 | 0.8696 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7048 | 0.8195 | 0.7579 | 0.9745 |
| 0.1277 | 1.5 | 2000 | 0.1400 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.7273 | 0.6957 | 0.7111 | 23 | 0.2 | 0.3333 | 0.25 | 3 | 0.7181 | 0.8182 | 0.7649 | 165 | 0.9167 | 0.9167 | 0.9167 | 12 | 0.7857 | 0.9167 | 0.8462 | 12 | 0.8167 | 0.9423 | 0.8750 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8302 | 1.0 | 0.9072 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7634 | 0.8402 | 0.8000 | 0.9735 |
| 0.1074 | 1.87 | 2500 | 0.1101 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.72 | 0.7826 | 0.7500 | 23 | 0.2857 | 0.6667 | 0.4 | 3 | 0.7554 | 0.8424 | 0.7966 | 165 | 0.8571 | 1.0 | 0.9231 | 12 | 0.8182 | 0.75 | 0.7826 | 12 | 0.9259 | 0.9615 | 0.9434 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6833 | 0.9318 | 0.7885 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7660 | 0.8521 | 0.8067 | 0.9762 |
| 0.0758 | 2.25 | 3000 | 0.1161 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9091 | 0.8696 | 0.8889 | 23 | 0.5 | 0.6667 | 0.5714 | 3 | 0.8251 | 0.9152 | 0.8678 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 0.6667 | 0.8 | 12 | 0.9259 | 0.9615 | 0.9434 | 52 | 1.0 | 0.5714 | 0.7273 | 7 | 0.8958 | 0.9773 | 0.9348 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8722 | 0.9083 | 0.8899 | 0.9814 |
| 0.064 | 2.62 | 3500 | 0.1275 | 1.0 | 1.0 | 1.0 | 6 | 0.8333 | 0.9091 | 0.8696 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.8947 | 0.7391 | 0.8095 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8418 | 0.9030 | 0.8713 | 165 | 0.8571 | 1.0 | 0.9231 | 12 | 1.0 | 0.75 | 0.8571 | 12 | 0.9245 | 0.9423 | 0.9333 | 52 | 0.6667 | 0.5714 | 0.6154 | 7 | 0.8113 | 0.9773 | 0.8866 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8580 | 0.8935 | 0.8754 | 0.9793 |
| 0.0522 | 3.0 | 4000 | 0.1033 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.8636 | 0.8261 | 0.8444 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8108 | 0.9091 | 0.8571 | 165 | 0.9231 | 1.0 | 0.9600 | 12 | 0.9167 | 0.9167 | 0.9167 | 12 | 0.875 | 0.9423 | 0.9074 | 52 | 0.75 | 0.8571 | 0.8000 | 7 | 0.8 | 1.0 | 0.8889 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8383 | 0.9201 | 0.8773 | 0.9816 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Mcy/distilbert-base-uncased-finetuned-cola
|
Mcy
| 2022-08-30T06:47:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-29T09:31:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6550
- Matthews Correlation: 0.2820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 1.7255 | 1.0 | 712 | 1.6687 | 0.1995 |
| 1.3584 | 2.0 | 1424 | 1.6550 | 0.2820 |
| 1.024 | 3.0 | 2136 | 1.7990 | 0.2564 |
| 0.8801 | 4.0 | 2848 | 2.1304 | 0.2657 |
| 0.7138 | 5.0 | 3560 | 2.1946 | 0.2584 |
| 0.5799 | 6.0 | 4272 | 2.4351 | 0.2660 |
| 0.5385 | 7.0 | 4984 | 2.6819 | 0.2539 |
| 0.4088 | 8.0 | 5696 | 2.8667 | 0.2436 |
| 0.3722 | 9.0 | 6408 | 2.9077 | 0.2612 |
| 0.3173 | 10.0 | 7120 | 2.9795 | 0.2542 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
philschmid/custom-pipeline-text-classification
|
philschmid
| 2022-08-30T06:43:39Z | 0 | 1 |
generic
|
[
"generic",
"text-classification",
"region:us"
] |
text-classification
| 2022-07-18T12:21:29Z |
---
tags:
- text-classification
library_name: generic
---
# Text Classification repository template
This is a template repository for text classification that supports generic inference with the Hugging Face Hub's generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
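For orientation, here is a minimal sketch of what `pipeline.py` could look like; the class name `PreTrainedPipeline` and the use of the `transformers` pipeline are assumptions, so check the template files for the exact interface expected by the Inference API.
```python
# pipeline.py — minimal sketch (class name and model choice are assumptions)
from typing import Any, Dict, List

from transformers import pipeline


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Called once: load the model and everything needed for inference from the repository path
        self.pipe = pipeline("text-classification", model=path)

    def __call__(self, inputs: str) -> List[Dict[str, Any]]:
        # Called per request: run inference and return [{"label": ..., "score": ...}]
        return self.pipe(inputs)
```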
## How to start
First create a repo in https://hf.co/new.
Then clone this template and push it to your repo.
```
git clone https://huggingface.co/templates/text-classification
cd text-classification
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force
```
|
jaynlp/t5-large-samsum
|
jaynlp
| 2022-08-30T02:47:51Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2203.01552",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
We pre-trained `t5-large` on the SAMSum dialogue summarization corpus.
If you use this work for your research, please cite our work [Dialogue Summaries as Dialogue States ({DS}2), Template-Guided Summarization for Few-shot Dialogue State Tracking](https://arxiv.org/abs/2203.01552)
### Citation
```
@inproceedings{shin-etal-2022-dialogue,
title = "Dialogue Summaries as Dialogue States ({DS}2), Template-Guided Summarization for Few-shot Dialogue State Tracking",
author = "Shin, Jamin and
Yu, Hangyeol and
Moon, Hyeongdon and
Madotto, Andrea and
Park, Juneyoung",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.302",
pages = "3824--3846",
abstract = "Annotating task-oriented dialogues is notorious for the expensive and difficult data collection process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this paper, we hypothesize that dialogue summaries are essentially unstructured dialogue states; hence, we propose to reformulate dialogue state tracking as a dialogue summarization problem. To elaborate, we train a text-to-text language model with synthetic template-based dialogue summaries, generated by a set of rules from the dialogue states. Then, the dialogue states can be recovered by inversely applying the summary generation rules. We empirically show that our method DS2 outperforms previous works on few-shot DST in MultiWoZ 2.0 and 2.1, in both cross-domain and multi-domain settings. Our method also exhibits vast speedup during both training and inference as it can generate all states at once.Finally, based on our analysis, we discover that the naturalness of the summary templates plays a key role for successful training.",
}
```
We used the following prompt for training:
```
Summarize this dialogue:
<DIALOGUE>
...
```
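A usage sketch under that prompt format (the example dialogue and generation settings are illustrative assumptions, not from the original card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jaynlp/t5-large-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("jaynlp/t5-large-samsum")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure! See you soon."
inputs = tokenizer("Summarize this dialogue:\n" + dialogue, return_tensors="pt")

# Generation settings below are illustrative only
summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```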
|
JAlexis/modelv2
|
JAlexis
| 2022-08-30T02:38:24Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-30T02:20:27Z |
---
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "What are the risk factors for covid-19?"
context: "To identify risk factors for hospital deaths from COVID-19, the OpenSAFELY platform examined electronic health records from 17.4 million UK adults. The authors used multivariable Cox proportional hazards model to identify the association of risk of death with older age, lower socio-economic status, being male, non-white ethnic background and certain clinical conditions (diabetes, obesity, cancer, respiratory diseases, heart, kidney, liver, neurological and autoimmune conditions). Notably, asthma was identified as a risk factor, despite prior suggestion of a potential protective role. Interestingly, higher risks due to ethnicity or lower socio-economic status could not be completely attributed to pre-existing health conditions."
---
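The widget examples above can be reproduced locally with the standard question-answering pipeline (a usage sketch; it is not part of the original card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="JAlexis/modelv2")

answer = qa(
    question="How can I protect myself against covid-19?",
    context=(
        "Preventative measures consist of recommendations to wear a mask in public, "
        "maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer."
    ),
)
print(answer["answer"], answer["score"])
```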
|
Sandeepanie/clinical-finetunedNew
|
Sandeepanie
| 2022-08-30T01:41:17Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T01:18:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: clinical-finetunedNew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical-finetunedNew
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0423
- Accuracy: 0.84
- Precision: 0.8562
- Recall: 0.9191
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0707 | 1.0 | 50 | 0.9997 | 0.86 | 0.86 | 0.9485 | 0.9021 |
| 0.0593 | 2.0 | 100 | 0.9293 | 0.845 | 0.8777 | 0.8971 | 0.8873 |
| 0.0273 | 3.0 | 150 | 0.9836 | 0.83 | 0.8643 | 0.8897 | 0.8768 |
| 0.039 | 4.0 | 200 | 1.0028 | 0.85 | 0.8732 | 0.9118 | 0.8921 |
| 0.0121 | 5.0 | 250 | 1.0423 | 0.84 | 0.8562 | 0.9191 | 0.8865 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fabriceyhc/bert-base-uncased-imdb
|
fabriceyhc
| 2022-08-30T00:40:47Z | 1,156 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"sibyl",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: bert-base-uncased-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.91264
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-imdb
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4942
- Accuracy: 0.9126
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1546
- training_steps: 15468
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3952 | 0.65 | 2000 | 0.4012 | 0.86 |
| 0.2954 | 1.29 | 4000 | 0.4535 | 0.892 |
| 0.2595 | 1.94 | 6000 | 0.4320 | 0.892 |
| 0.1516 | 2.59 | 8000 | 0.5309 | 0.896 |
| 0.1167 | 3.23 | 10000 | 0.4070 | 0.928 |
| 0.0624 | 3.88 | 12000 | 0.5055 | 0.908 |
| 0.0329 | 4.52 | 14000 | 0.4342 | 0.92 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.6.1
- Tokenizers 0.10.3
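A quick sentiment-classification check with the standard pipeline (a usage sketch, not part of the original card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fabriceyhc/bert-base-uncased-imdb")
print(classifier("This movie was an absolute delight from start to finish."))
```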
|
Einmalumdiewelt/DistilBART_CNN_GNAD_V2
|
Einmalumdiewelt
| 2022-08-29T23:21:34Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T15:01:52Z |
---
language:
- de
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: DistilBART_CNN_GNAD_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBART_CNN_GNAD_V2
This model is a fine-tuned version of [Einmalumdiewelt/DistilBART_CNN_GNAD_V2](https://huggingface.co/Einmalumdiewelt/DistilBART_CNN_GNAD_V2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7281
- Rouge1: 27.7253
- Rouge2: 8.4647
- Rougel: 18.2059
- Rougelsum: 23.238
- Gen Len: 91.6827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
daviddaubner/q-FrozenLake-v1-4x4-noSlippery
|
daviddaubner
| 2022-08-29T22:39:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T22:39:06Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are the helper functions defined in the course notebook
model = load_from_hub(repo_id="daviddaubner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
skr1125/pegasus-samsum
|
skr1125
| 2022-08-29T21:18:03Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-16T20:21:38Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7003 | 0.54 | 500 | 1.4859 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.10.3
|
theunnecessarythings/ddpm-butterflies-128
|
theunnecessarythings
| 2022-08-29T19:31:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T18:19:26Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sampling sketch (assumed usage, not from the training script)
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("theunnecessarythings/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/sreerajr000/ddpm-butterflies-128/tensorboard?#scalars)
|
salmujaiwel/wav2vec2-large-xls-r-300m-arabic-saudi-colab
|
salmujaiwel
| 2022-08-29T19:30:47Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-29T19:13:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-arabic-saudi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-arabic-saudi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ish97/bert-finetuned-chunking-for-echo-reading
|
ish97
| 2022-08-29T19:27:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T18:07:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-chunking-for-echo-reading
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-chunking-for-echo-reading
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 2 | 0.4490 | 0.0 | 0.0 | 0.0 | 0.875 |
| No log | 2.0 | 4 | 0.3668 | 0.0 | 0.0 | 0.0 | 0.875 |
| No log | 3.0 | 6 | 0.3411 | 0.0 | 0.0 | 0.0 | 0.875 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fractalego/creak-sense
|
fractalego
| 2022-08-29T19:24:27Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2109.01653",
"doi:10.57967/hf/0008",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T16:51:07Z |
# Testing whether a sentence is consistent with the CREAK dataset
This framework is trained on the [CREAK dataset](https://arxiv.org/abs/2109.01653).
# Install
```
pip install creak-sense
```
# Example
```python
from creak_sense import CreakSense
sense = CreakSense("fractalego/creak-sense")
claim = "Bananas can be found in a grocery list"
sense.make_sense(claim)
```
with output "True".
# Example with explanation
```python
from creak_sense import CreakSense
sense = CreakSense("fractalego/creak-sense")
claim = "Bananas can be found in a grocery list"
sense.get_reason(claim)
```
with output "Bananas are a staple food".
|
ntinosmg/dqn-SpaceInvadersNoFrameskip-v4
|
ntinosmg
| 2022-08-29T19:21:48Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T19:21:07Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 555.50 +/- 234.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ntinosmg -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ntinosmg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Dizzykong/Aristotle-8-29
|
Dizzykong
| 2022-08-29T17:46:28Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T16:31:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Aristotle-8-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Aristotle-8-29
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
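As a minimal usage sketch (assuming the standard text-generation pipeline and this repo id):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Dizzykong/Aristotle-8-29")
print(generator("Virtue, according to the philosopher,", max_new_tokens=40, do_sample=True))
```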
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/chrishildabrant
|
huggingtweets
| 2022-08-29T17:19:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T17:19:20Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1367991702523437062/x5beyUQ-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chris Hildabrant</div>
<div style="text-align: center; font-size: 14px;">@chrishildabrant</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chris Hildabrant.
| Data | Chris Hildabrant |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 243 |
| Tweets kept | 3007 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dagd4ww/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrishildabrant's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ctoe6ys) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ctoe6ys/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrishildabrant')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
GItaf/bart-base-finetuned-mbti
|
GItaf
| 2022-08-29T17:08:37Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-28T15:05:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-mbti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-mbti
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
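The intended task is not documented; a hedged sketch, assuming the checkpoint is used as a sequence-to-sequence generator:
```python
from transformers import pipeline

# The input/output format is an assumption; the card does not document it.
pipe = pipeline("text2text-generation", model="GItaf/bart-base-finetuned-mbti")
print(pipe("I spend most weekends reading and avoiding large parties."))
```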
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0025 | 1.0 | 9920 | 0.0000 |
| 0.0005 | 2.0 | 19840 | 0.0000 |
| 0.0002 | 3.0 | 29760 | 0.0000 |
| 0.0001 | 4.0 | 39680 | 0.0000 |
| 0.0001 | 5.0 | 49600 | 0.0000 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive
|
Atharvgarg
| 2022-08-29T15:47:39Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T15:10:50Z |
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6549
- Rouge1: 38.9186
- Rouge2: 30.2223
- Rougel: 32.6201
- Rougelsum: 37.7502
## Model description
More information needed
## Intended uses & limitations
More information needed
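A minimal summarisation sketch (assuming the standard pipeline API and this repo id):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-abstractive")
article = "The city council has approved a new cycling scheme that will add 20 miles of protected lanes by 2024."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```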
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.3838 | 1.0 | 445 | 1.4841 | 39.1621 | 30.4379 | 32.6613 | 37.9963 |
| 1.0077 | 2.0 | 890 | 1.5173 | 39.388 | 30.9125 | 33.099 | 38.2442 |
| 0.7983 | 3.0 | 1335 | 1.5726 | 38.7913 | 30.0766 | 32.6092 | 37.5953 |
| 0.6681 | 4.0 | 1780 | 1.6549 | 38.9186 | 30.2223 | 32.6201 | 37.7502 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AliMMZ/Reinforce-model1000
|
AliMMZ
| 2022-08-29T12:48:31Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T12:48:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model1000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 87.00 +/- 31.38
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news
|
Atharvgarg
| 2022-08-29T12:38:44Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T11:36:02Z |
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-6-6-finetuned-bbc-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-6-6-finetuned-bbc-news
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2624
- Rouge1: 62.1927
- Rouge2: 54.4754
- Rougel: 55.868
- Rougelsum: 60.9345
## Model description
More information needed
## Intended uses & limitations
More information needed
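A minimal sketch using the standard summarisation pipeline with this repo id:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news")
article = "Heavy rain caused flooding across parts of the county overnight, closing several roads and delaying trains."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```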
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.4213 | 1.0 | 445 | 0.2005 | 59.4886 | 51.7791 | 53.5126 | 58.3405 |
| 0.1355 | 2.0 | 890 | 0.1887 | 61.7474 | 54.2823 | 55.7324 | 60.5787 |
| 0.0891 | 3.0 | 1335 | 0.1932 | 61.1312 | 53.103 | 54.6992 | 59.8923 |
| 0.0571 | 4.0 | 1780 | 0.2141 | 60.8797 | 52.6195 | 54.4402 | 59.5298 |
| 0.0375 | 5.0 | 2225 | 0.2318 | 61.7875 | 53.8753 | 55.5068 | 60.5448 |
| 0.0251 | 6.0 | 2670 | 0.2484 | 62.3535 | 54.6029 | 56.2804 | 61.031 |
| 0.0175 | 7.0 | 3115 | 0.2542 | 61.6351 | 53.8248 | 55.6399 | 60.3765 |
| 0.0133 | 8.0 | 3560 | 0.2624 | 62.1927 | 54.4754 | 55.868 | 60.9345 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mayjul/t5-small-finetuned-xsum
|
mayjul
| 2022-08-29T11:52:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-28T14:36:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.2727
- Rouge2: 7.7068
- Rougel: 22.1993
- Rougelsum: 22.2071
- Gen Len: 18.8238
## Model description
More information needed
## Intended uses & limitations
More information needed
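A minimal usage sketch with the summarisation pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mayjul/t5-small-finetuned-xsum")
# Depending on the saved config, a "summarize: " prefix may need to be prepended to the input.
text = "Heavy rain caused flooding across parts of the county overnight, closing several roads and delaying trains."
print(summarizer(text, max_length=40, min_length=5, do_sample=False))
```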
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7189 | 1.0 | 12753 | 2.4789 | 28.2727 | 7.7068 | 22.1993 | 22.2071 | 18.8238 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finer
|
muhtasham
| 2022-08-29T11:42:58Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T21:44:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
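A minimal fill-mask sketch using this repo id:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="muhtasham/bert-small-finer")
print(fill_mask("The company reported a quarterly [MASK] of 2.3 million euros."))
```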
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.49 | 500 | 3.5536 |
| 3.752 | 0.97 | 1000 | 3.0406 |
| 3.752 | 1.46 | 1500 | 2.7601 |
| 2.6844 | 1.94 | 2000 | 2.5655 |
| 2.6844 | 2.43 | 2500 | 2.4174 |
| 2.3487 | 2.91 | 3000 | 2.3163 |
| 2.3487 | 3.4 | 3500 | 2.2146 |
| 2.1554 | 3.89 | 4000 | 2.1560 |
| 2.1554 | 4.37 | 4500 | 2.0935 |
| 2.019 | 4.86 | 5000 | 2.0375 |
| 2.019 | 5.34 | 5500 | 1.9942 |
| 1.9254 | 5.83 | 6000 | 1.9530 |
| 1.9254 | 6.32 | 6500 | 1.9215 |
| 1.8506 | 6.8 | 7000 | 1.8908 |
| 1.8506 | 7.29 | 7500 | 1.8693 |
| 1.793 | 7.77 | 8000 | 1.8399 |
| 1.793 | 8.26 | 8500 | 1.8191 |
| 1.7425 | 8.75 | 9000 | 1.8016 |
| 1.7425 | 9.23 | 9500 | 1.7760 |
| 1.7093 | 9.72 | 10000 | 1.7668 |
| 1.7093 | 10.2 | 10500 | 1.7474 |
| 1.6754 | 10.69 | 11000 | 1.7365 |
| 1.6754 | 11.18 | 11500 | 1.7229 |
| 1.6501 | 11.66 | 12000 | 1.7145 |
| 1.6501 | 12.15 | 12500 | 1.7029 |
| 1.633 | 12.63 | 13000 | 1.6965 |
| 1.633 | 13.12 | 13500 | 1.6878 |
| 1.6153 | 13.61 | 14000 | 1.6810 |
| 1.6153 | 14.09 | 14500 | 1.6775 |
| 1.6043 | 14.58 | 15000 | 1.6720 |
| 1.6043 | 15.06 | 15500 | 1.6719 |
| 1.5942 | 15.55 | 16000 | 1.6602 |
| 1.5942 | 16.03 | 16500 | 1.6643 |
| 1.5869 | 16.52 | 17000 | 1.6632 |
| 1.5869 | 17.01 | 17500 | 1.6551 |
| 1.5834 | 17.49 | 18000 | 1.6557 |
| 1.5834 | 17.98 | 18500 | 1.6561 |
| 1.5755 | 18.46 | 19000 | 1.6620 |
| 1.5755 | 18.95 | 19500 | 1.6524 |
| 1.5823 | 19.44 | 20000 | 1.6536 |
| 1.5823 | 19.92 | 20500 | 1.6627 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
PKM230/Lunar_lander
|
PKM230
| 2022-08-29T11:32:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T11:31:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 14.50 +/- 141.88
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside this repo is not documented here):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Replace the filename with the actual .zip checkpoint stored in this repo.
checkpoint = load_from_hub(repo_id="PKM230/Lunar_lander", filename="<model-file>.zip")
model = PPO.load(checkpoint)
```
|
hhffxx/pegasus-samsum
|
hhffxx
| 2022-08-29T10:52:44Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T06:48:07Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [stas/pegasus-cnn_dailymail-tiny-random](https://huggingface.co/stas/pegasus-cnn_dailymail-tiny-random) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5735
## Model description
More information needed
## Intended uses & limitations
More information needed
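A minimal sketch with the summarisation pipeline (note that the base checkpoint is a tiny random Pegasus meant for testing, so outputs are unlikely to be meaningful):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="hhffxx/pegasus-samsum")
dialogue = "Anna: Are we still meeting at 6?\nTom: Yes, see you at the cafe."
print(summarizer(dialogue))
```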
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6148 | 0.54 | 500 | 7.5735 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
autoevaluate/summarization
|
autoevaluate
| 2022-08-29T10:12:08Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"dataset:xsum",
"dataset:autoevaluate/xsum-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-05-28T12:27:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- summarization
datasets:
- xsum
- autoevaluate/xsum-sample
metrics:
- rouge
model-index:
- name: summarization
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 23.9405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6690
- Rouge1: 23.9405
- Rouge2: 5.0879
- Rougel: 18.4981
- Rougelsum: 18.5032
- Gen Len: 18.7376
## Model description
More information needed
## Intended uses & limitations
More information needed
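A minimal usage sketch with the summarisation pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="autoevaluate/summarization")
print(summarizer("The new library opened on Saturday after two years of construction and a major fundraising campaign."))
```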
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9249 | 0.08 | 1000 | 2.6690 | 23.9405 | 5.0879 | 18.4981 | 18.5032 | 18.7376 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
autoevaluate/image-multi-class-classification
|
autoevaluate
| 2022-08-29T10:11:22Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:mnist",
"dataset:autoevaluate/mnist-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-21T08:52:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mnist
- autoevaluate/mnist-sample
metrics:
- accuracy
model-index:
- name: image-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9833333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
- Accuracy: 0.9833
## Model description
More information needed
## Intended uses & limitations
More information needed
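A minimal usage sketch with the image-classification pipeline (the image path below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="autoevaluate/image-multi-class-classification")
# "digit.png" stands in for any MNIST-style image of a handwritten digit.
print(classifier("digit.png"))
```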
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3743 | 1.0 | 422 | 0.0556 | 0.9833 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
autoevaluate/translation
|
autoevaluate
| 2022-08-29T10:08:28Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"dataset:autoevaluate/wmt16-sample",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-28T14:14:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
- autoevaluate/wmt16-sample
metrics:
- bleu
model-index:
- name: translation
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.5866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3170
- Bleu: 28.5866
- Gen Len: 33.9575
## Model description
More information needed
## Intended uses & limitations
More information needed
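A minimal usage sketch with the translation pipeline (the base model translates English to Romanian):
```python
from transformers import pipeline

translator = pipeline("translation", model="autoevaluate/translation")
print(translator("The committee approved the new budget on Friday."))
```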
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8302 | 0.03 | 1000 | 1.3170 | 28.5866 | 33.9575 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
artfrontier/ddpm-butterflies-128
|
artfrontier
| 2022-08-29T09:07:51Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T07:14:18Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
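Pending the snippet above, a minimal sketch (assuming a recent 🤗 Diffusers release and this repo id) could be:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("artfrontier/ddpm-butterflies-128")
image = pipeline().images[0]  # unconditional sampling; slow on CPU
image.save("butterfly.png")
```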
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/artfrontier/ddpm-butterflies-128/tensorboard?#scalars)
|
pinot/wav2vec2-large-xls-r-300m-ja-colab-new
|
pinot
| 2022-08-29T07:21:29Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-28T16:18:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xls-r-300m-ja-colab-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ja-colab-new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1931
- Wer: 0.2584
## Model description
More information needed
## Intended uses & limitations
More information needed
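A minimal usage sketch with the ASR pipeline (the audio path is a placeholder for a 16 kHz Japanese recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pinot/wav2vec2-large-xls-r-300m-ja-colab-new")
print(asr("sample.wav"))
```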
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 637 | 5.3089 | 0.9670 |
| No log | 2.0 | 1274 | 3.2716 | 0.6123 |
| No log | 3.0 | 1911 | 2.1797 | 0.4708 |
| No log | 4.0 | 2548 | 1.8331 | 0.4113 |
| 6.3938 | 5.0 | 3185 | 1.5111 | 0.3460 |
| 6.3938 | 6.0 | 3822 | 1.3575 | 0.3132 |
| 6.3938 | 7.0 | 4459 | 1.2946 | 0.2957 |
| 6.3938 | 8.0 | 5096 | 1.2346 | 0.2762 |
| 1.023 | 9.0 | 5733 | 1.2053 | 0.2653 |
| 1.023 | 10.0 | 6370 | 1.1931 | 0.2584 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
akrisroof/ddpm-butterflies-128
|
akrisroof
| 2022-08-29T04:18:07Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T03:37:31Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
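Until the snippet above is filled in, a rough sketch (assuming a recent 🤗 Diffusers release) might be:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("akrisroof/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```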
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/akrisroof/ddpm-butterflies-128/tensorboard?#scalars)
|
Shengyu/Evaluation_of_NER_models
|
Shengyu
| 2022-08-29T03:03:59Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-08-29T02:58:44Z |
# **Evaluation of the NER models in medical dataset**
The goal of this project is to compare NER models and evaluate features on a medical dataset; the model-comparison program needs to be executed in a GPU environment. Here are the instructions for the two parts of the project.
## 1. Model Comparison
### 1.1 Environment setting:
(1) Python 3 environment (Python 3.6 or above)
Users can visit https://www.python.org/ to select and download an appropriate Python version.
(2) Related Python packages
The package versions we used are as follows:
```shell
Transformers: 4.8.2
NERDA: 0.9.5
Pytorch: 1.8.1+cu101
Tensorflow: 2.3.0
```
Users can run the following commands to install these packages:
```shell
pip install tensorflow-gpu==2.3.0 -i https://pypi.doubanio.com/simple
pip install transformers==4.8.2
pip install NERDA
pip install sentencepiece
pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
```
### 1.2 The process of implementation
(1) Training and testing
Users can check the "training&testing.ipynb" file. The models to be trained can either be downloaded and loaded locally, or loaded directly by name from the Hugging Face Transformers hub.
For example:
```python
# Model loading in the "training&testing.ipynb" file
transformer = '../../Model/bigbird-roberta-base/'
# or load the model directly from the Hugging Face Hub:
transformer = 'google/bigbird-roberta-base'
```
Model download addresses:
```http
https://huggingface.co/dmis-lab/biobert-base-cased-v1.1
https://huggingface.co/roberta-base
https://huggingface.co/google/bigbird-roberta-base
https://huggingface.co/microsoft/deberta-base
```
Users can download the models from the above websites and put them in the "Model" folder.
(2) Prediction program
Users can load the trained models and input new text so that a model recognizes the entities in that text. We provide the five best-performing trained checkpoints for the RoBERTa, BigBird, DeBERTa, and BioBERT NER models (their filenames end with ".bin"). These models are saved in the "Trained model" folder.
For example:
```python
import torch
model = torch.load('../../trained_model/trained_models_by_Revised_JNLPBA_dataset/deberta.bin')
model.predict_text('Number of glucocorticoid receptors in lymphocytes and their sensitivity to hormone action.')
->> ([['Number', 'of', 'glucocorticoid', 'receptors', 'in', 'lymphocytes', 'and', 'their', 'sensitivity', 'to', 'hormone','action','.']],
[['O', 'O', 'B-protein','I-protein','o','B-cell_type','O','O','O','O','O','O','O']])
```
## 2. Feature Evaluation
### 2.1 Environment setting:
(1) Related Python packages
The packages we used are as follows; users can install the latest versions with the "pip install <package name>" command.
```shell
1. warnings
2. matplotlib
3. pandas
4. seaborn
5. statsmodels
6. sklearn
```
### 2.2 The process of implementation
Users can check the "feature_selection.ipynb" and "feature_evaluation.ipynb"file. Due to the privacy of the data, we did not upload the feature data, so users can view different methods of feature selection in this file.
### 3. Contact
If user have any questions, please contact us.
(1) Sizhu Wu - [wu.sizhu@imicams.ac.cn]
(2) Shengyu Liu - [liu.shengyu@imicams.ac.cn]
|
rajistics/layoutlmv3-finetuned-cord_500
|
rajistics
| 2022-08-28T21:29:04Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T20:35:21Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_500
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9509293680297398
- name: Recall
type: recall
value: 0.9573353293413174
- name: F1
type: f1
value: 0.9541215964192465
- name: Accuracy
type: accuracy
value: 0.9609507640067911
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_500
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2339
- Precision: 0.9509
- Recall: 0.9573
- F1: 0.9541
- Accuracy: 0.9610
## Model description
More information needed
## Intended uses & limitations
More information needed
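A minimal loading sketch (inference additionally needs a receipt image, words, and bounding boxes prepared by the processor):
```python
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("rajistics/layoutlmv3-finetuned-cord_500")
model = AutoModelForTokenClassification.from_pretrained("rajistics/layoutlmv3-finetuned-cord_500")
```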
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.5 | 250 | 0.9950 | 0.7114 | 0.7784 | 0.7434 | 0.7903 |
| 1.3831 | 5.0 | 500 | 0.5152 | 0.8483 | 0.8787 | 0.8632 | 0.8816 |
| 1.3831 | 7.5 | 750 | 0.3683 | 0.9013 | 0.9154 | 0.9083 | 0.9240 |
| 0.3551 | 10.0 | 1000 | 0.3051 | 0.9201 | 0.9304 | 0.9252 | 0.9363 |
| 0.3551 | 12.5 | 1250 | 0.2636 | 0.9375 | 0.9424 | 0.9399 | 0.9457 |
| 0.1562 | 15.0 | 1500 | 0.2498 | 0.9385 | 0.9476 | 0.9430 | 0.9508 |
| 0.1562 | 17.5 | 1750 | 0.2380 | 0.9414 | 0.9499 | 0.9456 | 0.9559 |
| 0.0863 | 20.0 | 2000 | 0.2355 | 0.9400 | 0.9491 | 0.9445 | 0.9542 |
| 0.0863 | 22.5 | 2250 | 0.2268 | 0.9451 | 0.9536 | 0.9493 | 0.9601 |
| 0.0512 | 25.0 | 2500 | 0.2277 | 0.9429 | 0.9513 | 0.9471 | 0.9588 |
| 0.0512 | 27.5 | 2750 | 0.2315 | 0.9473 | 0.9551 | 0.9512 | 0.9593 |
| 0.0358 | 30.0 | 3000 | 0.2294 | 0.9509 | 0.9573 | 0.9541 | 0.9605 |
| 0.0358 | 32.5 | 3250 | 0.2330 | 0.9458 | 0.9543 | 0.9501 | 0.9593 |
| 0.028 | 35.0 | 3500 | 0.2374 | 0.9487 | 0.9558 | 0.9523 | 0.9597 |
| 0.028 | 37.5 | 3750 | 0.2374 | 0.9501 | 0.9558 | 0.9530 | 0.9593 |
| 0.0244 | 40.0 | 4000 | 0.2339 | 0.9509 | 0.9573 | 0.9541 | 0.9610 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rajistics/layoutlmv3-finetuned-cord_800
|
rajistics
| 2022-08-28T20:21:22Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:24:26Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_800
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.9445266272189349
- name: Recall
type: recall
value: 0.9558383233532934
- name: F1
type: f1
value: 0.9501488095238095
- name: Accuracy
type: accuracy
value: 0.9605263157894737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_800
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2042
- Precision: 0.9445
- Recall: 0.9558
- F1: 0.9501
- Accuracy: 0.9605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.56 | 250 | 0.9737 | 0.7787 | 0.8166 | 0.7972 | 0.8188 |
| 1.3706 | 3.12 | 500 | 0.5489 | 0.8480 | 0.8645 | 0.8562 | 0.8680 |
| 1.3706 | 4.69 | 750 | 0.3857 | 0.8913 | 0.9087 | 0.8999 | 0.9147 |
| 0.3693 | 6.25 | 1000 | 0.3192 | 0.9117 | 0.9274 | 0.9195 | 0.9317 |
| 0.3693 | 7.81 | 1250 | 0.2816 | 0.9189 | 0.9326 | 0.9257 | 0.9355 |
| 0.1903 | 9.38 | 1500 | 0.2521 | 0.9277 | 0.9409 | 0.9342 | 0.9465 |
| 0.1903 | 10.94 | 1750 | 0.2353 | 0.9357 | 0.9476 | 0.9416 | 0.9550 |
| 0.1231 | 12.5 | 2000 | 0.2361 | 0.9293 | 0.9446 | 0.9369 | 0.9516 |
| 0.1231 | 14.06 | 2250 | 0.2194 | 0.9402 | 0.9528 | 0.9465 | 0.9576 |
| 0.0766 | 15.62 | 2500 | 0.2133 | 0.9416 | 0.9528 | 0.9472 | 0.9580 |
| 0.0766 | 17.19 | 2750 | 0.2117 | 0.9438 | 0.9558 | 0.9498 | 0.9597 |
| 0.0585 | 18.75 | 3000 | 0.2152 | 0.9417 | 0.9551 | 0.9483 | 0.9605 |
| 0.0585 | 20.31 | 3250 | 0.2070 | 0.9431 | 0.9551 | 0.9491 | 0.9588 |
| 0.0454 | 21.88 | 3500 | 0.2093 | 0.9489 | 0.9588 | 0.9538 | 0.9622 |
| 0.0454 | 23.44 | 3750 | 0.2034 | 0.9453 | 0.9566 | 0.9509 | 0.9610 |
| 0.0409 | 25.0 | 4000 | 0.2042 | 0.9445 | 0.9558 | 0.9501 | 0.9605 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ChaoLi/xlm-roberta-base-finetuned-panx-all
|
ChaoLi
| 2022-08-28T20:09:13Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:58:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1728
- F1: 0.8554
## Model description
More information needed
## Intended uses & limitations
More information needed
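A minimal NER sketch using the token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="ChaoLi/xlm-roberta-base-finetuned-panx-all",
               aggregation_strategy="simple")
print(ner("Jeff Dean works at Google in California."))
```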
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3009 | 1.0 | 835 | 0.1857 | 0.8082 |
| 0.1578 | 2.0 | 1670 | 0.1733 | 0.8416 |
| 0.1026 | 3.0 | 2505 | 0.1728 | 0.8554 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/xlm-roberta-base-finetuned-panx-it
|
ChaoLi
| 2022-08-28T19:55:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:52:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8224755700325732
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2521
- F1: 0.8225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8088 | 1.0 | 70 | 0.3423 | 0.7009 |
| 0.2844 | 2.0 | 140 | 0.2551 | 0.8027 |
| 0.1905 | 3.0 | 210 | 0.2521 | 0.8225 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/xlm-roberta-base-finetuned-panx-fr
|
ChaoLi
| 2022-08-28T19:52:12Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:47:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8325761399966348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2978
- F1: 0.8326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.574 | 1.0 | 191 | 0.3495 | 0.7889 |
| 0.2649 | 2.0 | 382 | 0.2994 | 0.8242 |
| 0.1716 | 3.0 | 573 | 0.2978 | 0.8326 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChaoLi/xlm-roberta-base-finetuned-panx-de-fr
|
ChaoLi
| 2022-08-28T19:46:37Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T19:37:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
baudm/crnn
|
baudm
| 2022-08-28T19:06:36Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T19:03:22Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# CRNN v1.0
CRNN model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
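As a rough placeholder while usage is TODO: the upstream parseq repository exposes its models through `torch.hub`; the entry-point name below is an assumption, so check the repo's `hubconf.py`.
```python
import torch

# Assumed hub entry; weights are fetched from the upstream repo's releases rather than this checkpoint file.
model = torch.hub.load('baudm/parseq', 'crnn', pretrained=True).eval()
```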
### BibTeX entry and citation info
```bibtex
@article{shi2016end,
title={An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition},
author={Shi, Baoguang and Bai, Xiang and Yao, Cong},
journal={IEEE transactions on pattern analysis and machine intelligence},
volume={39},
number={11},
pages={2298--2304},
year={2016},
publisher={IEEE}
}
```
|
baudm/trba
|
baudm
| 2022-08-28T19:03:01Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T19:01:11Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# TRBA v1.0
TRBA model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32.
Disclaimer: this model card was not written by the original authors.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{Baek_2019_ICCV,
author = {Baek, Jeonghun and Kim, Geewook and Lee, Junyeop and Park, Sungrae and Han, Dongyoon and Yun, Sangdoo and Oh, Seong Joon and Lee, Hwalsuk},
title = {What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {10},
year = {2019}
}
```
|
baudm/vitstr-small
|
baudm
| 2022-08-28T18:47:40Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:41:54Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# ViTSTR small v1.0
ViTSTR model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32 with a patch size of 8x4.
Disclaimer: this model card was not written by the original author.
## Model description
*TODO*
## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{atienza2021vision,
title={Vision transformer for fast and efficient scene text recognition},
author={Atienza, Rowel},
booktitle={International Conference on Document Analysis and Recognition},
pages={319--334},
year={2021},
organization={Springer}
}
```
|
caffsean/t5-base-finetuned-keyword-to-text-generation
|
caffsean
| 2022-08-28T18:36:02Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-27T23:29:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-keyword-to-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-keyword-to-text-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4643
- Rouge1: 2.1108
- Rouge2: 0.3331
- Rougel: 1.7368
- Rougelsum: 1.7391
- Gen Len: 16.591
## Model description
More information needed
## Intended uses & limitations
More information needed
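A minimal sketch (the comma-separated keyword format below is an assumption; the card does not document the expected input):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="caffsean/t5-base-finetuned-keyword-to-text-generation")
print(generator("mountain, sunrise, hiking", max_new_tokens=60))
```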
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 375 | 3.4862 | 2.0718 | 0.326 | 1.7275 | 1.7308 | 16.7995 |
| 3.5928 | 2.0 | 750 | 3.4761 | 2.0829 | 0.3253 | 1.7192 | 1.7224 | 16.773 |
| 3.5551 | 3.0 | 1125 | 3.4701 | 2.1028 | 0.3272 | 1.7274 | 1.7296 | 16.6505 |
| 3.5225 | 4.0 | 1500 | 3.4671 | 2.11 | 0.3305 | 1.7343 | 1.7362 | 16.699 |
| 3.5225 | 5.0 | 1875 | 3.4653 | 2.1134 | 0.3319 | 1.7418 | 1.7437 | 16.5485 |
| 3.4987 | 6.0 | 2250 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.4939 | 7.0 | 2625 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.498 | 8.0 | 3000 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
baudm/parseq-small
|
baudm
| 2022-08-28T18:35:24Z | 0 | 3 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:31:18Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# PARSeq small v1.0
PARSeq model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32 with a patch size of 8x4.
## Model description
PARSeq (Permuted Autoregressive Sequence) models unify the prevailing modeling/decoding schemes in Scene Text Recognition (STR). In particular, with a single model, it allows for context-free non-autoregressive inference (like CRNN and ViTSTR), context-aware autoregressive inference (like TRBA), and bidirectional iterative refinement (like ABINet).

## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
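While usage is still TODO here, a rough sketch based on the upstream repository's `torch.hub` interface (the entry name below loads the base PARSeq; the small variant may use a different entry, and the preprocessing values are assumptions):
```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed hub entry point from github.com/baudm/parseq; check hubconf.py for the small variant's name.
model = torch.hub.load('baudm/parseq', 'parseq', pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize((32, 128)),        # trained at 128x32 (width x height)
    transforms.ToTensor(),
    transforms.Normalize(0.5, 0.5),
])
img = preprocess(Image.open('word_crop.png').convert('RGB')).unsqueeze(0)  # placeholder image path

logits = model(img)
label, confidence = model.tokenizer.decode(logits.softmax(-1))
print(label)
```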
### BibTeX entry and citation info
```bibtex
@InProceedings{bautista2022parseq,
author={Bautista, Darwin and Atienza, Rowel},
title={Scene Text Recognition with Permuted Autoregressive Sequence Models},
booktitle={Proceedings of the 17th European Conference on Computer Vision (ECCV)},
month={10},
year={2022},
publisher={Springer International Publishing},
address={Cham}
}
```
|
baudm/parseq-tiny
|
baudm
| 2022-08-28T18:31:35Z | 0 | 2 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T18:31:35Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# PARSeq tiny v1.0
PARSeq model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 128x32 with a patch size of 8x4.
## Model description
PARSeq (Permuted Autoregressive Sequence) models unify the prevailing modeling/decoding schemes in Scene Text Recognition (STR). In particular, with a single model, it allows for context-free non-autoregressive inference (like CRNN and ViTSTR), context-aware autoregressive inference (like TRBA), and bidirectional iterative refinement (like ABINet).

## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{bautista2022parseq,
author={Bautista, Darwin and Atienza, Rowel},
title={Scene Text Recognition with Permuted Autoregressive Sequence Models},
booktitle={Proceedings of the 17th European Conference on Computer Vision (ECCV)},
month={10},
year={2022},
publisher={Springer International Publishing},
address={Cham}
}
```
|
baudm/parseq-small-patch16-224
|
baudm
| 2022-08-28T18:30:34Z | 0 | 0 | null |
[
"pytorch",
"image-to-text",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2022-08-28T17:54:13Z |
---
language:
- en
license: apache-2.0
tags:
- image-to-text
---
# PARSeq small v1.0
PARSeq model pre-trained on various real [STR datasets](https://github.com/baudm/parseq/blob/main/Datasets.md) at image size 224x224 with a patch size of 16x16.
## Model description
PARSeq (Permuted Autoregressive Sequence) models unify the prevailing modeling/decoding schemes in Scene Text Recognition (STR). In particular, with a single model, it allows for context-free non-autoregressive inference (like CRNN and ViTSTR), context-aware autoregressive inference (like TRBA), and bidirectional iterative refinement (like ABINet).

## Intended uses & limitations
You can use the model for STR on images containing Latin characters (62 case-sensitive alphanumeric + 32 punctuation marks).
### How to use
*TODO*
### BibTeX entry and citation info
```bibtex
@InProceedings{bautista2022parseq,
author={Bautista, Darwin and Atienza, Rowel},
title={Scene Text Recognition with Permuted Autoregressive Sequence Models},
booktitle={Proceedings of the 17th European Conference on Computer Vision (ECCV)},
month={10},
year={2022},
publisher={Springer International Publishing},
address={Cham}
}
```
|
silviacamplani/distilbert-finetuned-tapt-ner-music
|
silviacamplani
| 2022-08-28T16:35:05Z | 63 | 1 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T16:29:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-finetuned-tapt-ner-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-finetuned-tapt-ner-music
This model is a fine-tuned version of [silviacamplani/distilbert-finetuned-tapt-lm-ai](https://huggingface.co/silviacamplani/distilbert-finetuned-tapt-lm-ai) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6932
- Validation Loss: 0.7886
- Train Precision: 0.5347
- Train Recall: 0.5896
- Train F1: 0.5608
- Train Accuracy: 0.8078
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 370, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
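For reference, the `AdamWeightDecay` + `PolynomialDecay` configuration above can be recreated with `transformers.create_optimizer`; this is a sketch only, with the step count taken from the `decay_steps` value listed above and warmup assumed to be zero.
```python
from transformers import create_optimizer

# Recreate the optimizer/schedule described in the hyperparameters above.
# num_train_steps comes from decay_steps (370); num_warmup_steps=0 is an assumption.
optimizer, lr_schedule = create_optimizer(
    init_lr=1e-5,
    num_train_steps=370,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```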
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.7047 | 2.0137 | 0.0 | 0.0 | 0.0 | 0.5482 | 0 |
| 1.7222 | 1.5112 | 0.0 | 0.0 | 0.0 | 0.5561 | 1 |
| 1.3564 | 1.2817 | 0.2382 | 0.2592 | 0.2483 | 0.6686 | 2 |
| 1.1641 | 1.1378 | 0.3605 | 0.3816 | 0.3708 | 0.7043 | 3 |
| 1.0188 | 1.0187 | 0.4583 | 0.4950 | 0.4760 | 0.7409 | 4 |
| 0.8983 | 0.9267 | 0.4946 | 0.5383 | 0.5155 | 0.7638 | 5 |
| 0.8117 | 0.8649 | 0.5152 | 0.5653 | 0.5391 | 0.7816 | 6 |
| 0.7550 | 0.8206 | 0.5283 | 0.5806 | 0.5532 | 0.8007 | 7 |
| 0.7132 | 0.7964 | 0.5326 | 0.5887 | 0.5592 | 0.8049 | 8 |
| 0.6932 | 0.7886 | 0.5347 | 0.5896 | 0.5608 | 0.8078 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
silviacamplani/distilbert-finetuned-dapt-ner-music
|
silviacamplani
| 2022-08-28T16:17:41Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T16:05:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-finetuned-dapt-ner-music
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-finetuned-dapt-ner-music
This model is a fine-tuned version of [silviacamplani/distilbert-finetuned-dapt-lm-ai](https://huggingface.co/silviacamplani/distilbert-finetuned-dapt-lm-ai) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7656
- Validation Loss: 0.8288
- Train Precision: 0.5590
- Train Recall: 0.5968
- Train F1: 0.5773
- Train Accuracy: 0.7761
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 370, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.5668 | 1.9780 | 0.0 | 0.0 | 0.0 | 0.5482 | 0 |
| 1.7189 | 1.4888 | 0.1152 | 0.0396 | 0.0589 | 0.5905 | 1 |
| 1.3060 | 1.2236 | 0.3797 | 0.3564 | 0.3677 | 0.6839 | 2 |
| 1.0982 | 1.0637 | 0.4716 | 0.4635 | 0.4675 | 0.7155 | 3 |
| 0.9450 | 0.9504 | 0.5176 | 0.5167 | 0.5171 | 0.7385 | 4 |
| 0.8398 | 0.8775 | 0.5474 | 0.5671 | 0.5570 | 0.7579 | 5 |
| 0.7656 | 0.8288 | 0.5590 | 0.5968 | 0.5773 | 0.7761 | 6 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aware-ai/wav2vec2-xls-r-300m-english
|
aware-ai
| 2022-08-28T16:15:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_10_0",
"generated_from_trainer",
"de",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-26T12:31:54Z |
---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_10_0
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-english
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - DE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5577
- Wer: 0.3864
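A minimal transcription sketch using the `automatic-speech-recognition` pipeline; the audio path is a placeholder and should point to 16 kHz German speech.
```python
from transformers import pipeline

# Transcribe a German audio clip (the file path is a placeholder).
asr = pipeline("automatic-speech-recognition", model="aware-ai/wav2vec2-xls-r-300m-english")
print(asr("sample_german_16khz.wav")["text"])
```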
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.317 | 1.0 | 7194 | 0.5577 | 0.3864 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rajistics/layoutlmv2-finetuned-cord_100
|
rajistics
| 2022-08-28T15:48:40Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T01:37:57Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
model-index:
- name: layoutlmv2-finetuned-cord_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the cord-layoutlmv3 dataset.
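A minimal inference sketch is shown below; pairing the base checkpoint's processor (with its built-in OCR) with this fine-tuned head is an assumption, since the card does not document preprocessing.
```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# The processor comes from the base checkpoint; using it with this fine-tuned
# head is an assumption. It runs OCR (pytesseract) on the document image.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("rajistics/layoutlmv2-finetuned-cord_100")

image = Image.open("receipt.png").convert("RGB")   # placeholder document image
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1)
```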
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
buddhist-nlp/sanstib
|
buddhist-nlp
| 2022-08-28T15:02:42Z | 104 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"license:lgpl-lr",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-04-22T08:35:32Z |
---
license: lgpl-lr
---
This model creates Sanskrit and Tibetan sentence embeddings and can be used for semantic similarity tasks.
Sanskrit input needs to be segmented first and converted into the internal transliteration scheme (I will upload the corresponding script here soon). Tibetan input needs to be converted into Wylie transliteration.
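A minimal sketch for extracting sentence embeddings with the standard feature-extraction classes; mean pooling is an assumption, since the card does not specify a pooling strategy.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("buddhist-nlp/sanstib")
model = AutoModel.from_pretrained("buddhist-nlp/sanstib")

# Mean-pool the last hidden state into a sentence embedding
# (the pooling strategy is an assumption; the card does not specify one).
inputs = tokenizer("segmented sanskrit in internal transliteration", return_tensors="pt")
with torch.no_grad():
    embedding = model(**inputs).last_hidden_state.mean(dim=1)
```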
|
Mcy/t5-small-finetuned-xsum
|
Mcy
| 2022-08-28T12:40:36Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-26T08:59:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
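The card name suggests XSum-style summarization, so a `summarization` pipeline sketch is shown below; treating the checkpoint this way is an assumption, since the training dataset is listed as unknown.
```python
from transformers import pipeline

# Assumption: the checkpoint is used for XSum-style abstractive summarization.
summarizer = pipeline("summarization", model="Mcy/t5-small-finetuned-xsum")
print(summarizer("Replace this with the article text to summarize.", max_length=30)[0]["summary_text"])
```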
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 178 | 1.9530 | 9.1314 | 1.226 | 9.1213 | 9.1047 | 14.4473 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/bmrf_alerts
|
huggingtweets
| 2022-08-28T11:57:30Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-25T15:42:06Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/947480106469023744/dxcygpaz_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Black Mesa Announcement System</div>
<div style="text-align: center; font-size: 14px;">@bmrf_alerts</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Black Mesa Announcement System.
| Data | Black Mesa Announcement System |
| --- | --- |
| Tweets downloaded | 3251 |
| Retweets | 0 |
| Short tweets | 2 |
| Tweets kept | 3249 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c177htj1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bmrf_alerts's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19dwnb8u) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19dwnb8u/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bmrf_alerts')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ritog/PPO-LunarLander-v2
|
ritog
| 2022-08-28T11:52:27Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T11:51:57Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 187.69 +/- 76.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
# The filename is an assumption; check the repo files for the exact name.
checkpoint = load_from_hub(repo_id="ritog/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Shivus/q-Taxi-v3
|
Shivus
| 2022-08-28T11:28:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T11:28:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a packaged library.
model = load_from_hub(repo_id="Shivus/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Shivus/q-FrozenLake-v1-4x4-noSlippery
|
Shivus
| 2022-08-28T11:25:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-28T11:25:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a packaged library.
model = load_from_hub(repo_id="Shivus/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
codeparrot/codeparrot-small-code-to-text
|
codeparrot
| 2022-08-28T10:00:57Z | 57 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"code",
"generation",
"dataset:codeparrot/codeparrot-clean",
"dataset:codeparrot/github-jupyter-code-to-text",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-19T15:34:24Z |
---
language:
- code
license: apache-2.0
tags:
- code
- gpt2
- generation
datasets:
- codeparrot/codeparrot-clean
- codeparrot/github-jupyter-code-to-text
---
# CodeParrot 🦜 small for code-to-text generation
This model is [CodeParrot-small](https://huggingface.co/codeparrot/codeparrot-small) (from `branch megatron`) fine-tuned on [github-jupyter-code-to-text](https://huggingface.co/datasets/codeparrot/github-jupyter-code-to-text), a dataset where the samples are a succession of Python code and its explanation as a docstring, originally extracted from Jupyter notebooks parsed in this [dataset](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed).
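A minimal generation sketch; the prompt format (code followed by an explanation cue) is an assumption and should be adapted to the formatting used in the github-jupyter-code-to-text dataset.
```python
from transformers import pipeline

# Generate a natural-language explanation for a code snippet.
# The prompt format below is an assumption; adapt it to the dataset's formatting.
pipe = pipeline("text-generation", model="codeparrot/codeparrot-small-code-to-text")
prompt = 'def add(a, b):\n    return a + b\n\n"""Explanation:'
print(pipe(prompt, max_new_tokens=40)[0]["generated_text"])
```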
|
paola-md/recipe-lr1e05-wd0.02-bs32
|
paola-md
| 2022-08-28T08:41:28Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T08:13:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.02-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
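A minimal loading sketch is shown below; the RMSE/MAE metrics above suggest a single regression output, but that is an assumption, since the card does not describe the task or label space.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: a single regression output, inferred from the RMSE/MAE metrics above.
tokenizer = AutoTokenizer.from_pretrained("paola-md/recipe-lr1e05-wd0.02-bs32")
model = AutoModelForSequenceClassification.from_pretrained("paola-md/recipe-lr1e05-wd0.02-bs32")

inputs = tokenizer("Combine flour, sugar and butter, then bake for 20 minutes.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.squeeze().item())
```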
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr1e05-wd0.1-bs32
|
paola-md
| 2022-08-28T08:13:25Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T07:45:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.1-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.1-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr1e05-wd0.005-bs32
|
paola-md
| 2022-08-28T07:45:24Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T07:17:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.005-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.005-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4281 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4153 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr1e05-wd0.01-bs32
|
paola-md
| 2022-08-28T07:17:08Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T06:49:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr1e05-wd0.01-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr1e05-wd0.01-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Rmse: 0.5250
- Mse: 0.2756
- Mae: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2768 | 0.5261 | 0.2768 | 0.4282 |
| 0.2743 | 2.0 | 1246 | 0.2739 | 0.5234 | 0.2739 | 0.4152 |
| 0.2732 | 3.0 | 1869 | 0.2760 | 0.5253 | 0.2760 | 0.4229 |
| 0.2719 | 4.0 | 2492 | 0.2749 | 0.5243 | 0.2749 | 0.4041 |
| 0.271 | 5.0 | 3115 | 0.2761 | 0.5255 | 0.2761 | 0.4238 |
| 0.2699 | 6.0 | 3738 | 0.2756 | 0.5250 | 0.2756 | 0.4181 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
paola-md/recipe-lr8e06-wd0.02-bs32
|
paola-md
| 2022-08-28T06:49:07Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-28T06:21:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr8e06-wd0.02-bs32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr8e06-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
- Rmse: 0.5246
- Mse: 0.2752
- Mae: 0.4184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2769 | 1.0 | 623 | 0.2773 | 0.5266 | 0.2773 | 0.4296 |
| 0.2745 | 2.0 | 1246 | 0.2739 | 0.5233 | 0.2739 | 0.4144 |
| 0.2733 | 3.0 | 1869 | 0.2752 | 0.5246 | 0.2752 | 0.4215 |
| 0.2722 | 4.0 | 2492 | 0.2744 | 0.5238 | 0.2744 | 0.4058 |
| 0.2714 | 5.0 | 3115 | 0.2758 | 0.5251 | 0.2758 | 0.4232 |
| 0.2705 | 6.0 | 3738 | 0.2752 | 0.5246 | 0.2752 | 0.4184 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ajtamayoh/NER_ehealth_Spanish_mBERT_fine_tuned
|
ajtamayoh
| 2022-08-28T06:21:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T05:42:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NER_ehealth_Spanish_mBERT_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER_ehealth_Spanish_mBERT_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6563
- Precision: 0.8094
- Recall: 0.8330
- F1: 0.8210
- Accuracy: 0.9051
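A minimal inference sketch with the `token-classification` pipeline; the example sentence is illustrative, and the entity label set is not documented in the card.
```python
from transformers import pipeline

# The entity label set is not documented here; this only shows how to run inference.
ner = pipeline("token-classification",
               model="ajtamayoh/NER_ehealth_Spanish_mBERT_fine_tuned",
               aggregation_strategy="simple")
print(ner("El paciente presenta fiebre y dolor de cabeza."))
```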
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 100 | 0.5335 | 0.8018 | 0.8307 | 0.8160 | 0.9047 |
| No log | 2.0 | 200 | 0.5034 | 0.8110 | 0.8253 | 0.8181 | 0.9067 |
| No log | 3.0 | 300 | 0.5632 | 0.7932 | 0.8230 | 0.8078 | 0.9038 |
| No log | 4.0 | 400 | 0.5904 | 0.8004 | 0.8299 | 0.8149 | 0.9027 |
| 0.017 | 5.0 | 500 | 0.5958 | 0.7993 | 0.8330 | 0.8158 | 0.9071 |
| 0.017 | 6.0 | 600 | 0.6168 | 0.7980 | 0.8352 | 0.8162 | 0.9022 |
| 0.017 | 7.0 | 700 | 0.6219 | 0.8079 | 0.8314 | 0.8195 | 0.9062 |
| 0.017 | 8.0 | 800 | 0.6441 | 0.8046 | 0.8299 | 0.8171 | 0.9038 |
| 0.017 | 9.0 | 900 | 0.6338 | 0.8086 | 0.8253 | 0.8168 | 0.9051 |
| 0.0066 | 10.0 | 1000 | 0.6482 | 0.8021 | 0.8261 | 0.8139 | 0.9029 |
| 0.0066 | 11.0 | 1100 | 0.6578 | 0.8039 | 0.8291 | 0.8163 | 0.9038 |
| 0.0066 | 12.0 | 1200 | 0.6563 | 0.8094 | 0.8330 | 0.8210 | 0.9051 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yoyoyo1118/xlm-roberta-base-finetuned-panx-de
|
yoyoyo1118
| 2022-08-28T06:05:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-28T05:45:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|