| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-06 12:28:13) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 543 classes) | tags (list, 1 to 4.05k entries) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-06 12:27:52) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| elopezlopez/distilbert-base-uncased_fold_3_ternary | elopezlopez | 2022-07-31T23:52:36Z | 12 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T23:35:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_3_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_3_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7987
- F1: 0.7460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5903 | 0.6893 |
| 0.5417 | 2.0 | 578 | 0.5822 | 0.7130 |
| 0.5417 | 3.0 | 867 | 0.6471 | 0.7385 |
| 0.2298 | 4.0 | 1156 | 0.8933 | 0.7322 |
| 0.2298 | 5.0 | 1445 | 1.1002 | 0.7147 |
| 0.1012 | 6.0 | 1734 | 1.2041 | 0.7249 |
| 0.0508 | 7.0 | 2023 | 1.3575 | 0.7195 |
| 0.0508 | 8.0 | 2312 | 1.3896 | 0.7385 |
| 0.018 | 9.0 | 2601 | 1.5363 | 0.7238 |
| 0.018 | 10.0 | 2890 | 1.5336 | 0.7364 |
| 0.0142 | 11.0 | 3179 | 1.6335 | 0.7308 |
| 0.0142 | 12.0 | 3468 | 1.6915 | 0.7295 |
| 0.0047 | 13.0 | 3757 | 1.7087 | 0.7427 |
| 0.0058 | 14.0 | 4046 | 1.7875 | 0.7378 |
| 0.0058 | 15.0 | 4335 | 1.7649 | 0.7438 |
| 0.0051 | 16.0 | 4624 | 1.7987 | 0.7460 |
| 0.0051 | 17.0 | 4913 | 1.8435 | 0.7404 |
| 0.0025 | 18.0 | 5202 | 1.9623 | 0.7257 |
| 0.0025 | 19.0 | 5491 | 1.9005 | 0.7304 |
| 0.0029 | 20.0 | 5780 | 1.9437 | 0.7374 |
| 0.0011 | 21.0 | 6069 | 1.9840 | 0.7268 |
| 0.0011 | 22.0 | 6358 | 1.9411 | 0.7346 |
| 0.0025 | 23.0 | 6647 | 1.9233 | 0.7438 |
| 0.0025 | 24.0 | 6936 | 1.9415 | 0.7395 |
| 0.0015 | 25.0 | 7225 | 1.9481 | 0.7411 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
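As a usage illustration that is not part of the original card, here is a minimal inference sketch with the `transformers` pipeline; the ternary label names are not documented in the card, so the pipeline returns the raw `LABEL_0`/`LABEL_1`/`LABEL_2` ids.

```python
from transformers import pipeline

# Hedged sketch: load the checkpoint named in this row for text classification.
classifier = pipeline(
    "text-classification",
    model="elopezlopez/distilbert-base-uncased_fold_3_ternary",
)

# The label-to-class mapping is not documented in the card, so raw label ids are printed.
print(classifier("An example sentence to classify."))
```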
| elopezlopez/distilbert-base-uncased_fold_2_ternary | elopezlopez | 2022-07-31T23:35:04Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T23:17:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5810
- F1: 0.7620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 294 | 0.5886 | 0.7239 |
| 0.557 | 2.0 | 588 | 0.5085 | 0.7524 |
| 0.557 | 3.0 | 882 | 0.6332 | 0.7530 |
| 0.2456 | 4.0 | 1176 | 0.8749 | 0.7161 |
| 0.2456 | 5.0 | 1470 | 1.0601 | 0.7371 |
| 0.1112 | 6.0 | 1764 | 1.1885 | 0.7451 |
| 0.0484 | 7.0 | 2058 | 1.3027 | 0.7240 |
| 0.0484 | 8.0 | 2352 | 1.4647 | 0.7259 |
| 0.0259 | 9.0 | 2646 | 1.4476 | 0.7322 |
| 0.0259 | 10.0 | 2940 | 1.4826 | 0.7388 |
| 0.0164 | 11.0 | 3234 | 1.5869 | 0.7333 |
| 0.0109 | 12.0 | 3528 | 1.5954 | 0.7539 |
| 0.0109 | 13.0 | 3822 | 1.5810 | 0.7620 |
| 0.0082 | 14.0 | 4116 | 1.7165 | 0.7335 |
| 0.0082 | 15.0 | 4410 | 1.8152 | 0.7414 |
| 0.004 | 16.0 | 4704 | 1.7411 | 0.7474 |
| 0.004 | 17.0 | 4998 | 1.8692 | 0.7355 |
| 0.0034 | 18.0 | 5292 | 1.8727 | 0.7303 |
| 0.0009 | 19.0 | 5586 | 1.9813 | 0.7305 |
| 0.0009 | 20.0 | 5880 | 1.9764 | 0.7391 |
| 0.0012 | 21.0 | 6174 | 2.0170 | 0.7291 |
| 0.0012 | 22.0 | 6468 | 2.0240 | 0.7391 |
| 0.0004 | 23.0 | 6762 | 2.0311 | 0.7352 |
| 0.0014 | 24.0 | 7056 | 2.0174 | 0.7334 |
| 0.0014 | 25.0 | 7350 | 2.0282 | 0.7381 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/xlnet-base-cased_fold_2_binary | elopezlopez | 2022-07-31T23:13:47Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:50:03Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_2_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_2_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4858
- F1: 0.7648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4361 | 0.7404 |
| 0.4403 | 2.0 | 580 | 0.5363 | 0.7515 |
| 0.4403 | 3.0 | 870 | 0.4858 | 0.7648 |
| 0.2505 | 4.0 | 1160 | 0.7127 | 0.7612 |
| 0.2505 | 5.0 | 1450 | 0.8930 | 0.7554 |
| 0.1425 | 6.0 | 1740 | 0.9897 | 0.7580 |
| 0.0869 | 7.0 | 2030 | 1.2683 | 0.7615 |
| 0.0869 | 8.0 | 2320 | 1.4988 | 0.7343 |
| 0.0411 | 9.0 | 2610 | 1.5082 | 0.7492 |
| 0.0411 | 10.0 | 2900 | 1.4974 | 0.7450 |
| 0.0306 | 11.0 | 3190 | 1.5723 | 0.7435 |
| 0.0306 | 12.0 | 3480 | 1.8446 | 0.7432 |
| 0.0291 | 13.0 | 3770 | 1.7113 | 0.7639 |
| 0.0207 | 14.0 | 4060 | 1.8073 | 0.7394 |
| 0.0207 | 15.0 | 4350 | 1.7524 | 0.7585 |
| 0.0171 | 16.0 | 4640 | 1.8751 | 0.7374 |
| 0.0171 | 17.0 | 4930 | 1.7849 | 0.7561 |
| 0.0084 | 18.0 | 5220 | 1.8618 | 0.7441 |
| 0.0064 | 19.0 | 5510 | 1.9613 | 0.7437 |
| 0.0064 | 20.0 | 5800 | 1.8898 | 0.7430 |
| 0.006 | 21.0 | 6090 | 1.9889 | 0.7409 |
| 0.006 | 22.0 | 6380 | 1.9949 | 0.7488 |
| 0.0049 | 23.0 | 6670 | 1.9453 | 0.7488 |
| 0.0049 | 24.0 | 6960 | 1.9754 | 0.7472 |
| 0.002 | 25.0 | 7250 | 1.9946 | 0.7504 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
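For orientation, here is a minimal sketch (not from the original card) of how the hyperparameters listed above map onto `transformers.TrainingArguments`; the tiny in-memory dataset is a placeholder, since the card does not document the real training data.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "xlnet-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Placeholder data: two toy examples stand in for the undocumented training set.
raw = Dataset.from_dict({"text": ["a positive example", "a negative example"], "label": [1, 0]})
encoded = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=64)
)

# Mirrors the card's hyperparameters: lr 2e-05, batch size 16, seed 42,
# Adam with default betas/epsilon, linear schedule, 25 epochs.
args = TrainingArguments(
    output_dir="xlnet-base-cased_fold_2_binary",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded,
    eval_dataset=encoded,
    tokenizer=tokenizer,
)
trainer.train()
```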
| keithanpai/vit-base-patch32-384-finetuned-eurosat | keithanpai | 2022-07-31T22:51:54Z | 54 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-07-31T19:46:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch32-384-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8423153692614771
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch32-384-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch32-384](https://huggingface.co/google/vit-base-patch32-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4381
- Accuracy: 0.8423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.607 | 0.99 | 70 | 0.5609 | 0.8014 |
| 0.5047 | 1.99 | 140 | 0.4634 | 0.8373 |
| 0.4089 | 2.99 | 210 | 0.4381 | 0.8423 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
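As a usage note that is not part of the original card, here is a minimal inference sketch with the image-classification pipeline; the image path is a placeholder, and the class labels depend on the undocumented imagefolder dataset.

```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned ViT checkpoint named in this row.
classifier = pipeline(
    "image-classification",
    model="keithanpai/vit-base-patch32-384-finetuned-eurosat",
)

# "example.jpg" is a placeholder; the pipeline also accepts an image URL.
print(classifier("example.jpg"))
```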
| elopezlopez/xlnet-base-cased_fold_1_binary | elopezlopez | 2022-07-31T22:49:49Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:26:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_1_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_1_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7607
- F1: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4111 | 0.7555 |
| 0.4387 | 2.0 | 576 | 0.4075 | 0.7540 |
| 0.4387 | 3.0 | 864 | 0.5344 | 0.7567 |
| 0.2471 | 4.0 | 1152 | 0.7405 | 0.7597 |
| 0.2471 | 5.0 | 1440 | 1.0564 | 0.7508 |
| 0.1419 | 6.0 | 1728 | 1.0703 | 0.7751 |
| 0.0845 | 7.0 | 2016 | 1.0866 | 0.7609 |
| 0.0845 | 8.0 | 2304 | 1.2135 | 0.7751 |
| 0.05 | 9.0 | 2592 | 1.3649 | 0.7516 |
| 0.05 | 10.0 | 2880 | 1.4943 | 0.7590 |
| 0.0267 | 11.0 | 3168 | 1.5174 | 0.7412 |
| 0.0267 | 12.0 | 3456 | 1.4884 | 0.7559 |
| 0.0278 | 13.0 | 3744 | 1.5109 | 0.7405 |
| 0.0201 | 14.0 | 4032 | 1.7251 | 0.7409 |
| 0.0201 | 15.0 | 4320 | 1.5833 | 0.7354 |
| 0.0185 | 16.0 | 4608 | 1.7744 | 0.7598 |
| 0.0185 | 17.0 | 4896 | 1.8283 | 0.7619 |
| 0.0066 | 18.0 | 5184 | 1.7607 | 0.7778 |
| 0.0066 | 19.0 | 5472 | 1.7503 | 0.7719 |
| 0.0078 | 20.0 | 5760 | 1.7807 | 0.7508 |
| 0.006 | 21.0 | 6048 | 1.6887 | 0.7629 |
| 0.006 | 22.0 | 6336 | 1.7041 | 0.7678 |
| 0.0074 | 23.0 | 6624 | 1.7337 | 0.7633 |
| 0.0074 | 24.0 | 6912 | 1.7548 | 0.7645 |
| 0.0035 | 25.0 | 7200 | 1.7685 | 0.7621 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_5_binary | elopezlopez | 2022-07-31T22:14:52Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:04:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5093
- F1: 0.7801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4760 | 0.7315 |
| 0.3992 | 2.0 | 576 | 0.4428 | 0.7785 |
| 0.3992 | 3.0 | 864 | 0.5093 | 0.7801 |
| 0.2021 | 4.0 | 1152 | 0.6588 | 0.7634 |
| 0.2021 | 5.0 | 1440 | 0.9174 | 0.7713 |
| 0.0945 | 6.0 | 1728 | 0.9832 | 0.7726 |
| 0.0321 | 7.0 | 2016 | 1.2103 | 0.7672 |
| 0.0321 | 8.0 | 2304 | 1.3759 | 0.7616 |
| 0.0134 | 9.0 | 2592 | 1.4405 | 0.7570 |
| 0.0134 | 10.0 | 2880 | 1.4591 | 0.7710 |
| 0.0117 | 11.0 | 3168 | 1.4947 | 0.7713 |
| 0.0117 | 12.0 | 3456 | 1.6224 | 0.7419 |
| 0.0081 | 13.0 | 3744 | 1.6462 | 0.7520 |
| 0.0083 | 14.0 | 4032 | 1.6880 | 0.7637 |
| 0.0083 | 15.0 | 4320 | 1.7080 | 0.7380 |
| 0.0048 | 16.0 | 4608 | 1.7352 | 0.7551 |
| 0.0048 | 17.0 | 4896 | 1.6761 | 0.7713 |
| 0.0024 | 18.0 | 5184 | 1.7553 | 0.7600 |
| 0.0024 | 19.0 | 5472 | 1.7312 | 0.7673 |
| 0.005 | 20.0 | 5760 | 1.7334 | 0.7713 |
| 0.0032 | 21.0 | 6048 | 1.7963 | 0.7578 |
| 0.0032 | 22.0 | 6336 | 1.7529 | 0.7679 |
| 0.0025 | 23.0 | 6624 | 1.7741 | 0.7662 |
| 0.0025 | 24.0 | 6912 | 1.7515 | 0.7679 |
| 0.0004 | 25.0 | 7200 | 1.7370 | 0.7765 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_2_binary | elopezlopez | 2022-07-31T21:43:25Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T21:33:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_2_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_2_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4724
- F1: 0.7604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4280 | 0.7515 |
| 0.4018 | 2.0 | 580 | 0.4724 | 0.7604 |
| 0.4018 | 3.0 | 870 | 0.5336 | 0.7428 |
| 0.1995 | 4.0 | 1160 | 0.8367 | 0.7476 |
| 0.1995 | 5.0 | 1450 | 0.9242 | 0.7412 |
| 0.089 | 6.0 | 1740 | 1.0987 | 0.7410 |
| 0.0318 | 7.0 | 2030 | 1.1853 | 0.7584 |
| 0.0318 | 8.0 | 2320 | 1.2509 | 0.7500 |
| 0.0189 | 9.0 | 2610 | 1.5060 | 0.7258 |
| 0.0189 | 10.0 | 2900 | 1.5607 | 0.7534 |
| 0.0084 | 11.0 | 3190 | 1.5871 | 0.7476 |
| 0.0084 | 12.0 | 3480 | 1.7206 | 0.7338 |
| 0.0047 | 13.0 | 3770 | 1.6776 | 0.7340 |
| 0.0068 | 14.0 | 4060 | 1.7339 | 0.7546 |
| 0.0068 | 15.0 | 4350 | 1.8279 | 0.7504 |
| 0.0025 | 16.0 | 4640 | 1.7791 | 0.7411 |
| 0.0025 | 17.0 | 4930 | 1.7917 | 0.7444 |
| 0.003 | 18.0 | 5220 | 1.7781 | 0.7559 |
| 0.0029 | 19.0 | 5510 | 1.8153 | 0.7559 |
| 0.0029 | 20.0 | 5800 | 1.7757 | 0.7414 |
| 0.0055 | 21.0 | 6090 | 1.8635 | 0.7454 |
| 0.0055 | 22.0 | 6380 | 1.8483 | 0.7460 |
| 0.001 | 23.0 | 6670 | 1.8620 | 0.7492 |
| 0.001 | 24.0 | 6960 | 1.9058 | 0.7508 |
| 0.0006 | 25.0 | 7250 | 1.8640 | 0.7504 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| abdulmatinomotoso/t5_large_headline_generator_testing_3 | abdulmatinomotoso | 2022-07-31T21:14:12Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-07-31T18:03:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_large_headline_generator_testing_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large_headline_generator_testing_3
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9638 | 0.79 | 500 | 0.8474 |
| 0.8478 | 1.57 | 1000 | 0.8356 |
| 0.6981 | 2.36 | 1500 | 0.8407 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
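As a usage illustration that is not part of the original card, here is a minimal headline-generation sketch; the decoding settings (`max_length`, `num_beams`) are illustrative assumptions.

```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned T5 checkpoint named in this row.
generator = pipeline(
    "text2text-generation",
    model="abdulmatinomotoso/t5_large_headline_generator_testing_3",
)

article = "Replace this string with the article text to condense into a headline."
print(generator(article, max_length=32, num_beams=4)[0]["generated_text"])
```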
| ashraq/movie-recommender-movie-model | ashraq | 2022-07-31T20:57:47Z | 0 | 0 | keras | ["keras", "tf-keras", "region:us"] | null | 2022-07-31T20:57:40Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
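Since the card gives no usage snippet, here is a hedged sketch of reloading the checkpoint, assuming it was pushed with `push_to_hub_keras` so that `huggingface_hub.from_pretrained_keras` can restore it; the recommender's expected inputs (for example, movie ids) are not documented in the card.

```python
from huggingface_hub import from_pretrained_keras

# Hedged sketch: restore the Keras model stored in this repository.
model = from_pretrained_keras("ashraq/movie-recommender-movie-model")
model.summary()  # inspect the architecture, since the card documents no input format
```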
| DS-20202/DoubleHardDebias | DS-20202 | 2022-07-31T20:32:45Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2022-07-31T12:08:09Z |
---
title: Double Hard Debiasing
emoji: 👁
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 3.1.1
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
| neuralmagic/oBERT-6-upstream-pretrained-dense | neuralmagic | 2022-07-31T19:52:34Z | 7 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:56:31Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-6-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to 6 layers from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. This model is used as a starting point for downstream finetuning and pruning runs presented in the `Table 3 - 6 Layers`.
The model can also be used for finetuning on any downstream task, as a starting point instead of the two times larger `bert-base-uncased` model.
Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
- 0%: `neuralmagic/oBERT-6-downstream-dense-squadv1`
- 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
```
Training objective: masked language modeling (MLM) + knowledge distillation
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 6
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
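As a usage illustration that is not part of the original card, here is a minimal sketch of loading this dense 6-layer checkpoint as a starting point for downstream fine-tuning; the question-answering head is just one example of the tasks listed above, and the sketch assumes the repository ships the standard BERT tokenizer files.

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Hedged sketch: the 6-layer distilled encoder is loaded like bert-base-uncased,
# and a freshly initialized QA head is attached for downstream fine-tuning.
name = "neuralmagic/oBERT-6-upstream-pretrained-dense"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

print(model.config.num_hidden_layers)  # expected to report 6
```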
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 2 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:00:31Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 77.65
F1 = 85.34
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
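As a usage illustration that is not part of the original card, here is a minimal extractive question-answering sketch with this pruned checkpoint; the question and context are illustrative.

```python
from transformers import pipeline

# Hedged sketch: run the pruned 6-layer SQuAD model through the QA pipeline.
qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1",
)

result = qa(
    question="What sparsity level does the model use?",
    context="The checkpoint is a 6-layer BERT pruned to 90% sparsity with a 4-block pattern.",
)
print(result)
```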
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-pruned-block4-90-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:21:02Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 76.56
F1 = 84.59
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:00:05Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-unstructured-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 79.16
F1 = 86.78
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-teacher-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 396 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:47:26Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# SQuADv1 teacher
This model is used as a teacher for all runs on the SQuADv1 downstream task in the paper [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
SQuADv1 dev-set:
```
EM = 81.41
F1 = 88.54
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-dense-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 8 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:59:35Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - 0% Sparsity`, and it represents an upper bound for performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 81.17
F1 = 88.32
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-upstream-pretrained-dense | neuralmagic | 2022-07-31T19:52:33Z | 14 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:56:43Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-3-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to 3 layers from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. This model is used as a starting point for downstream finetuning and pruning runs presented in the `Table 3 - 3 Layers`.
The model can also be used for finetuning on any downstream task, as a starting point instead of the three times larger `bert-base-uncased` model.
Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
- 0%: `neuralmagic/oBERT-3-downstream-dense-squadv1`
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
```
Training objective: masked language modeling (MLM) + knowledge distillation
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 3
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-block4-80-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:21:28Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 72.70
F1 = 82.04
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 13 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:01:15Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-unstructured-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 73.61
F1 = 82.50
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 6 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:01:00Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-unstructured-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 75.62
F1 = 84.08
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-block4-90-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 14 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:21:41Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 70.00
F1 = 79.66
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2 | neuralmagic | 2022-07-31T19:52:32Z | 7 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:qqp", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-06-17T07:31:44Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | acc | F1 |
| ------------- | ----- | ----- |
| seed=42 | 90.94 | 87.79 |
| seed=3407 | 91.00 | 87.81 |
| seed=123 | 90.94 | 87.73 |
| seed=12345 (*)| 91.07 | 87.92 |
| ------------- | ----- | ----- |
| mean | 90.99 | 87.81 |
| stdev | 0.061 | 0.079 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
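As a usage illustration that is not part of the original card, here is a hedged sketch of scoring a QQP-style question pair with this checkpoint; the meaning of the two output classes should be confirmed via `model.config.id2label` rather than assumed.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch: encode a question pair and read out the class probabilities.
name = "neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "How do I learn Python?",
    "What is the best way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs, model.config.id2label)  # check id2label for the class meanings
```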
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2 | neuralmagic | 2022-07-31T19:52:32Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-06-17T07:22:37Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-90-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 90%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 90%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
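Because the headline figure for this checkpoint is its 90% sparsity, here is a small verification sketch that is not part of the original card; it assumes the checkpoint loads as a standard BERT encoder and that pruning targets the encoder's `Linear` layers.

```python
import torch
from transformers import AutoModel

# Hedged sketch: count zero-valued weights in the encoder's Linear layers.
model = AutoModel.from_pretrained("neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2")

zeros, total = 0, 0
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and name.startswith("encoder"):
        weight = module.weight.detach()
        zeros += (weight == 0).sum().item()
        total += weight.numel()

print(f"Encoder Linear-layer sparsity: {zeros / total:.2%}")  # expected near 90%
```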
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1 | neuralmagic | 2022-07-31T19:52:32Z | 15 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:57:34Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 90%`.
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 (*)| 88.47 | 81.43 |
| seed=3407 | 88.32 | 81.13 |
| seed=54321 | 88.47 | 81.38 |
| ------------ | ----- | ----- |
| mean | 88.42 | 81.31 |
| stdev | 0.086 | 0.160 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli | neuralmagic | 2022-07-31T19:52:32Z | 6 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:mnli", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:58:16Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - MNLI 97%`.
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 | 78.55 | 79.90 |
| seed=3407 | 78.88 | 79.78 |
| seed=54321(*)| 79.11 | 79.71 |
| ------------ | ----- | ----- |
| mean | 78.85 | 79.80 |
| stdev | 0.281 | 0.096 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-97 | neuralmagic | 2022-07-31T19:52:32Z | 9 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:57:16Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-97
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 97%`.
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 97%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp | neuralmagic | 2022-07-31T19:52:32Z | 14 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:qqp", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:58:41Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 97%`.
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 89.85 | 86.41 |
| seed=3407 | 89.72 | 86.42 |
| seed=54321 | 89.70 | 86.24 |
| ------------ | ----- | ----- |
| mean | 89.76 | 86.35 |
| stdev | 0.081 | 0.101 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2 | neuralmagic | 2022-07-31T19:52:32Z | 7 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-06-17T07:30:41Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 | 88.55 | 81.48 |
| seed=3407 | 88.34 | 81.25 |
| seed=123 (*)| 88.64 | 81.57 |
| seed=12345 | 88.44 | 81.43 |
| ------------ | ----- | ----- |
| mean | 88.49 | 81.43 |
| stdev | 0.130 | 0.134 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2 | neuralmagic | 2022-07-31T19:52:32Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-06-17T07:30:56Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | F1 | EM |
| ------------- | ----- | ----- |
| seed=42 | 84.92 | 76.94 |
| seed=3407 | 84.87 | 76.71 |
| seed=123 | 84.95 | 77.06 |
| seed=12345 (*)| 84.95 | 76.90 |
| ------------- | ----- | ----- |
| mean | 84.92 | 76.90 |
| stdev | 0.037 | 0.145 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-12-downstream-pruned-unstructured-90-qqp | neuralmagic | 2022-07-31T19:52:31Z | 12 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:qqp", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:55:50Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-downstream-pruned-unstructured-90-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - QQP 90%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 | 91.30 | 88.24 |
| seed=3407 (*)| 91.39 | 88.36 |
| seed=54321 | 91.36 | 88.29 |
| ------------ | ----- | ----- |
| mean | 91.35 | 88.30 |
| stdev | 0.045 | 0.060 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-97-qqp
|
neuralmagic
| 2022-07-31T19:52:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:56:04Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-downstream-pruned-unstructured-97-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - QQP 97%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 90.90 | 87.73 |
| seed=3407 | 90.80 | 87.57 |
| seed=54321 | 90.90 | 87.69 |
| ------------ | ----- | ----- |
| mean | 90.87 | 87.66 |
| stdev | 0.057 | 0.083 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-97-squadv1
|
neuralmagic
| 2022-07-31T19:52:31Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:53:49Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-unstructured-97-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - SQuADv1 97%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 (*)| 86.06 | 78.28 |
| seed=3407 | 86.04 | 78.12 |
| seed=54321 | 85.85 | 77.93 |
| ------------ | ----- | ----- |
| mean | 85.98 | 78.11 |
| stdev | 0.115 | 0.175 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
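A minimal extractive-QA sketch with the `transformers` pipeline (question and context are made up; this assumes the checkpoint ships the standard BERT QA head and tokenizer):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-12-downstream-pruned-unstructured-97-squadv1",
)
result = qa(
    question="How much of the encoder is pruned?",
    context="The oBERT method prunes 97% of the BERT encoder weights.",
)
print(result["answer"], result["score"])
```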
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli
|
neuralmagic
| 2022-07-31T19:52:31Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:54:55Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-downstream-pruned-unstructured-90-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - MNLI 90%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 | 83.74 | 84.31 |
| seed=3407 (*)| 83.85 | 84.40 |
| seed=54321 | 83.77 | 84.33 |
| ------------ | ----- | ----- |
| mean | 83.79 | 84.35 |
| stdev | 0.056 | 0.047 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
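A quick way to sanity-check the reported sparsity is to count zero-valued entries in the encoder weight matrices (a rough sketch; restricting the count to 2-D encoder weights is an assumption about which tensors were pruned):
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli"
)

zeros, total = 0, 0
for name, param in model.named_parameters():
    # count only the 2-D encoder weight matrices (embeddings/biases assumed unpruned)
    if "encoder" in name and name.endswith(".weight") and param.dim() == 2:
        zeros += (param == 0).sum().item()
        total += param.numel()
print(f"encoder weight sparsity: {zeros / total:.2%}")
```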
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-80-qqp
|
neuralmagic
| 2022-07-31T19:52:31Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:55:37Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-downstream-pruned-unstructured-80-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - QQP 80%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 80%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 80% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 91.66 | 88.72 |
| seed=3407 | 91.51 | 88.56 |
| seed=54321 | 91.54 | 88.60 |
| ------------ | ----- | ----- |
| mean | 91.57 | 88.63 |
| stdev | 0.079 | 0.083 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pretrained-dense
|
neuralmagic
| 2022-07-31T19:52:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:56:17Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the pretrained dense model used as a teacher for upstream pruning runs, as described in the paper. The model can be finetuned on any downstream task, just like the standard `bert-base-uncased` model, which was used as the initialization for training this model.
Sparse versions of this model:
- 90% sparse: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90`
- 97% sparse: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97`
```
Training objective: masked language modeling (MLM)
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
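Since this checkpoint is meant as a drop-in replacement for `bert-base-uncased`, a downstream run can start from it in the usual way (sketch only; the sequence-classification head and `num_labels=2` are placeholders for whatever task you finetune on):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "neuralmagic/oBERT-12-upstream-pretrained-dense"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# adds a freshly initialised classification head on top of the pretrained encoder
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
# ...then finetune with Trainer or a custom loop, exactly as for bert-base-uncased
```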
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-dense-QAT-squadv1
|
neuralmagic
| 2022-07-31T19:52:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:19:55Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-dense-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - 0% Sparsity - QAT`, and it represents an upper bound for performance of the corresponding pruned and quantized models:
- 80% unstructured QAT: `neuralmagic/oBERT-12-downstream-pruned-unstructured-80-QAT-squadv1`
- 80% block-4 QAT: `neuralmagic/oBERT-12-downstream-pruned-block4-80-QAT-squadv1`
- 90% unstructured QAT: `neuralmagic/oBERT-12-downstream-pruned-unstructured-90-QAT-squadv1`
- 90% block-4 QAT: `neuralmagic/oBERT-12-downstream-pruned-block4-90-QAT-squadv1`
SQuADv1 dev-set:
```
EM = 81.99
F1 = 89.06
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-block4-80-squadv1
|
neuralmagic
| 2022-07-31T19:52:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:59:08Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 80% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 81.45
F1 = 88.57
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-80-mnli
|
neuralmagic
| 2022-07-31T19:52:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:54:40Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-downstream-pruned-unstructured-80-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - MNLI 80%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 80%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 80% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 | 84.30 | 84.98 |
| seed=3407 (*)| 84.46 | 84.99 |
| seed=54321 | 84.18 | 84.76 |
| ------------ | ----- | ----- |
| mean | 84.32 | 84.91 |
| stdev | 0.140 | 0.133 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-block4-90-QAT-squadv1
|
neuralmagic
| 2022-07-31T19:52:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:20:22Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 78.84
F1 = 86.68
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-dense-squadv1
|
neuralmagic
| 2022-07-31T19:52:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:58:54Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - 0% Sparsity`, and it represents an upper bound for performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-12-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-12-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-12-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-12-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 82.71
F1 = 89.48
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-block4-80-QAT-squadv1
|
neuralmagic
| 2022-07-31T19:52:30Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:20:09Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 80.58
F1 = 87.89
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2
|
neuralmagic
| 2022-07-31T19:50:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:31:30Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - MNLI 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | m-acc | mm-acc|
| ------------- | ----- | ----- |
| seed=42 | 80.86 | 80.88 |
| seed=3407 | 80.83 | 81.65 |
| seed=123 (*)| 81.18 | 81.06 |
| seed=12345 | 80.79 | 80.95 |
| ------------- | ----- | ----- |
| mean | 80.91 | 81.13 |
| stdev | 0.178 | 0.351 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
SummerChiam/rust_image_classification_11
|
SummerChiam
| 2022-07-31T17:10:12Z | 52 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-31T17:10:01Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_11
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9607142806053162
---
# rust_image_classification_11
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust0

#### rust0

|
QuickSilver007/Reinforce-Pong-PLE-v0
|
QuickSilver007
| 2022-07-31T16:23:22Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T16:23:13Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pong-PLE-v0
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
SummerChiam/pond_image_classification_12
|
SummerChiam
| 2022-07-31T16:07:40Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-31T16:07:23Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_12
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.997732400894165
---
# pond_image_classification_12
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae0

#### Boiling0

#### BoilingNight0

#### Normal0

#### NormalCement0

#### NormalNight0

#### NormalRain0

|
Forkits/a2c-AntBulletEnv-v0
|
Forkits
| 2022-07-31T16:01:05Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T15:59:55Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1775.25 +/- 223.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the zip filename inside the repo is an assumption; check the repository's file list for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# download the checkpoint from the Hub and load it (filename is an assumption)
checkpoint = load_from_hub("Forkits/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
SummerChiam/pond_image_classification_11
|
SummerChiam
| 2022-07-31T15:36:10Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-31T15:35:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_11
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9951980710029602
---
# pond_image_classification_11
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae0

#### Boiling0

#### BoilingNight0

#### Normal0

#### NormalCement0

#### NormalNight0

#### NormalRain0

|
samwit/ddpm-afhq-cats-128
|
samwit
| 2022-07-31T15:31:53Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-07-31T00:49:28Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-afhq-cats-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# a minimal unconditional sampling sketch with the DDPM pipeline
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("samwit/ddpm-afhq-cats-128")
image = pipeline().images[0]  # one generated sample as a PIL.Image
image.save("cat_sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/samwit/ddpm-afhq-cats-128/tensorboard?#scalars)
|
CuteBlack/gfp_guided_diffusion_200k
|
CuteBlack
| 2022-07-31T15:10:42Z | 0 | 6 | null |
[
"license:mit",
"region:us"
] | null | 2022-07-15T22:24:34Z |
---
license: mit
---
256x256 Diffusion model trained on 1000+ NSFW gay furry pics (with the same composition).
'attention_resolutions': '16',
'class_cond': False,
'diffusion_steps': 1000,
'rescale_timesteps': True,
'timestep_respacing': 'ddim100',
'image_size': 256,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 128,
'num_heads': 1,
'num_res_blocks': 2,
'use_checkpoint': use_checkpoint,
'use_fp16': True,
'use_scale_shift_norm': False,
|
Izarel/distilbert-base-uncased_fine_tuned_title
|
Izarel
| 2022-07-31T14:52:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-31T06:14:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: distilbert-base-uncased_fine_tuned_title
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fine_tuned_title
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2615
- Accuracy: {'accuracy': 0.877634820695319}
- Recall: {'recall': 0.8474786132372805}
- Precision: {'precision': 0.8953502200023784}
- F1: {'f1': 0.8707569536806801}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------------------------------:|:---------------------------------:|:--------------------------:|
| 0.3093 | 1.0 | 2284 | 0.3021 | {'accuracy': 0.8779085683000274} | {'recall': 0.8560333183250788} | {'precision': 0.8888499298737728} | {'f1': 0.8721330275229358} |
| 0.2459 | 2.0 | 4568 | 0.2909 | {'accuracy': 0.8894059676977827} | {'recall': 0.8513057181449797} | {'precision': 0.9153957879448076} | {'f1': 0.8821882654846612} |
| 0.1696 | 3.0 | 6852 | 0.3259 | {'accuracy': 0.8808102929099371} | {'recall': 0.8595227375056281} | {'precision': 0.8915353181552831} | {'f1': 0.875236403232277} |
| 0.1179 | 4.0 | 9136 | 0.4946 | {'accuracy': 0.8729811114152751} | {'recall': 0.8610986042323278} | {'precision': 0.8756868131868132} | {'f1': 0.8683314415437005} |
| 0.0775 | 5.0 | 11420 | 0.6547 | {'accuracy': 0.8708458800985491} | {'recall': 0.8041422782530392} | {'precision': 0.9202627850057967} | {'f1': 0.8582927854868745} |
| 0.0522 | 6.0 | 13704 | 0.6699 | {'accuracy': 0.8768683274021353} | {'recall': 0.8325078793336335} | {'precision': 0.9067058967757754} | {'f1': 0.8680241769849187} |
| 0.0406 | 7.0 | 15988 | 0.8149 | {'accuracy': 0.8739118532712838} | {'recall': 0.8330706888788834} | {'precision': 0.9002554433767181} | {'f1': 0.8653610055539316} |
| 0.0298 | 8.0 | 18272 | 0.8906 | {'accuracy': 0.8753353408157679} | {'recall': 0.8421882035119316} | {'precision': 0.8952973555103506} | {'f1': 0.8679310944840787} |
| 0.0217 | 9.0 | 20556 | 1.0192 | {'accuracy': 0.8754448398576512} | {'recall': 0.8624493471409275} | {'precision': 0.8791738382099827} | {'f1': 0.8707312915506562} |
| 0.017 | 10.0 | 22840 | 1.0550 | {'accuracy': 0.8758828360251848} | {'recall': 0.8556956325979289} | {'precision': 0.8852917200419238} | {'f1': 0.8702421155056951} |
| 0.0139 | 11.0 | 25124 | 1.0873 | {'accuracy': 0.8728716123733917} | {'recall': 0.8582845565060784} | {'precision': 0.8776473296500921} | {'f1': 0.8678579558388345} |
| 0.0114 | 12.0 | 27408 | 1.1506 | {'accuracy': 0.8716123733917328} | {'recall': 0.8628995947771274} | {'precision': 0.8718298646650745} | {'f1': 0.8673417435085139} |
| 0.0061 | 13.0 | 29692 | 1.2574 | {'accuracy': 0.8696961401587736} | {'recall': 0.874943719045475} | {'precision': 0.8596549435965495} | {'f1': 0.8672319535869686} |
| 0.0035 | 14.0 | 31976 | 1.2490 | {'accuracy': 0.8784560635094443} | {'recall': 0.85006753714543} | {'precision': 0.8947867298578199} | {'f1': 0.8718540752713001} |
| 0.0028 | 15.0 | 34260 | 1.2615 | {'accuracy': 0.877634820695319} | {'recall': 0.8474786132372805} | {'precision': 0.8953502200023784} | {'f1': 0.8707569536806801} |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Kinahem/Reinforce-3
|
Kinahem
| 2022-07-31T13:02:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T13:02:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-3
results:
- metrics:
- type: mean_reward
value: 471.20 +/- 86.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Vasanth/bert_emo_classifier
|
Vasanth
| 2022-07-31T12:34:43Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-30T23:30:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: bert_emo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_emo_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
## Model description
More information needed
## Intended uses & limitations
More information needed
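A minimal inference sketch (the input sentence is made up; whether the checkpoint preserves the emotion dataset's label names in `id2label` is an assumption):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Vasanth/bert_emo_classifier")
print(classifier("I am so happy this finally works!"))
```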
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9063 | 0.25 | 500 | 0.4845 |
| 0.3362 | 0.5 | 1000 | 0.3492 |
| 0.2759 | 0.75 | 1500 | 0.2819 |
| 0.2521 | 1.0 | 2000 | 0.2464 |
| 0.1705 | 1.25 | 2500 | 0.2345 |
| 0.1841 | 1.5 | 3000 | 0.2013 |
| 0.1428 | 1.75 | 3500 | 0.1926 |
| 0.1747 | 2.0 | 4000 | 0.1866 |
| 0.1082 | 2.25 | 4500 | 0.2302 |
| 0.1142 | 2.5 | 5000 | 0.2118 |
| 0.1205 | 2.75 | 5500 | 0.2318 |
| 0.1135 | 3.0 | 6000 | 0.2306 |
| 0.0803 | 3.25 | 6500 | 0.2625 |
| 0.0745 | 3.5 | 7000 | 0.2850 |
| 0.085 | 3.75 | 7500 | 0.2719 |
| 0.0701 | 4.0 | 8000 | 0.2748 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
Vlasta/DNADebertaSentencepiece30k
|
Vlasta
| 2022-07-31T12:30:51Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:03:58Z |
---
tags:
- generated_from_trainer
model-index:
- name: DNADebertaSentencepiece30k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNADebertaSentencepiece30k
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 7.9373 | 0.41 | 5000 | 7.8263 |
| 7.8005 | 0.81 | 10000 | 7.7871 |
| 7.7704 | 1.22 | 15000 | 7.7630 |
| 7.7477 | 1.62 | 20000 | 7.6857 |
| 7.6058 | 2.03 | 25000 | 7.5543 |
| 7.5281 | 2.44 | 30000 | 7.4839 |
| 7.4487 | 2.84 | 35000 | 7.3801 |
| 7.3368 | 3.25 | 40000 | 7.2603 |
| 7.1923 | 3.66 | 45000 | 7.0365 |
| 6.9858 | 4.06 | 50000 | 6.8793 |
| 6.8639 | 4.47 | 55000 | 6.7839 |
| 6.7877 | 4.87 | 60000 | 6.7176 |
| 6.728 | 5.28 | 65000 | 6.6680 |
| 6.6826 | 5.69 | 70000 | 6.6258 |
| 6.6414 | 6.09 | 75000 | 6.5847 |
| 6.6057 | 6.5 | 80000 | 6.5571 |
| 6.5794 | 6.91 | 85000 | 6.5279 |
| 6.5525 | 7.31 | 90000 | 6.5059 |
| 6.5354 | 7.72 | 95000 | 6.4816 |
| 6.5125 | 8.12 | 100000 | 6.4674 |
| 6.4958 | 8.53 | 105000 | 6.4486 |
| 6.4817 | 8.94 | 110000 | 6.4317 |
| 6.4674 | 9.34 | 115000 | 6.4195 |
| 6.4549 | 9.75 | 120000 | 6.4072 |
| 6.4409 | 10.16 | 125000 | 6.3945 |
| 6.4302 | 10.56 | 130000 | 6.3861 |
| 6.4214 | 10.97 | 135000 | 6.3755 |
| 6.4118 | 11.37 | 140000 | 6.3659 |
| 6.4058 | 11.78 | 145000 | 6.3604 |
| 6.3985 | 12.19 | 150000 | 6.3560 |
| 6.3899 | 12.59 | 155000 | 6.3473 |
| 6.3837 | 13.0 | 160000 | 6.3417 |
| 6.3782 | 13.41 | 165000 | 6.3361 |
| 6.3753 | 13.81 | 170000 | 6.3309 |
| 6.3733 | 14.22 | 175000 | 6.3285 |
| 6.3706 | 14.62 | 180000 | 6.3277 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Neha2608/xlm-roberta-base-finetuned-panx-it
|
Neha2608
| 2022-07-31T10:26:20Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-07-02T11:59:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2740
- F1: 0.7919
## Model description
More information needed
## Intended uses & limitations
More information needed
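A token-classification sketch (the example sentence is made up; this assumes the checkpoint exposes the NER head from the PAN-X finetuning and a usable `id2label` mapping):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neha2608/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Leonardo da Vinci nacque a Vinci, in Toscana."))
```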
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8185 | 1.0 | 70 | 0.3369 | 0.7449 |
| 0.2899 | 2.0 | 140 | 0.2740 | 0.7919 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CuteBlack/gfp_guided_diffusion_v4
|
CuteBlack
| 2022-07-31T10:04:18Z | 0 | 9 | null |
[
"license:mit",
"region:us"
] | null | 2022-07-31T09:48:47Z |
---
license: mit
---
OpenAI diffusion model trained on every NSFW gay furry illustration on e621.net with a community score above 100, excluding extreme fetishes and underage content.
'attention_resolutions': '32, 16, 8',
'class_cond': False,
'diffusion_steps': 1000,
'rescale_timesteps': True,
'image_size': 256,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 128,
'num_heads': 4,
'num_res_blocks': 2,
'resblock_updown': True,
'use_checkpoint': use_checkpoint,
'use_fp16': True,
'use_scale_shift_norm': True
|
Neha2608/xlm-roberta-base-finetuned-panx-fr
|
Neha2608
| 2022-07-31T10:04:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-07-02T11:40:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1699
- F1: 0.8725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5975 | 1.0 | 191 | 0.2612 | 0.8237 |
| 0.2798 | 2.0 | 382 | 0.1699 | 0.8725 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
keithanpai/vit-base-patch16-224-finetuned-eurosat
|
keithanpai
| 2022-07-31T00:07:31Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T23:42:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8632734530938124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3953
- Accuracy: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
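An image-classification sketch (the image path is a placeholder; the predicted class names come from the `id2label` mapping built from the training `imagefolder`):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="keithanpai/vit-base-patch16-224-finetuned-eurosat",
)
print(classifier("example.jpg"))  # accepts a path, URL, or PIL.Image
```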
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6081 | 0.99 | 70 | 0.5482 | 0.8004 |
| 0.4515 | 1.99 | 140 | 0.4245 | 0.8533 |
| 0.3967 | 2.99 | 210 | 0.3953 | 0.8633 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
abdulmatinomotoso/t5_large_headline_generator_testing_1
|
abdulmatinomotoso
| 2022-07-30T22:09:15Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-30T21:00:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_large_headline_generator_testing_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large_headline_generator_testing_1
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0183
## Model description
More information needed
## Intended uses & limitations
More information needed
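A generation sketch (the article text is a placeholder; how inputs were formatted during finetuning is not documented here, so treat the plain-article prompt as an assumption):
```python
from transformers import pipeline

headline = pipeline(
    "text2text-generation",
    model="abdulmatinomotoso/t5_large_headline_generator_testing_1",
)
article = "The city council approved a new transit budget on Tuesday after weeks of debate."
print(headline(article, max_new_tokens=32)[0]["generated_text"])
```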
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1969 | 0.77 | 500 | 1.0183 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mtreviso/ct5-base-en-wiki
|
mtreviso
| 2022-07-30T19:32:29Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"en",
"dataset:wikipedia",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-25T13:30:55Z |
---
license: afl-3.0
language: en
tags:
- t5
datasets:
- wikipedia
---
# chunked T5 - base (cT5-base)
Github: https://github.com/mtreviso/chunked-t5
A T5 model that uses a new loss where a special end-of-chunk token `</c>` is appended after sentinel tokens.
The decoder has to predict the full input with masked tokens followed by `</c>`.
This allows much faster auto-regressive generation, since the decoder can predict multiple tokens in parallel.
For example, for the input `the quick brown fox jumps over the lazy dog`:
```
encoder: the <extra_id_0> fox jumps <extra_id_1> the lazy dog
T5 decoder : <extra_id_0> quick brown <extra_id_1> over <extra_id_2>
cT5 decoder: <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2>
```
The generation may look like this for T5 and cT5:
```
T5: <extra_id_0>
T5: <extra_id_0> quick
T5: <extra_id_0> quick brown
T5: <extra_id_0> quick brown <extra_id_1>
T5: <extra_id_0> quick brown <extra_id_1> over
T5: <extra_id_0> quick brown <extra_id_1> over <extra_id_2>
T5: <extra_id_0> quick brown <extra_id_1> over <extra_id_2> </s>
cT5: <extra_id_0> <pad> <extra_id_1> <pad> <extra_id_2> </s>
cT5: <extra_id_0> quick <pad> <extra_id_1> over <pad> <extra_id_2> </s>
cT5: <extra_id_0> quick brown <pad> <extra_id_1> over </c> <extra_id_2> </s>
cT5: <extra_id_0> quick brown </c> <extra_id_1> over </c> <extra_id_2> </s>
```
In the original T5, the decoder is called \\(n_s + 1 + \sum_i |s_i|\\) times autoregressively,
where \\(n_s\\) is the number of sentinel tokens and \\(s_1,...,s_{n_s}\\) are the predicted chunks.
In contrast, cT5's decoder is called just \\(\max_i |s_i| + 1\\) times.
Generation stops once every chunk is complete, i.e., once all `</c>` tokens have been generated.
Alternatively, you can also set `max_chunk_size` to manually force the model to stop after generating a chunk with `max_chunk_size` tokens.
The overhead of calling the decoder with a longer input is less pronounced since this computation can be parallelized on GPUs/TPUs.
## Training details
cT5 models used T5's weights as a starting point and were then finetuned on the
English [wikipedia](https://huggingface.co/datasets/wikipedia) for 3 epochs,
achieving ~74% validation accuracy (ct5-base).
The training script is in JAX + Flax and can be found in `pretrain_ct5.py`.
Flax checkpoints can be converted to PyTorch via `convert_flax_to_pytorch.py [flax_dirname]`.
## Checkpoints
- ct5-small: https://huggingface.co/mtreviso/ct5-small-en-wiki
- ct5-base: https://huggingface.co/mtreviso/ct5-base-en-wiki
- ct5-large: todo
## Usage
```python
from transformers import AutoTokenizer
from modeling_ct5 import CT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("mtreviso/ct5-base-en-wiki")
model = CT5ForConditionalGeneration.from_pretrained("mtreviso/ct5-base-en-wiki")
```
For training:
```python
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> man </c> <extra_id_1> the </c> <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
```
For generation:
```python
texts = [
"The <extra_id_0> walks in <extra_id_1> park",
"UN Chief says there is no way to <extra_id_0> in Syria",
]
input_ids = tokenizer(texts, return_tensors="pt", padding=True).input_ids
generated_ids = model.generate(
input_ids,
use_cache=False, # important to set to False to avoid caching
eoc_token_id=tokenizer.vocab['</c>'], # important to set to the correct end-of-chunk id
max_chunk_size=5, # the default is 9999999, which is a large number
)
```
This will produce the following tokens:
```python
>> ['<pad>', '<extra_id_0>', '▁Walking', '▁Trail', '</c>', '<extra_id_1>', '▁the', '</c>', '<extra_id_2>', '</s>']
>> ['<pad>', '<extra_id_0>', '▁treat', '▁Syria', '</c>', '<extra_id_1>', '</s>', '<pad>', '<pad>', '<pad>']
```
You have to pass `use_cache=False` to `generate()` in order to avoid caching during the generation procedure as caching is not available for parallel decoding.
Currently, parallel decoding is only supported for PyTorch (greedy search, greedy sampling, beam search, beam sampling) and JAX (greedy search and greedy sampling).
**Note on the beam search implementation**: my beam search implementation is slower than optimal.
This is because I use the structures provided by HuggingFace's implementation, namely, BeamScores and BeamHypotheses to store the beam search results for each chunk in the input.
In other words, my implementation computes independent "beams" for each chunk rather than for each input sequence.
It is possible to make it faster by using a custom BeamScores and BeamHypotheses class, but I haven't done that yet.
## Evaluation
See the notebook `evaluate_ct5.ipynb` for an example of how to evaluate cT5 in terms of accuracy and perplexity.
The notebook `profile.ipynb` shows how to profile the model to get runtimes.
Here is a comparison between cT5-small and T5-small on a subset of the WikiText-103 dataset using deterministic greedy search:
| Model | Exact match ↑ | Edit distance ratio ↑ | Perplexity ↓ | Time (seconds) ↓ |
|-------|---------------|----------------------|--------------|-----------------|
| T5-small | 0.11 | 0.60 | 2.22 | 44.71 |
| cT5-small | 0.09 | 0.58 | 1.48 | 10.63 |
On this toy dataset, cT5-small has a lower perplexity while being faster than T5-small. However, more experiments are needed for a rigorous evaluation.
If you are interested in applying cT5 to real data, please contact me.
|
gazzehamine/data-augmentation-whitenoise-timit-1155
|
gazzehamine
| 2022-07-30T17:51:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-29T14:52:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: data-augmentation-whitenoise-timit-1155
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data-augmentation-whitenoise-timit-1155
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5458
- Wer: 0.3324
## Model description
More information needed
## Intended uses & limitations
More information needed
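A transcription sketch (the audio path is a placeholder; decoding a file this way assumes `ffmpeg` is available, and the model expects 16 kHz speech):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gazzehamine/data-augmentation-whitenoise-timit-1155",
)
print(asr("example.wav")["text"])
```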
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5204 | 0.8 | 500 | 1.6948 | 0.9531 |
| 0.8435 | 1.6 | 1000 | 0.5367 | 0.5113 |
| 0.4449 | 2.4 | 1500 | 0.4612 | 0.4528 |
| 0.3182 | 3.21 | 2000 | 0.4314 | 0.4156 |
| 0.2328 | 4.01 | 2500 | 0.4250 | 0.4031 |
| 0.1897 | 4.81 | 3000 | 0.4630 | 0.4023 |
| 0.1628 | 5.61 | 3500 | 0.4445 | 0.3922 |
| 0.1472 | 6.41 | 4000 | 0.4452 | 0.3793 |
| 0.1293 | 7.21 | 4500 | 0.4715 | 0.3847 |
| 0.1176 | 8.01 | 5000 | 0.4267 | 0.3757 |
| 0.1023 | 8.81 | 5500 | 0.4494 | 0.3821 |
| 0.092 | 9.62 | 6000 | 0.4501 | 0.3704 |
| 0.0926 | 10.42 | 6500 | 0.4722 | 0.3643 |
| 0.0784 | 11.22 | 7000 | 0.5033 | 0.3765 |
| 0.077 | 12.02 | 7500 | 0.5165 | 0.3684 |
| 0.0704 | 12.82 | 8000 | 0.5138 | 0.3646 |
| 0.0599 | 13.62 | 8500 | 0.5664 | 0.3674 |
| 0.0582 | 14.42 | 9000 | 0.5188 | 0.3575 |
| 0.0526 | 15.22 | 9500 | 0.5605 | 0.3621 |
| 0.0512 | 16.03 | 10000 | 0.5400 | 0.3585 |
| 0.0468 | 16.83 | 10500 | 0.5471 | 0.3603 |
| 0.0445 | 17.63 | 11000 | 0.5168 | 0.3555 |
| 0.0411 | 18.43 | 11500 | 0.5772 | 0.3542 |
| 0.0394 | 19.23 | 12000 | 0.5079 | 0.3567 |
| 0.0354 | 20.03 | 12500 | 0.5427 | 0.3613 |
| 0.0325 | 20.83 | 13000 | 0.5532 | 0.3572 |
| 0.0318 | 21.63 | 13500 | 0.5223 | 0.3514 |
| 0.0269 | 22.44 | 14000 | 0.6002 | 0.3460 |
| 0.028 | 23.24 | 14500 | 0.5591 | 0.3432 |
| 0.0254 | 24.04 | 15000 | 0.5837 | 0.3432 |
| 0.0235 | 24.84 | 15500 | 0.5571 | 0.3397 |
| 0.0223 | 25.64 | 16000 | 0.5470 | 0.3383 |
| 0.0193 | 26.44 | 16500 | 0.5611 | 0.3367 |
| 0.0227 | 27.24 | 17000 | 0.5405 | 0.3342 |
| 0.0183 | 28.04 | 17500 | 0.5205 | 0.3330 |
| 0.017 | 28.85 | 18000 | 0.5512 | 0.3330 |
| 0.0167 | 29.65 | 18500 | 0.5458 | 0.3324 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingtweets/oooo_honey
|
huggingtweets
| 2022-07-30T16:30:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-30T16:18:37Z |
---
language: en
thumbnail: http://www.huggingtweets.com/oooo_honey/1659198603893/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442126088944062469/p-BikvvS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rock'n'Pomp</div>
<div style="text-align: center; font-size: 14px;">@oooo_honey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rock'n'Pomp.
| Data | Rock'n'Pomp |
| --- | --- |
| Tweets downloaded | 510 |
| Retweets | 100 |
| Short tweets | 48 |
| Tweets kept | 362 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28blz6k6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oooo_honey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35awxfoc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35awxfoc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/oooo_honey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
comodoro/testpyramidsrnd2
|
comodoro
| 2022-07-30T15:58:53Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-30T15:58:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: comodoro/testpyramidsrnd2
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
BigSalmon/InformalToFormalLincoln59Paraphrase
|
BigSalmon
| 2022-07-30T14:50:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-30T02:29:52Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln59Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln59Paraphrase")
```
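A possible way to run one of the prompt formats below end to end (the prompt and sampling settings are illustrative choices, not the author's):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln59Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln59Paraphrase")

# One of the informal-to-formal prompt formats documented below.
prompt = (
    "informal english: space is huge and needs to be explored.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```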
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
|
azimuth3d/rf_lunarlander
|
azimuth3d
| 2022-07-30T14:49:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-30T14:48:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 208.70 +/- 73.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
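Until the author fills in the snippet above, a minimal loading-and-evaluation sketch (the checkpoint filename is an assumption; check the repository's file list for the actual name):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; look up the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="azimuth3d/rf_lunarlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```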
|
constanter/PPO-LunarLander-v2
|
constanter
| 2022-07-30T13:34:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-30T13:33:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 268.37 +/- 20.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
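Pending the author's own snippet, a minimal loading-and-evaluation sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; look up the .zip actually stored in the repo.
checkpoint = load_from_hub(repo_id="constanter/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```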
|
SummerChiam/rust_image_classification_9
|
SummerChiam
| 2022-07-30T12:33:20Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T12:33:08Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_9
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9569620490074158
---
# rust_image_classification_9
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
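A quick way to try the classifier locally (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="SummerChiam/rust_image_classification_9")
# Replace with the path or URL of an image you want to score.
print(classifier("example.jpg"))
```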
## Example Images
#### nonrust

#### rust

|
SummerChiam/rust_image_classification_7
|
SummerChiam
| 2022-07-30T12:04:23Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T12:04:11Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_7
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9645569324493408
---
# rust_image_classification_7
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust

|
Perselope/thesis-audio-4
|
Perselope
| 2022-07-30T11:32:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-30T09:14:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: thesis-audio-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-audio-4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5585
- Wer: 0.3457
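A minimal transcription sketch for this checkpoint (the audio path is a placeholder; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Perselope/thesis-audio-4")
# Replace with a path to your own audio file.
print(asr("sample.wav"))
```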
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6041 | 1.0 | 500 | 2.7841 | 1.0 |
| 1.0447 | 2.01 | 1000 | 0.5261 | 0.5260 |
| 0.4404 | 3.01 | 1500 | 0.4699 | 0.4676 |
| 0.2945 | 4.02 | 2000 | 0.4232 | 0.4212 |
| 0.2223 | 5.02 | 2500 | 0.4348 | 0.4106 |
| 0.1849 | 6.02 | 3000 | 0.4559 | 0.4115 |
| 0.1566 | 7.03 | 3500 | 0.4942 | 0.3943 |
| 0.1389 | 8.03 | 4000 | 0.4142 | 0.3883 |
| 0.1244 | 9.04 | 4500 | 0.4382 | 0.3832 |
| 0.1028 | 10.04 | 5000 | 0.4644 | 0.3826 |
| 0.0972 | 11.04 | 5500 | 0.5119 | 0.3858 |
| 0.0868 | 12.05 | 6000 | 0.4886 | 0.3739 |
| 0.08 | 13.05 | 6500 | 0.5198 | 0.3736 |
| 0.0736 | 14.06 | 7000 | 0.4836 | 0.3672 |
| 0.0673 | 15.06 | 7500 | 0.5187 | 0.3769 |
| 0.0602 | 16.06 | 8000 | 0.6087 | 0.3800 |
| 0.0562 | 17.07 | 8500 | 0.5279 | 0.3630 |
| 0.0568 | 18.07 | 9000 | 0.5696 | 0.3700 |
| 0.047 | 19.08 | 9500 | 0.5964 | 0.3578 |
| 0.0426 | 20.08 | 10000 | 0.5801 | 0.3512 |
| 0.0411 | 21.08 | 10500 | 0.5889 | 0.3573 |
| 0.0349 | 22.09 | 11000 | 0.5654 | 0.3544 |
| 0.0342 | 23.09 | 11500 | 0.5610 | 0.3548 |
| 0.031 | 24.1 | 12000 | 0.5443 | 0.3468 |
| 0.0285 | 25.1 | 12500 | 0.5206 | 0.3469 |
| 0.0243 | 26.1 | 13000 | 0.5455 | 0.3484 |
| 0.0248 | 27.11 | 13500 | 0.5556 | 0.3474 |
| 0.0229 | 28.11 | 14000 | 0.5659 | 0.3457 |
| 0.0229 | 29.12 | 14500 | 0.5585 | 0.3457 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
NAACL2022/cogmen
|
NAACL2022
| 2022-07-30T11:16:43Z | 0 | 16 | null |
[
"arxiv:2205.02455",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2022-07-30T11:13:03Z |
---
license: cc-by-nc-4.0
---
## COGMEN; Official Pytorch Implementation
[](https://paperswithcode.com/sota/multimodal-emotion-recognition-on-iemocap?p=cogmen-contextualized-gnn-based-multimodal)
**CO**ntextualized **G**NN based **M**ultimodal **E**motion recognitio**N**

**Picture:** *COGMEN Model Architecture*
This repository contains the official Pytorch implementation of the following paper:
> **COGMEN: COntextualized GNN based Multimodal Emotion recognitioN**<br>
> **Paper:** https://arxiv.org/abs/2205.02455
> **Authors:** Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Vikram Singh, Ashutosh Modi<br>
>
> **Abstract:** *Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person’s emotions are influenced by the other speaker’s utterances and their own emotional state over the utterances. In this paper, we propose COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system that leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels*
## Requirements
- We use PyG (PyTorch Geometric) for the GNN component in our architecture. [RGCNConv](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.RGCNConv) and [TransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.TransformerConv)
- We use [comet](https://comet.ml) for logging all our experiments and its Bayesian optimizer for hyperparameter tuning.
- For textual features we use [SBERT](https://www.sbert.net/).
### Installations
- [Install PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html)
- [Install Comet.ml](https://www.comet.ml/docs/python-sdk/advanced/)
- [Install SBERT](https://www.sbert.net/)
## Preparing datasets for training

    python preprocess.py --dataset="iemocap_4"

## Training networks

    python train.py --dataset="iemocap_4" --modalities="atv" --from_begin --epochs=55

## Run Evaluation [](https://colab.research.google.com/drive/1biIvonBdJWo2TiYyTiQkxZ_V88JEXa_d?usp=sharing)

    python eval.py --dataset="iemocap_4" --modalities="atv"
## Citation
Please cite the paper using the following citation:

    @inproceedings{joshi-etal-2022-cogmen,
        title = "{COGMEN}: {CO}ntextualized {GNN} based Multimodal Emotion recognitio{N}",
        author = "Joshi, Abhinav and
          Bhat, Ashwani and
          Jain, Ayush and
          Singh, Atin and
          Modi, Ashutosh",
        booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
        month = jul,
        year = "2022",
        address = "Seattle, United States",
        publisher = "Association for Computational Linguistics",
        url = "https://aclanthology.org/2022.naacl-main.306",
        pages = "4148--4164",
        abstract = "Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. During a conversation involving various people, a person{'}s emotions are influenced by the other speaker{'}s utterances and their own emotional state over the utterances. In this paper, we propose COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system that leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels.",
    }
## Acknowledgments
The structure of our code is inspired by [pytorch-DialogueGCN-mianzhang](https://github.com/mianzhang/dialogue_gcn).
|
SummerChiam/rust_image_classification_2
|
SummerChiam
| 2022-07-30T10:05:44Z | 51 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T10:05:33Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.853164553642273
---
# rust_image_classification_2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust

|
SummerChiam/pond_image_classification_10
|
SummerChiam
| 2022-07-30T08:57:50Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T08:57:38Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_10
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9948979616165161
---
# pond_image_classification_10
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain

|
DrY/marian-finetuned-kde4-en-to-zh
|
DrY
| 2022-07-30T08:05:06Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-07-30T07:03:00Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-zh
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-zh_CN
split: train
args: en-zh_CN
metrics:
- name: Bleu
type: bleu
value: 40.66579724271391
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9338
- Bleu: 40.6658
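A minimal usage sketch for this checkpoint with the translation pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

translator = pipeline("translation", model="DrY/marian-finetuned-kde4-en-to-zh")
print(translator("Open the file menu and select Save As."))
```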
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
r3sist/q-Taxi-v3
|
r3sist
| 2022-07-30T07:56:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-30T07:55:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="r3sist/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
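If `load_from_hub` is not already defined in your environment, a minimal compatible sketch (assuming the model is stored as a pickled dict, as used above) could look like:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download and unpickle a Q-learning model dict from the Hugging Face Hub.
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```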
|
r3sist/qLearning-frozenLake
|
r3sist
| 2022-07-30T07:51:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-30T07:47:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qLearning-frozenLake
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="r3sist/qLearning-frozenLake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Migga/ViT-BERT-Chess-V4
|
Migga
| 2022-07-30T04:26:03Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-07-29T16:57:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: ViT-BERT-Chess-V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT-BERT-Chess-V4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.705 | 1.0 | 3895 | 3.5686 |
| 3.5139 | 2.0 | 7790 | 3.4288 |
| 3.4156 | 3.0 | 11685 | 3.3663 |
| 3.3661 | 4.0 | 15580 | 3.3331 |
| 3.3352 | 5.0 | 19475 | 3.3213 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
reachrkr/testpyramidsrnd
|
reachrkr
| 2022-07-30T02:46:18Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-28T06:59:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: reachrkr/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
huggingtweets/dags
|
huggingtweets
| 2022-07-30T01:32:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-30T01:30:26Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dags/1659144733206/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/722815128501026817/IMWCRzEn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DAGs</div>
<div style="text-align: center; font-size: 14px;">@dags</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DAGs.
| Data | DAGs |
| --- | --- |
| Tweets downloaded | 3003 |
| Retweets | 31 |
| Short tweets | 158 |
| Tweets kept | 2814 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qyk6uzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dags's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dags')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Frikallo/DeepDunk
|
Frikallo
| 2022-07-30T01:14:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-30T00:00:17Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DeepDunk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepDunk
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
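A minimal generation sketch for this checkpoint (prompt and sampling settings are arbitrary choices):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Frikallo/DeepDunk")
print(generator("The dunk of the year", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```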
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 1360794382
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
yanaiela/roberta-base-epoch_79
|
yanaiela
| 2022-07-29T23:08:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_79",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T18:02:16Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_79
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 79
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_79.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
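The checkpoint can also be loaded without the pipeline helper; a minimal sketch with the generic auto classes:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("yanaiela/roberta-base-epoch_79")
model = AutoModelForMaskedLM.from_pretrained("yanaiela/roberta-base-epoch_79")

inputs = tokenizer("Hello, I'm the <mask> RoBERTa-base language model", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Top-10 predictions for the masked position.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions].topk(10).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```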
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_78
|
yanaiela
| 2022-07-29T23:08:15Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_78",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T18:01:03Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_78
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 78
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_78.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_77
|
yanaiela
| 2022-07-29T23:07:53Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_77",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:59:57Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_77
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 77
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_77.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_71
|
yanaiela
| 2022-07-29T23:05:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_71",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:53:19Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_71
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 71
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_71.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_70
|
yanaiela
| 2022-07-29T23:05:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_70",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:52:21Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_70
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 70
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_70.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_69
|
yanaiela
| 2022-07-29T23:04:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_69",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:50:57Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_69
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 69
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_69.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_67
|
yanaiela
| 2022-07-29T23:04:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_67",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:48:39Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_67
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 67
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) so that the training dynamics of such models, as well as other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_67.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_65
|
yanaiela
| 2022-07-29T23:03:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_65",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:45:42Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_65
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 65
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_65.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_65', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
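Because the accompanying paper looks at the model's factual predictions, it can also be informative to inspect the scores a checkpoint assigns to fill-in candidates for a factual-style prompt. A small illustrative sketch (the prompt is only an example):
```
from transformers import pipeline

fill = pipeline("fill-mask", model="yanaiela/roberta-base-epoch_65", device=-1, top_k=5)
for pred in fill("The capital of France is <mask>."):
    print(f"{pred['token_str'].strip():>12}  {pred['score']:.4f}")
```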
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_64
|
yanaiela
| 2022-07-29T23:02:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_64",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:43:33Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_64
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 64
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_64.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_64', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_63
|
yanaiela
| 2022-07-29T23:02:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_63",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:42:21Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_63
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 63
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_63.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_63', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_60
|
yanaiela
| 2022-07-29T23:01:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_60",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:36:36Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_60
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 60
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_60.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_60', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_59
|
yanaiela
| 2022-07-29T23:01:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_59",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:35:53Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_59
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 59
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_59.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_59', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_56
|
yanaiela
| 2022-07-29T22:59:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_56",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:33:29Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_56
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 56
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_56.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_56', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_52
|
yanaiela
| 2022-07-29T22:58:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_52",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:30:02Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_52
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 52
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_52.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_52', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_49
|
yanaiela
| 2022-07-29T22:57:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_49",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:27:40Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_49
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 49
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_49.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_49', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
mrm8488/q-Taxi-v3-2
|
mrm8488
| 2022-07-29T22:56:58Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-29T22:56:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mrm8488/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
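For reference, here is a minimal sketch of what the evaluation could look like without the course helpers, assuming the pickled dictionary exposes the `env_id`, `max_steps`, `n_eval_episodes` and `qtable` fields used above and the classic (pre-0.26) Gym step API; this is illustrative, not the exact implementation behind `evaluate_agent`:
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the stored model dictionary (field names taken from the snippet above)
path = hf_hub_download(repo_id="mrm8488/q-Taxi-v3-2", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
episode_rewards = []
for _ in range(model["n_eval_episodes"]):  # per-episode seeding via model["eval_seed"] omitted for brevity
    state = env.reset()
    total_reward = 0.0
    for _ in range(model["max_steps"]):
        action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
        state, reward, done, info = env.step(action)     # classic Gym 4-tuple API
        total_reward += reward
        if done:
            break
    episode_rewards.append(total_reward)

print(f"mean_reward={np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```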
|
yanaiela/roberta-base-epoch_48
|
yanaiela
| 2022-07-29T22:56:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_48",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:26:55Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_48
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 48
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_48.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_48', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_44
|
yanaiela
| 2022-07-29T22:55:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_44",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:24:01Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_44
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 44
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_44.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_44', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_42
|
yanaiela
| 2022-07-29T22:54:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_42",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:22:35Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_42
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 42
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_42.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_42', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_41
|
yanaiela
| 2022-07-29T22:53:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_41",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:21:50Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_41
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 41
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_41.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_41', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_40
|
yanaiela
| 2022-07-29T22:53:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_40",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:21:04Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_40
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 40
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_40.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_40', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_38
|
yanaiela
| 2022-07-29T22:52:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_38",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:19:21Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_38
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 38
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We provide all 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, as well as other use cases, can be studied.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_38.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data with the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K.
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_38', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|