| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| KDHyun08/TAACO_STS | KDHyun08 | 2022-08-01T05:00:14Z | 2,406 | 2 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "TAACO", "ko", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-07-25T08:19:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
- TAACO
language: ko
---
# TAACO_Similarity
This model is based on [Sentence-transformers](https://www.SBERT.net) and was trained on the KLUE STS (Sentence Textual Similarity) dataset.
It was built to measure semantic cohesion between sentences, one of the indices of K-TAACO (working title), a cohesion-measurement tool for Korean text that the author is developing.
Further training on additional data, such as the sentence-similarity data of the Modu Corpus, is planned.
## Train Data
- KLUE-sts-v1.1._train.json
- NLI-sts-train.tsv
## Usage (Sentence-Transformers)
To use this model, you need to install [Sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
To use the model, refer to the code below.
```python
from sentence_transformers import SentenceTransformer, models
sentences = ["This is an example sentence", "Each sentence is converted"]
embedding_model = models.Transformer(
model_name_or_path="KDHyun08/TAACO_STS",
max_seq_length=256,
do_lower_case=True
)
pooling_model = models.Pooling(
embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False,
)
model = SentenceTransformer(modules=[embedding_model, pooling_model])
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (Comparing similarity between actual sentences)
After installing [Sentence-transformers](https://www.SBERT.net), you can compare the similarity between sentences as shown below.
The `query` variable holds the source sentence used as the basis for comparison, and the sentences to compare against it are passed as a list in `docs`.
```python
import torch
from sentence_transformers import SentenceTransformer, models, util
embedding_model = models.Transformer(
model_name_or_path="KDHyun08/TAACO_STS",
max_seq_length=256,
do_lower_case=True
)
pooling_model = models.Pooling(
embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=True,
pooling_mode_cls_token=False,
pooling_mode_max_tokens=False,
)
model = SentenceTransformer(modules=[embedding_model, pooling_model])
docs = ['์ด์ ๋ ์๋ด์ ์์ผ์ด์๋ค', '์์ผ์ ๋ง์ดํ์ฌ ์์นจ์ ์ค๋นํ๊ฒ ๋ค๊ณ ์ค์ 8์ 30๋ถ๋ถํฐ ์์์ ์ค๋นํ์๋ค. ์ฃผ๋ ๋ฉ๋ด๋ ์คํ์ดํฌ์ ๋์ง๋ณถ์, ๋ฏธ์ญ๊ตญ, ์ก์ฑ, ์์ผ ๋ฑ์ด์๋ค', '์คํ์ดํฌ๋ ์์ฃผ ํ๋ ์์์ด์ด์ ์์ ์ด ์ค๋นํ๋ ค๊ณ ํ๋ค', '์๋ค๋ 1๋ถ์ฉ 3๋ฒ ๋ค์ง๊ณ ๋์คํ์ ์ ํ๋ฉด ์ก์ฆ์ด ๊ฐ๋ํ ์คํ์ดํฌ๊ฐ ์ค๋น๋๋ค', '์๋ด๋ ๊ทธ๋ฐ ์คํ์ดํฌ๋ฅผ ์ข์ํ๋ค. ๊ทธ๋ฐ๋ฐ ์์๋ ๋ชปํ ์ผ์ด ๋ฒ์ด์ง๊ณ ๋ง์๋ค', '๋ณดํต ์์ฆ๋์ด ๋์ง ์์ ์์ก์ ์ฌ์ ์คํ์ดํฌ๋ฅผ ํ๋๋ฐ, ์ด๋ฒ์๋ ์์ฆ๋์ด ๋ ๋ถ์ฑ์ด์ ๊ตฌ์ํด์ ํ๋ค', '๊ทธ๋ฐ๋ฐ ์ผ์ด์ค ์์ ๋ฐฉ๋ถ์ ๊ฐ ๋ค์ด์๋ ๊ฒ์ ์ธ์งํ์ง ๋ชปํ๊ณ ๋ฐฉ๋ถ์ ์ ๋์์ ํ๋ผ์ดํฌ์ ์ฌ๋ ค๋์ ๊ฒ์ด๋ค', '๊ทธ๊ฒ๋ ์ธ์ง ๋ชปํ ์ฒด... ์๋ฉด์ ์ผ ๋ถ์ 1๋ถ์ ๊ตฝ๊ณ ๋ค์ง๋ ์๊ฐ ๋ฐฉ๋ถ์ ๊ฐ ํจ๊ป ๊ตฌ์ด์ง ๊ฒ์ ์์๋ค', '์๋ด์ ์์ผ์ด๋ผ ๋ง์๊ฒ ๊ตฌ์๋ณด๊ณ ์ถ์๋๋ฐ ์ด์ฒ๊ตฌ๋์๋ ์ํฉ์ด ๋ฐ์ํ ๊ฒ์ด๋ค', '๋ฐฉ๋ถ์ ๊ฐ ์ผ ๋ถ์ ๋น์์ ๊ทธ๋ฐ์ง ๋ฌผ์ฒ๋ผ ํ๋ฌ๋ด๋ ธ๋ค', ' ๊ณ ๋ฏผ์ ํ๋ค. ๋ฐฉ๋ถ์ ๊ฐ ๋ฌป์ ๋ถ๋ฌธ๋ง ์ ๊ฑฐํ๊ณ ๋ค์ ๊ตฌ์ธ๊น ํ๋๋ฐ ๋ฐฉ๋ถ์ ์ ์ ๋ ๋จน์ง ๋ง๋ผ๋ ๋ฌธ๊ตฌ๊ฐ ์์ด์ ์๊น์ง๋ง ๋ฒ๋ฆฌ๋ ๋ฐฉํฅ์ ํ๋ค', '๋๋ฌด๋ ์ํ๊น์ ๋ค', '์์นจ ์ผ์ฐ ์๋ด๊ฐ ์ข์ํ๋ ์คํ์ดํฌ๋ฅผ ์ค๋นํ๊ณ ๊ทธ๊ฒ์ ๋ง์๊ฒ ๋จน๋ ์๋ด์ ๋ชจ์ต์ ๋ณด๊ณ ์ถ์๋๋ฐ ์ ํ ์๊ฐ์ง๋ ๋ชปํ ์ํฉ์ด ๋ฐ์ํด์... ํ์ง๋ง ์ ์ ์ ์ถ์ค๋ฅด๊ณ ๋ฐ๋ก ๋ค๋ฅธ ๋ฉ๋ด๋ก ๋ณ๊ฒฝํ๋ค', '์์ผ, ์์์ง ์ผ์ฑ๋ณถ์..', '์๋ด๊ฐ ์ข์ํ๋์ง ๋ชจ๋ฅด๊ฒ ์ง๋ง ๋์ฅ๊ณ ์์ ์๋ ํ๋ํฌ์์ธ์ง๋ฅผ ๋ณด๋ ๋ฐ๋ก ์์ผ๋ฅผ ํด์ผ๊ฒ ๋ค๋ ์๊ฐ์ด ๋ค์๋ค. ์์์ ์ฑ๊ณต์ ์ผ๋ก ์์ฑ์ด ๋์๋ค', '40๋ฒ์งธ๋ฅผ ๋ง์ดํ๋ ์๋ด์ ์์ผ์ ์ฑ๊ณต์ ์ผ๋ก ์ค๋น๊ฐ ๋์๋ค', '๋ง์๊ฒ ๋จน์ด ์ค ์๋ด์๊ฒ๋ ๊ฐ์ฌํ๋ค', '๋งค๋์๋ด์ ์์ผ์ ๋ง์ดํ๋ฉด ์์นจ๋ง๋ค ์์ผ์ ์ฐจ๋ ค์ผ๊ฒ ๋ค. ์ค๋๋ ์ฆ๊ฑฐ์ด ํ๋ฃจ๊ฐ ๋์์ผ๋ฉด ์ข๊ฒ ๋ค', '์์ผ์ด๋๊น~']
# Encode each sentence into a vector
document_embeddings = model.encode(docs)
query = '์์ผ์ ๋ง์ดํ์ฌ ์์นจ์ ์ค๋นํ๊ฒ ๋ค๊ณ ์ค์ 8์ 30๋ถ๋ถํฐ ์์์ ์ค๋นํ์๋ค'
query_embedding = model.encode(query)
top_k = min(10, len(docs))
# Compute cosine similarity between the query and every document
cos_scores = util.pytorch_cos_sim(query_embedding, document_embeddings)[0]
# Extract the top-k sentences by cosine similarity
top_results = torch.topk(cos_scores, k=top_k)
print(f"์
๋ ฅ ๋ฌธ์ฅ: {query}")
print(f"\n<์
๋ ฅ ๋ฌธ์ฅ๊ณผ ์ ์ฌํ {top_k} ๊ฐ์ ๋ฌธ์ฅ>\n")
for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])):
print(f"{i+1}: {docs[idx]} {'(์ ์ฌ๋: {:.4f})'.format(score)}\n")
```
## Evaluation Results
Running the Usage example above produces the output below. The closer the score is to 1, the more similar the two sentences.
```
์๋ ฅ ๋ฌธ์ฅ: ์์ผ์ ๋ง์ดํ์ฌ ์์นจ์ ์ค๋นํ๊ฒ ๋ค๊ณ ์ค์ 8์ 30๋ถ๋ถํฐ ์์์ ์ค๋นํ์๋ค
<์๋ ฅ ๋ฌธ์ฅ๊ณผ ์ ์ฌํ 10 ๊ฐ์ ๋ฌธ์ฅ>
1: ์์ผ์ ๋ง์ดํ์ฌ ์์นจ์ ์ค๋นํ๊ฒ ๋ค๊ณ ์ค์ 8์ 30๋ถ๋ถํฐ ์์์ ์ค๋นํ์๋ค. ์ฃผ๋ ๋ฉ๋ด๋ ์คํ์ดํฌ์ ๋์ง๋ณถ์, ๋ฏธ์ญ๊ตญ, ์ก์ฑ, ์์ผ ๋ฑ์ด์๋ค (์ ์ฌ๋: 0.6687)
2: ๋งค๋์๋ด์ ์์ผ์ ๋ง์ดํ๋ฉด ์์นจ๋ง๋ค ์์ผ์ ์ฐจ๋ ค์ผ๊ฒ ๋ค. ์ค๋๋ ์ฆ๊ฑฐ์ด ํ๋ฃจ๊ฐ ๋์์ผ๋ฉด ์ข๊ฒ ๋ค (์ ์ฌ๋: 0.6468)
3: 40๋ฒ์งธ๋ฅผ ๋ง์ดํ๋ ์๋ด์ ์์ผ์ ์ฑ๊ณต์ ์ผ๋ก ์ค๋น๊ฐ ๋์๋ค (์ ์ฌ๋: 0.4647)
4: ์๋ด์ ์์ผ์ด๋ผ ๋ง์๊ฒ ๊ตฌ์๋ณด๊ณ ์ถ์๋๋ฐ ์ด์ฒ๊ตฌ๋์๋ ์ํฉ์ด ๋ฐ์ํ ๊ฒ์ด๋ค (์ ์ฌ๋: 0.4469)
5: ์์ผ์ด๋๊น~ (์ ์ฌ๋: 0.4218)
6: ์ด์ ๋ ์๋ด์ ์์ผ์ด์๋ค (์ ์ฌ๋: 0.4192)
7: ์์นจ ์ผ์ฐ ์๋ด๊ฐ ์ข์ํ๋ ์คํ์ดํฌ๋ฅผ ์ค๋นํ๊ณ ๊ทธ๊ฒ์ ๋ง์๊ฒ ๋จน๋ ์๋ด์ ๋ชจ์ต์ ๋ณด๊ณ ์ถ์๋๋ฐ ์ ํ ์๊ฐ์ง๋ ๋ชปํ ์ํฉ์ด ๋ฐ์ํด์... ํ์ง๋ง ์ ์ ์ ์ถ์ค๋ฅด๊ณ ๋ฐ๋ก ๋ค๋ฅธ ๋ฉ๋ด๋ก ๋ณ๊ฒฝํ๋ค (์ ์ฌ๋: 0.4156)
8: ๋ง์๊ฒ ๋จน์ด ์ค ์๋ด์๊ฒ๋ ๊ฐ์ฌํ๋ค (์ ์ฌ๋: 0.3093)
9: ์๋ด๊ฐ ์ข์ํ๋์ง ๋ชจ๋ฅด๊ฒ ์ง๋ง ๋์ฅ๊ณ ์์ ์๋ ํ๋ํฌ์์ธ์ง๋ฅผ ๋ณด๋ ๋ฐ๋ก ์์ผ๋ฅผ ํด์ผ๊ฒ ๋ค๋ ์๊ฐ์ด ๋ค์๋ค. ์์์ ์ฑ๊ณต์ ์ผ๋ก ์์ฑ์ด ๋์๋ค (์ ์ฌ๋: 0.2259)
10: ์๋ด๋ ๊ทธ๋ฐ ์คํ์ดํฌ๋ฅผ ์ข์ํ๋ค. ๊ทธ๋ฐ๋ฐ ์์๋ ๋ชปํ ์ผ์ด ๋ฒ์ด์ง๊ณ ๋ง์๋ค (์ ์ฌ๋: 0.1967)
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 142 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
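For reference, a minimal sketch (not part of the original card) of how the `CosineSimilarityLoss` and the `fit()` parameters above would be wired together with sentence-transformers; the training pairs shown are placeholders, not the actual KLUE/NLI data:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("KDHyun08/TAACO_STS")

# Placeholder STS-style pairs: the label is a similarity score scaled to [0, 1].
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A plane is taking off."], label=0.05),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    scheduler="WarmupLinear",
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```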
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| wenkai-li/distilroberta-base-finetuned-marktextepoch_n200 | wenkai-li | 2022-08-01T04:07:13Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-07-31T18:33:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-marktextepoch_n200
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-marktextepoch_n200
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
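The hyperparameters above correspond to a standard `transformers` `Trainer` masked-language-modeling run; a minimal, hedged sketch follows (the corpus and tokenization are placeholders, since the training data is not documented in this card):
```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")

# Placeholder corpus; the actual training text is not described in the card.
corpus = Dataset.from_dict({"text": ["An example sentence for MLM fine-tuning."]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-marktextepoch_n200",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=200,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    eval_dataset=tokenized,  # placeholder: reuses the toy corpus for evaluation
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
)
trainer.train()
```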
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.2313 | 1.0 | 1500 | 2.1592 |
| 2.1731 | 2.0 | 3000 | 2.1277 |
| 2.153 | 3.0 | 4500 | 2.1144 |
| 2.1469 | 4.0 | 6000 | 2.1141 |
| 2.1281 | 5.0 | 7500 | 2.1374 |
| 2.1043 | 6.0 | 9000 | 2.1069 |
| 2.0834 | 7.0 | 10500 | 2.0993 |
| 2.0602 | 8.0 | 12000 | 2.0817 |
| 2.024 | 9.0 | 13500 | 2.0918 |
| 2.0261 | 10.0 | 15000 | 2.0793 |
| 1.9889 | 11.0 | 16500 | 2.0567 |
| 1.9915 | 12.0 | 18000 | 2.0700 |
| 1.9532 | 13.0 | 19500 | 2.0436 |
| 1.9362 | 14.0 | 21000 | 2.0596 |
| 1.9024 | 15.0 | 22500 | 2.0189 |
| 1.9262 | 16.0 | 24000 | 2.0435 |
| 1.8883 | 17.0 | 25500 | 2.0430 |
| 1.8867 | 18.0 | 27000 | 2.0416 |
| 1.8807 | 19.0 | 28500 | 2.0051 |
| 1.8517 | 20.0 | 30000 | 2.0338 |
| 1.8357 | 21.0 | 31500 | 2.0166 |
| 1.8241 | 22.0 | 33000 | 2.0355 |
| 1.7985 | 23.0 | 34500 | 2.0073 |
| 1.8061 | 24.0 | 36000 | 2.0473 |
| 1.7996 | 25.0 | 37500 | 2.0446 |
| 1.7786 | 26.0 | 39000 | 2.0086 |
| 1.771 | 27.0 | 40500 | 2.0294 |
| 1.7549 | 28.0 | 42000 | 2.0127 |
| 1.7726 | 29.0 | 43500 | 2.0191 |
| 1.7275 | 30.0 | 45000 | 2.0182 |
| 1.708 | 31.0 | 46500 | 2.0130 |
| 1.7345 | 32.0 | 48000 | 2.0155 |
| 1.7044 | 33.0 | 49500 | 1.9898 |
| 1.7126 | 34.0 | 51000 | 2.0166 |
| 1.698 | 35.0 | 52500 | 1.9879 |
| 1.6637 | 36.0 | 54000 | 2.0311 |
| 1.6854 | 37.0 | 55500 | 2.0355 |
| 1.6585 | 38.0 | 57000 | 2.0094 |
| 1.6418 | 39.0 | 58500 | 2.0042 |
| 1.667 | 40.0 | 60000 | 2.0116 |
| 1.6507 | 41.0 | 61500 | 2.0095 |
| 1.622 | 42.0 | 63000 | 2.0158 |
| 1.6381 | 43.0 | 64500 | 2.0339 |
| 1.6099 | 44.0 | 66000 | 2.0082 |
| 1.6076 | 45.0 | 67500 | 2.0207 |
| 1.5805 | 46.0 | 69000 | 2.0172 |
| 1.5862 | 47.0 | 70500 | 2.0132 |
| 1.5806 | 48.0 | 72000 | 2.0198 |
| 1.574 | 49.0 | 73500 | 2.0181 |
| 1.5718 | 50.0 | 75000 | 2.0086 |
| 1.5591 | 51.0 | 76500 | 1.9832 |
| 1.5468 | 52.0 | 78000 | 2.0167 |
| 1.5637 | 53.0 | 79500 | 2.0118 |
| 1.5117 | 54.0 | 81000 | 2.0290 |
| 1.5363 | 55.0 | 82500 | 2.0011 |
| 1.4976 | 56.0 | 84000 | 2.0160 |
| 1.5129 | 57.0 | 85500 | 2.0224 |
| 1.4964 | 58.0 | 87000 | 2.0219 |
| 1.4906 | 59.0 | 88500 | 2.0212 |
| 1.4941 | 60.0 | 90000 | 2.0255 |
| 1.4876 | 61.0 | 91500 | 2.0116 |
| 1.4837 | 62.0 | 93000 | 2.0176 |
| 1.4661 | 63.0 | 94500 | 2.0388 |
| 1.4634 | 64.0 | 96000 | 2.0165 |
| 1.4449 | 65.0 | 97500 | 2.0185 |
| 1.468 | 66.0 | 99000 | 2.0246 |
| 1.4567 | 67.0 | 100500 | 2.0244 |
| 1.4367 | 68.0 | 102000 | 2.0093 |
| 1.4471 | 69.0 | 103500 | 2.0101 |
| 1.4255 | 70.0 | 105000 | 2.0248 |
| 1.4203 | 71.0 | 106500 | 2.0224 |
| 1.42 | 72.0 | 108000 | 2.0279 |
| 1.4239 | 73.0 | 109500 | 2.0295 |
| 1.4126 | 74.0 | 111000 | 2.0196 |
| 1.4038 | 75.0 | 112500 | 2.0225 |
| 1.3874 | 76.0 | 114000 | 2.0456 |
| 1.3758 | 77.0 | 115500 | 2.0423 |
| 1.3924 | 78.0 | 117000 | 2.0184 |
| 1.3744 | 79.0 | 118500 | 2.0555 |
| 1.3622 | 80.0 | 120000 | 2.0387 |
| 1.3653 | 81.0 | 121500 | 2.0344 |
| 1.3724 | 82.0 | 123000 | 2.0184 |
| 1.3684 | 83.0 | 124500 | 2.0285 |
| 1.3576 | 84.0 | 126000 | 2.0544 |
| 1.348 | 85.0 | 127500 | 2.0412 |
| 1.3387 | 86.0 | 129000 | 2.0459 |
| 1.3416 | 87.0 | 130500 | 2.0329 |
| 1.3421 | 88.0 | 132000 | 2.0274 |
| 1.3266 | 89.0 | 133500 | 2.0233 |
| 1.3183 | 90.0 | 135000 | 2.0319 |
| 1.322 | 91.0 | 136500 | 2.0080 |
| 1.32 | 92.0 | 138000 | 2.0472 |
| 1.304 | 93.0 | 139500 | 2.0538 |
| 1.3061 | 94.0 | 141000 | 2.0340 |
| 1.3199 | 95.0 | 142500 | 2.0456 |
| 1.2985 | 96.0 | 144000 | 2.0167 |
| 1.3021 | 97.0 | 145500 | 2.0204 |
| 1.2787 | 98.0 | 147000 | 2.0645 |
| 1.2879 | 99.0 | 148500 | 2.0345 |
| 1.2695 | 100.0 | 150000 | 2.0340 |
| 1.2884 | 101.0 | 151500 | 2.0602 |
| 1.2747 | 102.0 | 153000 | 2.0667 |
| 1.2607 | 103.0 | 154500 | 2.0551 |
| 1.2551 | 104.0 | 156000 | 2.0544 |
| 1.2557 | 105.0 | 157500 | 2.0553 |
| 1.2495 | 106.0 | 159000 | 2.0370 |
| 1.26 | 107.0 | 160500 | 2.0568 |
| 1.2499 | 108.0 | 162000 | 2.0427 |
| 1.2438 | 109.0 | 163500 | 2.0184 |
| 1.2496 | 110.0 | 165000 | 2.0227 |
| 1.2332 | 111.0 | 166500 | 2.0621 |
| 1.2231 | 112.0 | 168000 | 2.0661 |
| 1.211 | 113.0 | 169500 | 2.0673 |
| 1.217 | 114.0 | 171000 | 2.0544 |
| 1.2206 | 115.0 | 172500 | 2.0542 |
| 1.2083 | 116.0 | 174000 | 2.0592 |
| 1.2205 | 117.0 | 175500 | 2.0451 |
| 1.2065 | 118.0 | 177000 | 2.0402 |
| 1.1988 | 119.0 | 178500 | 2.0615 |
| 1.218 | 120.0 | 180000 | 2.0374 |
| 1.1917 | 121.0 | 181500 | 2.0349 |
| 1.1854 | 122.0 | 183000 | 2.0790 |
| 1.1819 | 123.0 | 184500 | 2.0766 |
| 1.2029 | 124.0 | 186000 | 2.0364 |
| 1.1851 | 125.0 | 187500 | 2.0568 |
| 1.1734 | 126.0 | 189000 | 2.0445 |
| 1.1701 | 127.0 | 190500 | 2.0770 |
| 1.1824 | 128.0 | 192000 | 2.0566 |
| 1.1604 | 129.0 | 193500 | 2.0542 |
| 1.1733 | 130.0 | 195000 | 2.0525 |
| 1.1743 | 131.0 | 196500 | 2.0577 |
| 1.1692 | 132.0 | 198000 | 2.0723 |
| 1.1519 | 133.0 | 199500 | 2.0567 |
| 1.1401 | 134.0 | 201000 | 2.0795 |
| 1.1692 | 135.0 | 202500 | 2.0625 |
| 1.157 | 136.0 | 204000 | 2.0793 |
| 1.1495 | 137.0 | 205500 | 2.0782 |
| 1.1479 | 138.0 | 207000 | 2.0392 |
| 1.1247 | 139.0 | 208500 | 2.0796 |
| 1.143 | 140.0 | 210000 | 2.0369 |
| 1.1324 | 141.0 | 211500 | 2.0699 |
| 1.1341 | 142.0 | 213000 | 2.0694 |
| 1.1317 | 143.0 | 214500 | 2.0569 |
| 1.1254 | 144.0 | 216000 | 2.0545 |
| 1.1156 | 145.0 | 217500 | 2.0708 |
| 1.1353 | 146.0 | 219000 | 2.0767 |
| 1.1312 | 147.0 | 220500 | 2.0523 |
| 1.1224 | 148.0 | 222000 | 2.0565 |
| 1.106 | 149.0 | 223500 | 2.0696 |
| 1.1069 | 150.0 | 225000 | 2.0478 |
| 1.1011 | 151.0 | 226500 | 2.0475 |
| 1.0985 | 152.0 | 228000 | 2.0888 |
| 1.1107 | 153.0 | 229500 | 2.0756 |
| 1.1058 | 154.0 | 231000 | 2.0812 |
| 1.1027 | 155.0 | 232500 | 2.0597 |
| 1.0996 | 156.0 | 234000 | 2.0684 |
| 1.0987 | 157.0 | 235500 | 2.0629 |
| 1.0881 | 158.0 | 237000 | 2.0701 |
| 1.1143 | 159.0 | 238500 | 2.0740 |
| 1.0823 | 160.0 | 240000 | 2.0869 |
| 1.0925 | 161.0 | 241500 | 2.0567 |
| 1.1034 | 162.0 | 243000 | 2.0833 |
| 1.0759 | 163.0 | 244500 | 2.0585 |
| 1.0998 | 164.0 | 246000 | 2.0293 |
| 1.0891 | 165.0 | 247500 | 2.0608 |
| 1.1036 | 166.0 | 249000 | 2.0831 |
| 1.076 | 167.0 | 250500 | 2.0979 |
| 1.0895 | 168.0 | 252000 | 2.0882 |
| 1.0825 | 169.0 | 253500 | 2.0742 |
| 1.0793 | 170.0 | 255000 | 2.0841 |
| 1.079 | 171.0 | 256500 | 2.0829 |
| 1.0653 | 172.0 | 258000 | 2.0888 |
| 1.0834 | 173.0 | 259500 | 2.0784 |
| 1.0721 | 174.0 | 261000 | 2.0859 |
| 1.0712 | 175.0 | 262500 | 2.0810 |
| 1.0494 | 176.0 | 264000 | 2.0605 |
| 1.0654 | 177.0 | 265500 | 2.0623 |
| 1.077 | 178.0 | 267000 | 2.0756 |
| 1.056 | 179.0 | 268500 | 2.0782 |
| 1.0523 | 180.0 | 270000 | 2.0966 |
| 1.0656 | 181.0 | 271500 | 2.0750 |
| 1.0636 | 182.0 | 273000 | 2.0769 |
| 1.0851 | 183.0 | 274500 | 2.0872 |
| 1.0562 | 184.0 | 276000 | 2.0893 |
| 1.0534 | 185.0 | 277500 | 2.0661 |
| 1.0514 | 186.0 | 279000 | 2.0712 |
| 1.062 | 187.0 | 280500 | 2.0769 |
| 1.0683 | 188.0 | 282000 | 2.0765 |
| 1.0606 | 189.0 | 283500 | 2.0735 |
| 1.0555 | 190.0 | 285000 | 2.0710 |
| 1.0568 | 191.0 | 286500 | 2.0860 |
| 1.0502 | 192.0 | 288000 | 2.0587 |
| 1.0437 | 193.0 | 289500 | 2.0998 |
| 1.0534 | 194.0 | 291000 | 2.0418 |
| 1.062 | 195.0 | 292500 | 2.0724 |
| 1.0457 | 196.0 | 294000 | 2.0612 |
| 1.0501 | 197.0 | 295500 | 2.1012 |
| 1.0728 | 198.0 | 297000 | 2.0721 |
| 1.0413 | 199.0 | 298500 | 2.0535 |
| 1.0461 | 200.0 | 300000 | 2.0531 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| Izarel/distilbert-base-uncased_fine_tuned_body_text | Izarel | 2022-08-01T03:52:20Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T19:03:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: distilbert-base-uncased_fine_tuned_body_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fine_tuned_body_text
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2153
- Accuracy: 0.8827265261428963
- Recall: 0.8641975308641975
- Precision: 0.8900034993584509
- F1: 0.8769106999195494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------------------------------:|:---------------------------------:|:--------------------------:|
| 0.3056 | 1.0 | 2284 | 0.3040 | {'accuracy': 0.8874897344648235} | {'recall': 0.8466417487824216} | {'precision': 0.914261252446184} | {'f1': 0.8791531902381653} |
| 0.2279 | 2.0 | 4568 | 0.2891 | {'accuracy': 0.8908294552422666} | {'recall': 0.8606863744478424} | {'precision': 0.9086452230060983} | {'f1': 0.8840158213122382} |
| 0.1467 | 3.0 | 6852 | 0.3580 | {'accuracy': 0.8882562277580072} | {'recall': 0.8452825914599615} | {'precision': 0.9170557876628164} | {'f1': 0.8797076678257796} |
| 0.0921 | 4.0 | 9136 | 0.4560 | {'accuracy': 0.8754448398576512} | {'recall': 0.8948918337297542} | {'precision': 0.8543468858131488} | {'f1': 0.8741494717043756} |
| 0.0587 | 5.0 | 11420 | 0.5701 | {'accuracy': 0.8768135778811935} | {'recall': 0.8139087099331748} | {'precision': 0.9221095855254716} | {'f1': 0.8646372277704246} |
| 0.0448 | 6.0 | 13704 | 0.6738 | {'accuracy': 0.8767040788393101} | {'recall': 0.8794880507418734} | {'precision': 0.8673070479168994} | {'f1': 0.873355078168935} |
| 0.0289 | 7.0 | 15988 | 0.7965 | {'accuracy': 0.8798248015329866} | {'recall': 0.8491335372069317} | {'precision': 0.8967703349282297} | {'f1': 0.8723020536389552} |
| 0.0214 | 8.0 | 18272 | 0.8244 | {'accuracy': 0.8811387900355871} | {'recall': 0.8576282704723072} | {'precision': 0.8922931887815225} | {'f1': 0.8746173837712965} |
| 0.0147 | 9.0 | 20556 | 0.8740 | {'accuracy': 0.8806460443471119} | {'recall': 0.8669158455091177} | {'precision': 0.8839357893521191} | {'f1': 0.8753430924062213} |
| 0.0099 | 10.0 | 22840 | 0.9716 | {'accuracy': 0.8788940596769779} | {'recall': 0.8694076339336279} | {'precision': 0.8787635947338294} | {'f1': 0.8740605784559327} |
| 0.0092 | 11.0 | 25124 | 1.0296 | {'accuracy': 0.8822885299753627} | {'recall': 0.8669158455091177} | {'precision': 0.8870089233978444} | {'f1': 0.876847290640394} |
| 0.0039 | 12.0 | 27408 | 1.0974 | {'accuracy': 0.8787845606350945} | {'recall': 0.8628383735417374} | {'precision': 0.8836561883772184} | {'f1': 0.8731232091690544} |
| 0.0053 | 13.0 | 29692 | 1.0833 | {'accuracy': 0.8799890500958116} | {'recall': 0.8503794314191868} | {'precision': 0.8960496479293472} | {'f1': 0.8726173872617387} |
| 0.0032 | 14.0 | 31976 | 1.1731 | {'accuracy': 0.8813030385984123} | {'recall': 0.8705402650356778} | {'precision': 0.8823326828148318} | {'f1': 0.8763968072976055} |
| 0.0017 | 15.0 | 34260 | 1.2153 | {'accuracy': 0.8827265261428963} | {'recall': 0.8641975308641975} | {'precision': 0.8900034993584509} | {'f1': 0.8769106999195494} |
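The dictionary-valued metric cells in the table above are the raw return values of Hugging Face metric objects; a minimal sketch (assuming the `evaluate` library and a binary-label setup, neither of which is documented in the card) of a `compute_metrics` function that would produce them:
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
recall = evaluate.load("recall")
precision = evaluate.load("precision")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair handed over by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels),
        "recall": recall.compute(predictions=preds, references=labels),
        "precision": precision.compute(predictions=preds, references=labels),
        "f1": f1.compute(predictions=preds, references=labels),
    }
```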
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| keithanpai/resnet-50-finetuned-eurosat | keithanpai | 2022-07-31T23:54:26Z | 16 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "resnet", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-07-31T23:46:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6676646706586826
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1981
- Accuracy: 0.6677
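A minimal inference sketch (not part of the original card) for trying the fine-tuned checkpoint with the `transformers` image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="keithanpai/resnet-50-finetuned-eurosat",
)
# "example.jpg" is a placeholder path to any local image.
print(classifier("example.jpg"))
```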
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5279 | 0.99 | 70 | 1.5218 | 0.6677 |
| 1.1982 | 1.99 | 140 | 1.2405 | 0.6677 |
| 1.0836 | 2.99 | 210 | 1.1981 | 0.6677 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_3_ternary | elopezlopez | 2022-07-31T23:52:36Z | 12 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T23:35:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_3_ternary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_3_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7987
- F1: 0.7460
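A minimal inference sketch (not part of the original card); note that the card does not document what the three labels mean, so the output uses generic `LABEL_i` names:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="elopezlopez/distilbert-base-uncased_fold_3_ternary",
)
print(classifier("An example sentence to classify."))
```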
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5903 | 0.6893 |
| 0.5417 | 2.0 | 578 | 0.5822 | 0.7130 |
| 0.5417 | 3.0 | 867 | 0.6471 | 0.7385 |
| 0.2298 | 4.0 | 1156 | 0.8933 | 0.7322 |
| 0.2298 | 5.0 | 1445 | 1.1002 | 0.7147 |
| 0.1012 | 6.0 | 1734 | 1.2041 | 0.7249 |
| 0.0508 | 7.0 | 2023 | 1.3575 | 0.7195 |
| 0.0508 | 8.0 | 2312 | 1.3896 | 0.7385 |
| 0.018 | 9.0 | 2601 | 1.5363 | 0.7238 |
| 0.018 | 10.0 | 2890 | 1.5336 | 0.7364 |
| 0.0142 | 11.0 | 3179 | 1.6335 | 0.7308 |
| 0.0142 | 12.0 | 3468 | 1.6915 | 0.7295 |
| 0.0047 | 13.0 | 3757 | 1.7087 | 0.7427 |
| 0.0058 | 14.0 | 4046 | 1.7875 | 0.7378 |
| 0.0058 | 15.0 | 4335 | 1.7649 | 0.7438 |
| 0.0051 | 16.0 | 4624 | 1.7987 | 0.7460 |
| 0.0051 | 17.0 | 4913 | 1.8435 | 0.7404 |
| 0.0025 | 18.0 | 5202 | 1.9623 | 0.7257 |
| 0.0025 | 19.0 | 5491 | 1.9005 | 0.7304 |
| 0.0029 | 20.0 | 5780 | 1.9437 | 0.7374 |
| 0.0011 | 21.0 | 6069 | 1.9840 | 0.7268 |
| 0.0011 | 22.0 | 6358 | 1.9411 | 0.7346 |
| 0.0025 | 23.0 | 6647 | 1.9233 | 0.7438 |
| 0.0025 | 24.0 | 6936 | 1.9415 | 0.7395 |
| 0.0015 | 25.0 | 7225 | 1.9481 | 0.7411 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/xlnet-base-cased_fold_2_binary | elopezlopez | 2022-07-31T23:13:47Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:50:03Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_2_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_2_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4858
- F1: 0.7648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4361 | 0.7404 |
| 0.4403 | 2.0 | 580 | 0.5363 | 0.7515 |
| 0.4403 | 3.0 | 870 | 0.4858 | 0.7648 |
| 0.2505 | 4.0 | 1160 | 0.7127 | 0.7612 |
| 0.2505 | 5.0 | 1450 | 0.8930 | 0.7554 |
| 0.1425 | 6.0 | 1740 | 0.9897 | 0.7580 |
| 0.0869 | 7.0 | 2030 | 1.2683 | 0.7615 |
| 0.0869 | 8.0 | 2320 | 1.4988 | 0.7343 |
| 0.0411 | 9.0 | 2610 | 1.5082 | 0.7492 |
| 0.0411 | 10.0 | 2900 | 1.4974 | 0.7450 |
| 0.0306 | 11.0 | 3190 | 1.5723 | 0.7435 |
| 0.0306 | 12.0 | 3480 | 1.8446 | 0.7432 |
| 0.0291 | 13.0 | 3770 | 1.7113 | 0.7639 |
| 0.0207 | 14.0 | 4060 | 1.8073 | 0.7394 |
| 0.0207 | 15.0 | 4350 | 1.7524 | 0.7585 |
| 0.0171 | 16.0 | 4640 | 1.8751 | 0.7374 |
| 0.0171 | 17.0 | 4930 | 1.7849 | 0.7561 |
| 0.0084 | 18.0 | 5220 | 1.8618 | 0.7441 |
| 0.0064 | 19.0 | 5510 | 1.9613 | 0.7437 |
| 0.0064 | 20.0 | 5800 | 1.8898 | 0.7430 |
| 0.006 | 21.0 | 6090 | 1.9889 | 0.7409 |
| 0.006 | 22.0 | 6380 | 1.9949 | 0.7488 |
| 0.0049 | 23.0 | 6670 | 1.9453 | 0.7488 |
| 0.0049 | 24.0 | 6960 | 1.9754 | 0.7472 |
| 0.002 | 25.0 | 7250 | 1.9946 | 0.7504 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/xlnet-base-cased_fold_1_binary | elopezlopez | 2022-07-31T22:49:49Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:26:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_1_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_1_binary
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7607
- F1: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4111 | 0.7555 |
| 0.4387 | 2.0 | 576 | 0.4075 | 0.7540 |
| 0.4387 | 3.0 | 864 | 0.5344 | 0.7567 |
| 0.2471 | 4.0 | 1152 | 0.7405 | 0.7597 |
| 0.2471 | 5.0 | 1440 | 1.0564 | 0.7508 |
| 0.1419 | 6.0 | 1728 | 1.0703 | 0.7751 |
| 0.0845 | 7.0 | 2016 | 1.0866 | 0.7609 |
| 0.0845 | 8.0 | 2304 | 1.2135 | 0.7751 |
| 0.05 | 9.0 | 2592 | 1.3649 | 0.7516 |
| 0.05 | 10.0 | 2880 | 1.4943 | 0.7590 |
| 0.0267 | 11.0 | 3168 | 1.5174 | 0.7412 |
| 0.0267 | 12.0 | 3456 | 1.4884 | 0.7559 |
| 0.0278 | 13.0 | 3744 | 1.5109 | 0.7405 |
| 0.0201 | 14.0 | 4032 | 1.7251 | 0.7409 |
| 0.0201 | 15.0 | 4320 | 1.5833 | 0.7354 |
| 0.0185 | 16.0 | 4608 | 1.7744 | 0.7598 |
| 0.0185 | 17.0 | 4896 | 1.8283 | 0.7619 |
| 0.0066 | 18.0 | 5184 | 1.7607 | 0.7778 |
| 0.0066 | 19.0 | 5472 | 1.7503 | 0.7719 |
| 0.0078 | 20.0 | 5760 | 1.7807 | 0.7508 |
| 0.006 | 21.0 | 6048 | 1.6887 | 0.7629 |
| 0.006 | 22.0 | 6336 | 1.7041 | 0.7678 |
| 0.0074 | 23.0 | 6624 | 1.7337 | 0.7633 |
| 0.0074 | 24.0 | 6912 | 1.7548 | 0.7645 |
| 0.0035 | 25.0 | 7200 | 1.7685 | 0.7621 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_6_binary | elopezlopez | 2022-07-31T22:25:18Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:14:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_6_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_6_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6838
- F1: 0.7881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4181 | 0.7732 |
| 0.4097 | 2.0 | 580 | 0.3967 | 0.7697 |
| 0.4097 | 3.0 | 870 | 0.5811 | 0.7797 |
| 0.2034 | 4.0 | 1160 | 0.8684 | 0.7320 |
| 0.2034 | 5.0 | 1450 | 0.9116 | 0.7718 |
| 0.0794 | 6.0 | 1740 | 1.0588 | 0.7690 |
| 0.0278 | 7.0 | 2030 | 1.2092 | 0.7738 |
| 0.0278 | 8.0 | 2320 | 1.2180 | 0.7685 |
| 0.0233 | 9.0 | 2610 | 1.3005 | 0.7676 |
| 0.0233 | 10.0 | 2900 | 1.4009 | 0.7634 |
| 0.0093 | 11.0 | 3190 | 1.4528 | 0.7805 |
| 0.0093 | 12.0 | 3480 | 1.4803 | 0.7859 |
| 0.0088 | 13.0 | 3770 | 1.4775 | 0.7750 |
| 0.0077 | 14.0 | 4060 | 1.6171 | 0.7699 |
| 0.0077 | 15.0 | 4350 | 1.6429 | 0.7636 |
| 0.0047 | 16.0 | 4640 | 1.5619 | 0.7819 |
| 0.0047 | 17.0 | 4930 | 1.5833 | 0.7724 |
| 0.0034 | 18.0 | 5220 | 1.6400 | 0.7853 |
| 0.0008 | 19.0 | 5510 | 1.6508 | 0.7792 |
| 0.0008 | 20.0 | 5800 | 1.6838 | 0.7881 |
| 0.0009 | 21.0 | 6090 | 1.6339 | 0.7829 |
| 0.0009 | 22.0 | 6380 | 1.6824 | 0.7806 |
| 0.0016 | 23.0 | 6670 | 1.6867 | 0.7876 |
| 0.0016 | 24.0 | 6960 | 1.7107 | 0.7877 |
| 0.0013 | 25.0 | 7250 | 1.6933 | 0.7812 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_5_binary | elopezlopez | 2022-07-31T22:14:52Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T22:04:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_5_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_5_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5093
- F1: 0.7801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4760 | 0.7315 |
| 0.3992 | 2.0 | 576 | 0.4428 | 0.7785 |
| 0.3992 | 3.0 | 864 | 0.5093 | 0.7801 |
| 0.2021 | 4.0 | 1152 | 0.6588 | 0.7634 |
| 0.2021 | 5.0 | 1440 | 0.9174 | 0.7713 |
| 0.0945 | 6.0 | 1728 | 0.9832 | 0.7726 |
| 0.0321 | 7.0 | 2016 | 1.2103 | 0.7672 |
| 0.0321 | 8.0 | 2304 | 1.3759 | 0.7616 |
| 0.0134 | 9.0 | 2592 | 1.4405 | 0.7570 |
| 0.0134 | 10.0 | 2880 | 1.4591 | 0.7710 |
| 0.0117 | 11.0 | 3168 | 1.4947 | 0.7713 |
| 0.0117 | 12.0 | 3456 | 1.6224 | 0.7419 |
| 0.0081 | 13.0 | 3744 | 1.6462 | 0.7520 |
| 0.0083 | 14.0 | 4032 | 1.6880 | 0.7637 |
| 0.0083 | 15.0 | 4320 | 1.7080 | 0.7380 |
| 0.0048 | 16.0 | 4608 | 1.7352 | 0.7551 |
| 0.0048 | 17.0 | 4896 | 1.6761 | 0.7713 |
| 0.0024 | 18.0 | 5184 | 1.7553 | 0.76 |
| 0.0024 | 19.0 | 5472 | 1.7312 | 0.7673 |
| 0.005 | 20.0 | 5760 | 1.7334 | 0.7713 |
| 0.0032 | 21.0 | 6048 | 1.7963 | 0.7578 |
| 0.0032 | 22.0 | 6336 | 1.7529 | 0.7679 |
| 0.0025 | 23.0 | 6624 | 1.7741 | 0.7662 |
| 0.0025 | 24.0 | 6912 | 1.7515 | 0.7679 |
| 0.0004 | 25.0 | 7200 | 1.7370 | 0.7765 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_3_binary | elopezlopez | 2022-07-31T21:53:55Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T21:43:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_3_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_3_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8310
- F1: 0.7584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4560 | 0.7522 |
| 0.4008 | 2.0 | 578 | 0.4790 | 0.7567 |
| 0.4008 | 3.0 | 867 | 0.6368 | 0.7557 |
| 0.1967 | 4.0 | 1156 | 0.6854 | 0.7534 |
| 0.1967 | 5.0 | 1445 | 0.9823 | 0.7309 |
| 0.0768 | 6.0 | 1734 | 1.2531 | 0.7511 |
| 0.0202 | 7.0 | 2023 | 1.2906 | 0.7391 |
| 0.0202 | 8.0 | 2312 | 1.4025 | 0.7460 |
| 0.0087 | 9.0 | 2601 | 1.5713 | 0.7507 |
| 0.0087 | 10.0 | 2890 | 1.4212 | 0.7528 |
| 0.0162 | 11.0 | 3179 | 1.5775 | 0.7511 |
| 0.0162 | 12.0 | 3468 | 1.6361 | 0.7377 |
| 0.0048 | 13.0 | 3757 | 1.6972 | 0.7542 |
| 0.0098 | 14.0 | 4046 | 1.6569 | 0.7565 |
| 0.0098 | 15.0 | 4335 | 1.7547 | 0.7325 |
| 0.0042 | 16.0 | 4624 | 1.8108 | 0.7544 |
| 0.0042 | 17.0 | 4913 | 1.7593 | 0.7554 |
| 0.0041 | 18.0 | 5202 | 1.7582 | 0.7551 |
| 0.0041 | 19.0 | 5491 | 1.8200 | 0.7512 |
| 0.0029 | 20.0 | 5780 | 1.8310 | 0.7584 |
| 0.0018 | 21.0 | 6069 | 1.8146 | 0.7568 |
| 0.0018 | 22.0 | 6358 | 1.7870 | 0.7558 |
| 0.0029 | 23.0 | 6647 | 1.8508 | 0.7530 |
| 0.0029 | 24.0 | 6936 | 1.8327 | 0.7543 |
| 0.0001 | 25.0 | 7225 | 1.8546 | 0.7561 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| elopezlopez/distilbert-base-uncased_fold_1_binary | elopezlopez | 2022-07-31T21:33:03Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-07-31T20:57:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-uncased_fold_1_binary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_1_binary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5992
- F1: 0.7687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3960 | 0.7467 |
| 0.3988 | 2.0 | 576 | 0.3947 | 0.7487 |
| 0.3988 | 3.0 | 864 | 0.4511 | 0.7662 |
| 0.1853 | 4.0 | 1152 | 0.7226 | 0.7285 |
| 0.1853 | 5.0 | 1440 | 0.9398 | 0.7334 |
| 0.0827 | 6.0 | 1728 | 1.0547 | 0.7427 |
| 0.0287 | 7.0 | 2016 | 1.1602 | 0.7563 |
| 0.0287 | 8.0 | 2304 | 1.3332 | 0.7171 |
| 0.0219 | 9.0 | 2592 | 1.3429 | 0.7420 |
| 0.0219 | 10.0 | 2880 | 1.2603 | 0.7648 |
| 0.0139 | 11.0 | 3168 | 1.4126 | 0.7569 |
| 0.0139 | 12.0 | 3456 | 1.3195 | 0.7483 |
| 0.0115 | 13.0 | 3744 | 1.4356 | 0.7491 |
| 0.0035 | 14.0 | 4032 | 1.5693 | 0.7636 |
| 0.0035 | 15.0 | 4320 | 1.4071 | 0.7662 |
| 0.0071 | 16.0 | 4608 | 1.4561 | 0.7579 |
| 0.0071 | 17.0 | 4896 | 1.5405 | 0.7634 |
| 0.0041 | 18.0 | 5184 | 1.5862 | 0.7589 |
| 0.0041 | 19.0 | 5472 | 1.6782 | 0.76 |
| 0.0024 | 20.0 | 5760 | 1.5699 | 0.7677 |
| 0.0006 | 21.0 | 6048 | 1.5991 | 0.7467 |
| 0.0006 | 22.0 | 6336 | 1.6205 | 0.7682 |
| 0.0003 | 23.0 | 6624 | 1.6334 | 0.7643 |
| 0.0003 | 24.0 | 6912 | 1.5992 | 0.7687 |
| 0.0011 | 25.0 | 7200 | 1.6053 | 0.7624 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| DS-20202/DoubleHardDebias | DS-20202 | 2022-07-31T20:32:45Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2022-07-31T12:08:09Z |
---
title: Double Hard Debiasing
emoji: ๐
colorFrom: blue
colorTo: pink
sdk: gradio
sdk_version: 3.1.1
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
| neuralmagic/oBERT-6-downstream-pruned-block4-90-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:21:02Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 76.56
F1 = 84.59
```
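The checkpoint is a standard 6-layer BERT question-answering model, so it should load with the usual `transformers` pipeline; a hedged sketch (not an official usage snippet from the card), with a toy question and context:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-6-downstream-pruned-block4-90-QAT-squadv1",
)
result = qa(
    question="How sparse is the model?",
    context="The model was pruned to 90% sparsity with the oBERT block-4 method.",
)
print(result)
```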
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:00:05Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-pruned-unstructured-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - Sparsity 90% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 6
```
The dev-set performance of this model:
```
EM = 79.16
F1 = 86.78
```
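A hedged sketch (not from the card) of checking the reported unstructured sparsity by counting zeroed weights in the checkpoint's Linear layers; the measured number may differ slightly from 90% because embeddings and other modules are typically left dense:
```python
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1"
)

zeros, total = 0, 0
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        zeros += int((module.weight == 0).sum())
        total += module.weight.numel()
print(f"Zeroed Linear weights: {zeros / total:.2%}")
```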
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-teacher-squadv1 | neuralmagic | 2022-07-31T19:52:34Z | 396 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:47:26Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# SQuADv1 teacher
This model is used as a teacher for all runs on the SQuADv1 downstream task in the paper [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
SQuADv1 dev-set:
```
EM = 81.41
F1 = 88.54
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:01:41Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - 4-block`.
```
Pruning method: oBERT downstream block-4
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 71.36
F1 = 80.69
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-dense-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:00:43Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - 0% Sparsity`, and it represents an upper bound for performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 76.62
F1 = 84.65
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-dense-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 2 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:20:36Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-dense-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - 0% Sparsity - QAT`, and it represents an upper bound for performance of the corresponding pruned and quantized models:
- 80% unstructured QAT: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-QAT-squadv1`
- 80% block-4 QAT: `neuralmagic/oBERT-6-downstream-pruned-block4-80-QAT-squadv1`
- 90% unstructured QAT: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-QAT-squadv1`
- 90% block-4 QAT: `neuralmagic/oBERT-6-downstream-pruned-block4-90-QAT-squadv1`
SQuADv1 dev-set:
```
EM = 80.85
F1 = 87.94
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-block4-80-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:21:28Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-80-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 80% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 72.70
F1 = 82.04
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-6-downstream-dense-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 8 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T13:59:35Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-6-downstream-dense-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 6 Layers - 0% Sparsity`, and it represents an upper bound for performance of the corresponding pruned models:
- 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
SQuADv1 dev-set:
```
EM = 81.17
F1 = 88.32
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-block4-90-QAT-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 14 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T19:21:41Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 70.00
F1 = 79.66
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
| neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1 | neuralmagic | 2022-07-31T19:52:33Z | 13 | 0 | transformers | ["transformers", "pytorch", "bert", "oBERT", "sparsity", "pruning", "compression", "en", "dataset:squad", "arxiv:2203.07259", "endpoints_compatible", "region:us"] | null | 2022-05-25T14:01:15Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-3-downstream-pruned-unstructured-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 3 Layers - Sparsity 90% - unstructured`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 3
```
The dev-set performance of this model:
```
EM = 73.61
F1 = 82.50
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-3-upstream-pretrained-dense
|
neuralmagic
| 2022-07-31T19:52:33Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:56:43Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-3-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to 3 layers from `neuralmagic/oBERT-12-upstream-pretrained-dense`, pretrained with knowledge distillation. This model is used as a starting point for downstream finetuning and pruning runs presented in the `Table 3 - 3 Layers`.
The model can also be used for finetuning on any downstream task, as a starting point instead of the three times larger `bert-base-uncased` model.
Finetuned and pruned versions of this model on the SQuADv1 downstream task, as described in the paper:
- 0%: `neuralmagic/oBERT-3-downstream-dense-squadv1`
- 80% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-80-squadv1`
- 80% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-80-squadv1`
- 90% unstructured: `neuralmagic/oBERT-3-downstream-pruned-unstructured-90-squadv1`
- 90% block-4: `neuralmagic/oBERT-3-downstream-pruned-block4-90-squadv1`
```
Training objective: masked language modeling (MLM) + knowledge distillation
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 3
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
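As a rough sketch of that downstream usage (assuming the repository ships a config and tokenizer compatible with the standard `transformers` auto classes; the 2-class head below is a hypothetical example), the backbone can be loaded and given a fresh task head for finetuning:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical setup: attach a freshly initialized 2-class head to the
# 3-layer distilled backbone and finetune it on your own dataset.
model_name = "neuralmagic/oBERT-3-upstream-pretrained-dense"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("A quick smoke-test sentence.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2]) before any finetuning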
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-v2
|
neuralmagic
| 2022-07-31T19:52:32Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:25:30Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-97-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 97%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 97%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2
|
neuralmagic
| 2022-07-31T19:52:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:30:56Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-97-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 97%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | F1 | EM |
| ------------- | ----- | ----- |
| seed=42 | 84.92 | 76.94 |
| seed=3407 | 84.87 | 76.71 |
| seed=123 | 84.95 | 77.06 |
| seed=12345 (*)| 84.95 | 76.90 |
| ------------- | ----- | ----- |
| mean | 84.92 | 76.90 |
| stdev | 0.037 | 0.145 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2
|
neuralmagic
| 2022-07-31T19:52:32Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:31:44Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | acc | F1 |
| ------------- | ----- | ----- |
| seed=42 | 90.94 | 87.79 |
| seed=3407 | 91.00 | 87.81 |
| seed=123 | 90.94 | 87.73 |
| seed=12345 (*)| 91.07 | 87.92 |
| ------------- | ----- | ----- |
| mean | 90.99 | 87.81 |
| stdev | 0.061 | 0.079 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2
|
neuralmagic
| 2022-07-31T19:52:32Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:30:41Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - SQuADv1 90%` (in the upcoming updated version of the paper).
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over four seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 | 88.55 | 81.48 |
| seed=3407 | 88.34 | 81.25 |
| seed=123 (*)| 88.64 | 81.57 |
| seed=12345 | 88.44 | 81.43 |
| ------------ | ----- | ----- |
| mean | 88.49 | 81.43 |
| stdev | 0.130 | 0.134 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-v2
|
neuralmagic
| 2022-07-31T19:52:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-06-17T07:22:37Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pruned-unstructured-90-v2
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the upstream pruned model used as a starting point for sparse-transfer learning to downstream tasks presented in the `Table 2 - oBERT - {SQuADv1, MNLI, QQP} - 90%` (in the upcoming updated version of the paper).
Finetuned versions of this model for each downstream task are:
- SQuADv1: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-squadv1-v2`
- MNLI: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-mnli-v2`
- QQP: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp-v2`
```
Pruning method: oBERT upstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 90%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp
|
neuralmagic
| 2022-07-31T19:52:32Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:58:30Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-upstream-pruned-unstructured-90-finetuned-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 2 - oBERT - QQP 90%`.
```
Pruning method: oBERT upstream unstructured + sparse-transfer to downstream
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 (*)| 90.93 | 87.77 |
| seed=3407 | 90.70 | 87.49 |
| seed=54321 | 90.86 | 87.68 |
| ------------ | ----- | ----- |
| mean | 90.83 | 87.65 |
| stdev | 0.117 | 0.143 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli
|
neuralmagic
| 2022-07-31T19:52:31Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:54:55Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-downstream-pruned-unstructured-90-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - MNLI 90%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 | 83.74 | 84.31 |
| seed=3407 (*)| 83.85 | 84.40 |
| seed=54321 | 83.77 | 84.33 |
| ------------ | ----- | ----- |
| mean | 83.79 | 84.35 |
| stdev | 0.056 | 0.047 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
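As a usage sketch (assuming the checkpoint loads as a standard sequence-classification model with an MNLI head), a premise/hypothesis pair can be scored as follows; the label order should be checked against the model's `config.id2label` rather than assumed:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "neuralmagic/oBERT-12-downstream-pruned-unstructured-90-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)                   # probabilities over the three MNLI classes
print(model.config.id2label)   # check which index is entailment/neutral/contradiction
```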
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-97-mnli
|
neuralmagic
| 2022-07-31T19:52:31Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:55:09Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-downstream-pruned-unstructured-97-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - MNLI 97%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 97%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 97% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 (*)| 82.10 | 81.94 |
| seed=3407 | 81.81 | 82.27 |
| seed=54321 | 81.40 | 81.83 |
| ------------ | ----- | ----- |
| mean | 81.77 | 82.01 |
| stdev | 0.351 | 0.228 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-90-squadv1
|
neuralmagic
| 2022-07-31T19:52:31Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:53:32Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-unstructured-90-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - SQuADv1 90%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 | 88.22 | 81.10 |
| seed=3407 (*)| 88.46 | 81.26 |
| seed=54321 | 88.26 | 81.00 |
| ------------ | ----- | ----- |
| mean | 88.31 | 81.12 |
| stdev | 0.128 | 0.131 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
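For a quick sanity check (assuming the checkpoint loads as a standard extractive QA model with a SQuAD-style head), the `question-answering` pipeline can be used directly; the question and context below are only illustrations:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-12-downstream-pruned-unstructured-90-squadv1",
)
result = qa(
    question="What does oBERT prune?",
    context="The Optimal BERT Surgeon prunes BERT weights using second-order information.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```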
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-90-qqp
|
neuralmagic
| 2022-07-31T19:52:31Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:qqp",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:55:50Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: qqp
---
# oBERT-12-downstream-pruned-unstructured-90-qqp
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - QQP 90%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: QQP
Sparsity: 90%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 90% | acc | F1 |
| ------------ | ----- | ----- |
| seed=42 | 91.30 | 88.24 |
| seed=3407 (*)| 91.39 | 88.36 |
| seed=54321 | 91.36 | 88.29 |
| ------------ | ----- | ----- |
| mean | 91.35 | 88.30 |
| stdev | 0.045 | 0.060 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-upstream-pretrained-dense
|
neuralmagic
| 2022-07-31T19:52:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:56:17Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets:
- bookcorpus
- wikipedia
---
# oBERT-12-upstream-pretrained-dense
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the pretrained dense model used as a teacher for upstream pruning runs, as described in the paper. The model can be finetuned on any downstream task, just like the standard `bert-base-uncased` model, which is used as the initialization for training this model.
Sparse versions of this model:
- 90% sparse: `neuralmagic/oBERT-12-upstream-pruned-unstructured-90`
- 97% sparse: `neuralmagic/oBERT-12-upstream-pruned-unstructured-97`
```
Training objective: masked language modeling (MLM)
Paper: https://arxiv.org/abs/2203.07259
Dataset: BookCorpus and English Wikipedia
Sparsity: 0%
Number of layers: 12
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
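Since the model was trained with an MLM objective, one quick way to probe it is the `fill-mask` pipeline; this is a minimal sketch that assumes the MLM head is included in the released checkpoint:
```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="neuralmagic/oBERT-12-upstream-pretrained-dense",
)
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```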
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-80-squadv1
|
neuralmagic
| 2022-07-31T19:52:31Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:53:16Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-unstructured-80-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - SQuADv1 80%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 80%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 80% | F1 | EM |
| ------------ | ----- | ----- |
| seed=42 | 88.95 | 82.08 |
| seed=3407 (*)| 89.16 | 82.05 |
| seed=54321 | 89.01 | 82.12 |
| ------------ | ----- | ----- |
| mean | 89.04 | 82.08 |
| stdev | 0.108 | 0.035 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-block4-90-QAT-squadv1
|
neuralmagic
| 2022-07-31T19:52:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:squad",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T19:20:22Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: squad
---
# oBERT-12-downstream-pruned-block4-90-QAT-squadv1
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 3 - 12 Layers - Sparsity 90% - 4-block + QAT`.
```
Pruning method: oBERT downstream block-4 + QAT
Paper: https://arxiv.org/abs/2203.07259
Dataset: SQuADv1
Sparsity: 90%
Number of layers: 12
```
The dev-set performance of this model:
```
EM = 78.84
F1 = 86.68
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
neuralmagic/oBERT-12-downstream-pruned-unstructured-80-mnli
|
neuralmagic
| 2022-07-31T19:52:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"oBERT",
"sparsity",
"pruning",
"compression",
"en",
"dataset:mnli",
"arxiv:2203.07259",
"endpoints_compatible",
"region:us"
] | null | 2022-05-25T13:54:40Z |
---
tags:
- bert
- oBERT
- sparsity
- pruning
- compression
language: en
datasets: mnli
---
# oBERT-12-downstream-pruned-unstructured-80-mnli
This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
It corresponds to the model presented in the `Table 1 - 30 Epochs - oBERT - MNLI 80%`.
```
Pruning method: oBERT downstream unstructured
Paper: https://arxiv.org/abs/2203.07259
Dataset: MNLI
Sparsity: 80%
Number of layers: 12
```
The dev-set performance reported in the paper is averaged over three seeds, and we release the best model (marked with `(*)`):
```
| oBERT 80% | m-acc | mm-acc|
| ------------ | ----- | ----- |
| seed=42 | 84.30 | 84.98 |
| seed=3407 (*)| 84.46 | 84.99 |
| seed=54321 | 84.18 | 84.76 |
| ------------ | ----- | ----- |
| mean | 84.32 | 84.91 |
| stdev | 0.140 | 0.133 |
```
Code: [https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT](https://github.com/neuralmagic/sparseml/tree/main/research/optimal_BERT_surgeon_oBERT)
If you find the model useful, please consider citing our work.
## Citation info
```bibtex
@article{kurtic2022optimal,
title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
journal={arXiv preprint arXiv:2203.07259},
year={2022}
}
```
|
Ebuu/Aaaaa
|
Ebuu
| 2022-07-31T19:00:30Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-07-31T19:00:30Z |
---
license: bigscience-bloom-rail-1.0
---
|
SummerChiam/rust_image_classification_12
|
SummerChiam
| 2022-07-31T17:33:58Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-31T17:33:47Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_12
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9679595232009888
---
# rust_image_classification_12
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
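A minimal inference sketch (the image path below is a placeholder for your own file or URL):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="SummerChiam/rust_image_classification_12",
)
predictions = classifier("example_surface.jpg")  # replace with your image
print(predictions)  # e.g. [{'label': 'rust0', 'score': ...}, {'label': 'nonrust0', 'score': ...}]
```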
## Example Images
#### nonrust0

#### rust0

|
QuickSilver007/Reinforce-Pong-PLE-v0
|
QuickSilver007
| 2022-07-31T16:23:22Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T16:23:13Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pong-PLE-v0
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
anneke/finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final
|
anneke
| 2022-07-31T16:05:59Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-31T15:49:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples-final
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1289
- Accuracy: 0.977
- F1: 0.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SummerChiam/pond_image_classification_11
|
SummerChiam
| 2022-07-31T15:36:10Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-31T15:35:57Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_11
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9951980710029602
---
# pond_image_classification_11
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Algae0

#### Boiling0

#### BoilingNight0

#### Normal0

#### NormalCement0

#### NormalNight0

#### NormalRain0

|
samwit/ddpm-afhq-cats-128
|
samwit
| 2022-07-31T15:31:53Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-07-31T00:49:28Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-afhq-cats-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, assuming the checkpoint is compatible with the DDPMPipeline
# class listed in this repo's tags.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("samwit/ddpm-afhq-cats-128")
image = pipeline().images[0]  # unconditional cat sample
image.save("ddpm_cat_sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/samwit/ddpm-afhq-cats-128/tensorboard?#scalars)
|
CuteBlack/gfp_guided_diffusion_200k
|
CuteBlack
| 2022-07-31T15:10:42Z | 0 | 6 | null |
[
"license:mit",
"region:us"
] | null | 2022-07-15T22:24:34Z |
---
license: mit
---
A 256x256 diffusion model trained on 1,000+ NSFW gay furry pics (all with the same composition). Model configuration:
```
'attention_resolutions': '16',
'class_cond': False,
'diffusion_steps': 1000,
'rescale_timesteps': True,
'timestep_respacing': 'ddim100',
'image_size': 256,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 128,
'num_heads': 1,
'num_res_blocks': 2,
'use_checkpoint': use_checkpoint,
'use_fp16': True,
'use_scale_shift_norm': False,
```
|
Kinahem/Reinforce-3
|
Kinahem
| 2022-07-31T13:02:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T13:02:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-3
results:
- metrics:
- type: mean_reward
value: 471.20 +/- 86.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Vasanth/bert_emo_classifier
|
Vasanth
| 2022-07-31T12:34:43Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-30T23:30:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: bert_emo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_emo_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
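A minimal inference sketch (the example sentence is an illustration, and the exact label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Vasanth/bert_emo_classifier")
print(classifier("I can't believe how well this turned out!"))
# e.g. [{'label': 'joy', 'score': ...}]
```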
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9063 | 0.25 | 500 | 0.4845 |
| 0.3362 | 0.5 | 1000 | 0.3492 |
| 0.2759 | 0.75 | 1500 | 0.2819 |
| 0.2521 | 1.0 | 2000 | 0.2464 |
| 0.1705 | 1.25 | 2500 | 0.2345 |
| 0.1841 | 1.5 | 3000 | 0.2013 |
| 0.1428 | 1.75 | 3500 | 0.1926 |
| 0.1747 | 2.0 | 4000 | 0.1866 |
| 0.1082 | 2.25 | 4500 | 0.2302 |
| 0.1142 | 2.5 | 5000 | 0.2118 |
| 0.1205 | 2.75 | 5500 | 0.2318 |
| 0.1135 | 3.0 | 6000 | 0.2306 |
| 0.0803 | 3.25 | 6500 | 0.2625 |
| 0.0745 | 3.5 | 7000 | 0.2850 |
| 0.085 | 3.75 | 7500 | 0.2719 |
| 0.0701 | 4.0 | 8000 | 0.2748 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
QuickSilver007/Reinforce-CartPole-v1
|
QuickSilver007
| 2022-07-31T12:21:39Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T12:21:29Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- metrics:
- type: mean_reward
value: 123.40 +/- 12.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Kinahem/Reinforce-1
|
Kinahem
| 2022-07-31T12:07:01Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T12:06:53Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- metrics:
- type: mean_reward
value: 18.30 +/- 7.93
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
fabf21/finetuning-sentiment-model-3000-samples
|
fabf21
| 2022-07-31T11:16:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-31T11:05:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Neha2608/xlm-roberta-base-finetuned-panx-en
|
Neha2608
| 2022-07-31T10:42:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-07-02T12:17:17Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4329
- F1: 0.6431
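A minimal inference sketch (assuming the checkpoint includes a token-classification head with the PAN-X entity labels):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Neha2608/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```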
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1554 | 1.0 | 50 | 0.5989 | 0.4571 |
| 0.5361 | 2.0 | 100 | 0.4329 | 0.6431 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Okyx/finetuned-amazon-en-es
|
Okyx
| 2022-07-31T10:33:05Z | 10 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-07-31T09:41:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Okyx/finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Okyx/finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0154
- Validation Loss: 3.3292
- Epoch: 7
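A minimal inference sketch for this TensorFlow checkpoint (the review text is only an illustration):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_name = "Okyx/finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)

review = "I loved this book: the characters were great and the plot kept me hooked."
inputs = tokenizer(review, return_tensors="tf", truncation=True)
summary_ids = model.generate(inputs["input_ids"], max_length=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```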
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.2009 | 4.0465 | 0 |
| 5.7436 | 3.6640 | 1 |
| 5.0419 | 3.5296 | 2 |
| 4.6412 | 3.4582 | 3 |
| 4.3722 | 3.3943 | 4 |
| 4.1947 | 3.3610 | 5 |
| 4.0747 | 3.3295 | 6 |
| 4.0154 | 3.3292 | 7 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Neha2608/xlm-roberta-base-finetuned-panx-it
|
Neha2608
| 2022-07-31T10:26:20Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-07-02T11:59:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2740
- F1: 0.7919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8185 | 1.0 | 70 | 0.3369 | 0.7449 |
| 0.2899 | 2.0 | 140 | 0.2740 | 0.7919 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CuteBlack/gfp_guided_diffusion_v4
|
CuteBlack
| 2022-07-31T10:04:18Z | 0 | 9 | null |
[
"license:mit",
"region:us"
] | null | 2022-07-31T09:48:47Z |
---
license: mit
---
An OpenAI guided-diffusion model trained on every NSFW gay furry illustration on e621.net with a community score above 100, excluding extreme fetishes and underage content. Model configuration:
```
'attention_resolutions': '32, 16, 8',
'class_cond': False,
'diffusion_steps': 1000,
'rescale_timesteps': True,
'image_size': 256,
'learn_sigma': True,
'noise_schedule': 'linear',
'num_channels': 128,
'num_heads': 4,
'num_res_blocks': 2,
'resblock_updown': True,
'use_checkpoint': use_checkpoint,
'use_fp16': True,
'use_scale_shift_norm': True
```
|
ijnekonasa/ppo-LunarLander-v2
|
ijnekonasa
| 2022-07-31T03:58:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-31T03:57:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 252.64 +/- 18.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
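As a concrete sketch of the intended pattern (the checkpoint filename below is an assumption; verify it against the files actually stored in the repository):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename -- check the repo's file list before running.
checkpoint = load_from_hub(
    repo_id="ijnekonasa/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```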
|
Frikallo/DeepLeffen-TSM_Leffen
|
Frikallo
| 2022-07-31T01:31:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-31T01:27:45Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: DeepLeffen-TSM_Leffen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepLeffen-TSM_Leffen
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
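A minimal generation sketch (the prompt is only an illustration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Frikallo/DeepLeffen-TSM_Leffen")
print(generator("The best Melee character is", max_length=40)[0]["generated_text"])
```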
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 2780791035
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
keithanpai/vit-base-patch16-224-finetuned-eurosat
|
keithanpai
| 2022-07-31T00:07:31Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T23:42:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8632734530938124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3953
- Accuracy: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6081 | 0.99 | 70 | 0.5482 | 0.8004 |
| 0.4515 | 1.99 | 140 | 0.4245 | 0.8533 |
| 0.3967 | 2.99 | 210 | 0.3953 | 0.8633 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sophiestein/experiment_2
|
sophiestein
| 2022-07-30T17:57:00Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-07-30T10:21:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: experiment_2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8840954508052192
- name: Recall
type: recall
value: 0.8925943508188939
- name: F1
type: f1
value: 0.8883245733183724
- name: Accuracy
type: accuracy
value: 0.9746737103791174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiment_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1211
- Precision: 0.8841
- Recall: 0.8926
- F1: 0.8883
- Accuracy: 0.9747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2418 | 1.0 | 878 | 0.0695 | 0.9159 | 0.9255 | 0.9207 | 0.9816 |
| 0.0541 | 2.0 | 1756 | 0.0592 | 0.9244 | 0.9343 | 0.9293 | 0.9833 |
| 0.0303 | 3.0 | 2634 | 0.0602 | 0.9260 | 0.9388 | 0.9323 | 0.9838 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
anzorq/kbd_lat-ru_char_tokenizer
|
anzorq
| 2022-07-30T16:16:55Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation",
"ru",
"kbd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-07-29T10:31:32Z |
---
language:
- ru
- kbd
tags:
- translation
---
|
comodoro/testpyramidsrnd2
|
comodoro
| 2022-07-30T15:58:53Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-30T15:58:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: comodoro/testpyramidsrnd2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
constanter/PPO-LunarLander-v2
|
constanter
| 2022-07-30T13:34:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-30T13:33:54Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 268.37 +/- 20.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption and may differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub(repo_id="constanter/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
SummerChiam/rust_image_classification_7
|
SummerChiam
| 2022-07-30T12:04:23Z | 57 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T12:04:11Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_7
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9645569324493408
---
# rust_image_classification_7
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
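A minimal inference sketch (not part of the autogenerated card; `rust_sample.png` is a placeholder path to a local image):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="SummerChiam/rust_image_classification_7")

# Returns the top predicted labels (e.g. rust / nonrust) with scores.
print(classifier("rust_sample.png"))
```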
## Example Images
#### nonrust

#### rust

|
devetle/a2c-AntBulletEnv-v0
|
devetle
| 2022-07-30T10:14:08Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-30T10:13:05Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1098.81 +/- 321.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption and may differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub(repo_id="devetle/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
SummerChiam/pond_image_classification_10
|
SummerChiam
| 2022-07-30T08:57:50Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-30T08:57:38Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pond_image_classification_10
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9948979616165161
---
# pond_image_classification_10
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
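A minimal inference sketch (not part of the autogenerated card; `pond_sample.png` is a placeholder path to a local image):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="SummerChiam/pond_image_classification_10")

# Returns the top predicted pond-condition labels with scores.
print(classifier("pond_sample.png"))
```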
## Example Images
#### Algae

#### Boiling

#### BoilingNight

#### Normal

#### NormalCement

#### NormalNight

#### NormalRain

|
mbarnig/lb-de-fr-en-pt-coqui-vits-tts
|
mbarnig
| 2022-07-30T06:00:58Z | 222 | 7 |
transformers
|
[
"transformers",
"tensorboard",
"TTS",
"audio",
"synthesis",
"yourTTS",
"speech",
"coqui.ai",
"lb",
"de",
"fr",
"en",
"pt",
"dataset:mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-07-08T20:42:32Z |
---
license: cc-by-nc-sa-4.0
language:
- lb
- de
- fr
- en
- pt
tags:
- TTS
- audio
- synthesis
- yourTTS
- speech
- coqui.ai
datasets:
- mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS
---
#### This model has been trained from scratch with my customized dataset [mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS](https://huggingface.co/datasets/mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS) and the 🐸 [Coqui-TTS multilingual VITS-model recipe](https://github.com/coqui-ai/TTS/tree/dev/recipes/multilingual/vits_tts) (version 0.7.1). The model was trained without phonemes, with the following character set:
```
characters="abcdefghijklmnopqrstuvwxyzย รร รกรขรฃรครงรจรฉรชรซรญรฎรฏรณรดรตรถรนรบรปรผ",
punctuations="!'(),-.:;? ",
phonemes=None,
```
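A command-line synthesis sketch (the checkpoint and config filenames, speaker name and output path are assumptions that depend on how the training run was exported; `--language_idx lb` selects Luxembourgish):
```
tts --text "Moien, wéi geet et?" \
    --model_path best_model.pth \
    --config_path config.json \
    --speaker_idx <speaker_name> \
    --language_idx lb \
    --out_path output.wav
```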
#### A live inference demo of the model is available in my HuggingFace space [mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS](https://huggingface.co/spaces/mbarnig/lb_de_fr_en_pt_COQUI_VITS_TTS).
#### Click the tab *training metrics* above to view the live Tensorboard of the model training.

|
vinitharaj/distilbert-base-uncased-finetuned-squad2
|
vinitharaj
| 2022-07-30T05:47:35Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-29T07:47:14Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vinitharaj/distilbert-base-uncased-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vinitharaj/distilbert-base-uncased-finetuned-squad2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4953
- Validation Loss: 0.3885
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
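For illustration, the optimizer entry above corresponds roughly to the following Keras setup (a reconstruction, not the original training script):
```python
import tensorflow as tf

# Linear decay of the learning rate from 2e-5 to 0 over 1602 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=1602, end_learning_rate=0.0, power=1.0, cycle=False
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```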
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7037 | 0.4222 | 0 |
| 0.4953 | 0.3885 | 1 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Migga/ViT-BERT-Chess-V4
|
Migga
| 2022-07-30T04:26:03Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-07-29T16:57:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: ViT-BERT-Chess-V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT-BERT-Chess-V4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
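For illustration, the hyperparameters above correspond roughly to the following `TrainingArguments` (a reconstruction, not the original training script; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ViT-BERT-Chess-V4",   # placeholder output directory
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,    # 8 x 8 = 64 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                        # "Native AMP" mixed precision
)
```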
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.705 | 1.0 | 3895 | 3.5686 |
| 3.5139 | 2.0 | 7790 | 3.4288 |
| 3.4156 | 3.0 | 11685 | 3.3663 |
| 3.3661 | 4.0 | 15580 | 3.3331 |
| 3.3352 | 5.0 | 19475 | 3.3213 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
reachrkr/testpyramidsrnd
|
reachrkr
| 2022-07-30T02:46:18Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-07-28T06:59:12Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
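As a concrete (assumed) example, using the PyramidsRND configuration shipped with the ML-Agents repository and an arbitrary run id:
```
mlagents-learn ./config/ppo/PyramidsRND.yaml --run-id=PyramidsRND --resume
```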
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. In Step 1, write your model_id: reachrkr/testpyramidsrnd
3. In Step 2, select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
huggingtweets/dags
|
huggingtweets
| 2022-07-30T01:32:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-30T01:30:26Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dags/1659144733206/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/722815128501026817/IMWCRzEn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DAGs</div>
<div style="text-align: center; font-size: 14px;">@dags</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DAGs.
| Data | DAGs |
| --- | --- |
| Tweets downloaded | 3003 |
| Retweets | 31 |
| Short tweets | 158 |
| Tweets kept | 2814 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qyk6uzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dags's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18qzuqjb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dags')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rebolforces/ppo-LunarLander-v2
|
rebolforces
| 2022-07-30T00:43:21Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-07-23T09:28:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.83 +/- 15.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption and may differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub(repo_id="rebolforces/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yanaiela/roberta-base-epoch_80
|
yanaiela
| 2022-07-29T23:08:59Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_80",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T18:03:25Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_80
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 80
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_80.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_80', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_78
|
yanaiela
| 2022-07-29T23:08:15Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_78",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T18:01:03Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_78
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 78
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_78.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_78', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_77
|
yanaiela
| 2022-07-29T23:07:53Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_77",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:59:57Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_77
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 77
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_77.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_77', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_73
|
yanaiela
| 2022-07-29T23:06:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_73",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:55:51Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_73
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 73
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_73.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_73', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_71
|
yanaiela
| 2022-07-29T23:05:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_71",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:53:19Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_71
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 71
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_71.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_71', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_67
|
yanaiela
| 2022-07-29T23:04:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_67",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:48:39Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_67
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 67
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_67.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_67', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_66
|
yanaiela
| 2022-07-29T23:03:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_66",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:46:45Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_66
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 66
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_66.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_66', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_64
|
yanaiela
| 2022-07-29T23:02:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_64",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:43:33Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_64
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 64
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_64.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_64', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_61
|
yanaiela
| 2022-07-29T23:01:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_61",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:39:32Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_61
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 61
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_61.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_61', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_59
|
yanaiela
| 2022-07-29T23:01:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_59",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:35:53Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_59
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 59
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_59.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_59', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_57
|
yanaiela
| 2022-07-29T23:00:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_57",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:34:22Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_57
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 57
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_57.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_57', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_56
|
yanaiela
| 2022-07-29T22:59:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_56",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:33:29Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_56
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 56
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_56.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_56', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_54
|
yanaiela
| 2022-07-29T22:59:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_54",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:31:39Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_54
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 54
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_54.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_54', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_52
|
yanaiela
| 2022-07-29T22:58:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_52",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:30:02Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_52
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 52
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_52.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_52', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_50
|
yanaiela
| 2022-07-29T22:57:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_50",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:28:26Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_50
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 50
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_50.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_50', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_47
|
yanaiela
| 2022-07-29T22:56:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_47",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:26:12Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_47
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 47
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_47.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_47', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_46
|
yanaiela
| 2022-07-29T22:55:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_46",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:25:28Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_46
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 46
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, among other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_46.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_46', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_45
|
yanaiela
| 2022-07-29T22:55:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_45",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:24:44Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_45
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 45
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_45.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_45', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_43
|
yanaiela
| 2022-07-29T22:54:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_43",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:23:18Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_43
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 43
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_43.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_43', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_39
|
yanaiela
| 2022-07-29T22:53:02Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_39",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:20:23Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_39
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 39
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_39.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_39', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_31
|
yanaiela
| 2022-07-29T22:50:29Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_31",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:14:05Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_31
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 31
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_31.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_31', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_27
|
yanaiela
| 2022-07-29T22:49:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_27",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:10:38Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_27
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 27
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_27.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_27', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_26
|
yanaiela
| 2022-07-29T22:48:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_26",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:09:55Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_26
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 26
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_26.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_26', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_21
|
yanaiela
| 2022-07-29T22:47:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_21",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:06:01Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_21
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 21
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_21.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_21', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_20
|
yanaiela
| 2022-07-29T22:47:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_20",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:05:11Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_20
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 20
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_20.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_20', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_17
|
yanaiela
| 2022-07-29T22:46:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_17",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:02:47Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_17
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 17
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_17.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_17', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_15
|
yanaiela
| 2022-07-29T22:45:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_15",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:01:23Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_15
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 15
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_15.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_15', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_14
|
yanaiela
| 2022-07-29T22:45:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_14",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T17:00:38Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_14
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 14
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_14.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_14', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_9
|
yanaiela
| 2022-07-29T22:43:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_9",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T16:56:14Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_9
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 9
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_9.
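One way to use an early checkpoint such as this one is to measure how confident the model already is about a simple fact, relative to a later checkpoint. The sketch below is an illustration only (not taken from the paper's code) and assumes the target word maps to a single BPE token, as "France" does:
```
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def masked_nll(model_id, text, target_word):
    """Negative log-likelihood of `target_word` at the single <mask> position."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForMaskedLM.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    # RoBERTa's BPE needs the leading space to get the in-word token id.
    target_id = tokenizer(" " + target_word, add_special_tokens=False).input_ids[0]
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    return -log_probs[target_id].item()

for epoch in (9, 83):
    nll = masked_nll(f"yanaiela/roberta-base-epoch_{epoch}",
                     "Paris is the capital of <mask>.", "France")
    print(f"epoch {epoch}: NLL(France) = {nll:.3f}")
```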
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_9', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_7
|
yanaiela
| 2022-07-29T22:43:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_7",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T16:54:40Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_7
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 7
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_7.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_7', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
yanaiela/roberta-base-epoch_3
|
yanaiela
| 2022-07-29T22:41:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"roberta-base",
"roberta-base-epoch_3",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:2207.14251",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-28T16:51:25Z |
---
language: en
tags:
- roberta-base
- roberta-base-epoch_3
license: mit
datasets:
- wikipedia
- bookcorpus
---
# RoBERTa, Intermediate Checkpoint - Epoch 3
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We trained this model for almost 100K steps, corresponding to 83 epochs.
We release all 84 checkpoints (including the randomly initialized weights before training)
to enable the study of the training dynamics of such models, among other possible use cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_3.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences from the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, two publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_3', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schรผtze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|