| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| diegopetrola/vit-for-kaggle-mayo-clinic | diegopetrola | 2022-09-06T20:12:36Z | 226 | 0 | transformers | ["transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-08-14T01:01:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-for-kaggle-mayo-clinic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-for-kaggle-mayo-clinic
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5538
- Accuracy: 0.7616
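A minimal inference sketch, assuming the standard `transformers` image-classification pipeline applies to this checkpoint; the image path below is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the image-classification pipeline.
classifier = pipeline("image-classification", model="diegopetrola/vit-for-kaggle-mayo-clinic")

# "sample.png" is a placeholder path; any local image or PIL.Image works.
print(classifier("sample.png", top_k=2))
```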
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 0.5944 | 0.7483 |
| No log | 2.0 | 20 | 0.5640 | 0.7483 |
| No log | 3.0 | 30 | 0.5582 | 0.7483 |
| No log | 4.0 | 40 | 0.5585 | 0.7483 |
| No log | 5.0 | 50 | 0.5598 | 0.7483 |
| No log | 6.0 | 60 | 0.5484 | 0.7483 |
| No log | 7.0 | 70 | 0.5524 | 0.7417 |
| No log | 8.0 | 80 | 0.5538 | 0.7616 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| Imene/vit-base-patch16-384-wi4 | Imene | 2022-09-06T16:20:09Z | 81 | 0 | transformers | ["transformers", "tf", "tensorboard", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-09-06T09:00:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Imene/vit-base-patch16-384-wi4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Imene/vit-base-patch16-384-wi4
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1742
- Train Accuracy: 0.9982
- Train Top-3-accuracy: 0.9997
- Validation Loss: 1.5010
- Validation Accuracy: 0.5746
- Validation Top-3-accuracy: 0.8040
- Epoch: 10
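A minimal inference sketch, assuming the TensorFlow weights load with `TFViTForImageClassification` (the repository is tagged `tf`) and that preprocessing matches the base checkpoint; the image path is a placeholder:
```python
import tensorflow as tf
from PIL import Image
from transformers import ViTFeatureExtractor, TFViTForImageClassification

# Preprocessing is assumed to match the base checkpoint google/vit-base-patch16-384.
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-384")
model = TFViTForImageClassification.from_pretrained("Imene/vit-base-patch16-384-wi4")

image = Image.open("sample.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```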
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 3.7777 | 0.0845 | 0.1855 | 3.3754 | 0.1543 | 0.3014 | 0 |
| 2.7253 | 0.3277 | 0.5560 | 2.4975 | 0.3452 | 0.5892 | 1 |
| 2.0079 | 0.5236 | 0.7589 | 2.1228 | 0.4234 | 0.6882 | 2 |
| 1.5256 | 0.6663 | 0.8549 | 1.9117 | 0.4734 | 0.7445 | 3 |
| 1.1602 | 0.7712 | 0.9270 | 1.8059 | 0.5162 | 0.7560 | 4 |
| 0.8509 | 0.8659 | 0.9614 | 1.6534 | 0.5516 | 0.7758 | 5 |
| 0.5955 | 0.9353 | 0.9836 | 1.6139 | 0.5610 | 0.7935 | 6 |
| 0.4229 | 0.9687 | 0.9940 | 1.5655 | 0.5631 | 0.7925 | 7 |
| 0.3045 | 0.9859 | 0.9979 | 1.5290 | 0.5714 | 0.7987 | 8 |
| 0.2221 | 0.9958 | 0.9990 | 1.5061 | 0.5954 | 0.8008 | 9 |
| 0.1742 | 0.9982 | 0.9997 | 1.5010 | 0.5746 | 0.8040 | 10 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
| SiraH/wangchanberta-wiki-qa-finetuned-squad | SiraH | 2022-09-06T15:53:23Z | 121 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "camembert", "question-answering", "th", "endpoints_compatible", "region:us"] | question-answering | 2022-09-06T15:12:46Z |
---
language: th
widget:
- text: "สโมสรฟุตบอลเชลซีเล่นอยู่ในลีกอะไร"
context: "สโมสรฟุตบอลเชลซี (อังกฤษ: Chelsea Football Club) เป็นสโมสรฟุตบอลอาชีพที่ตั้งอยู่ในเขตฟูลัม, ลอนดอน ซึ่งเล่นอยู่ในพรีเมียร์ลีก ลีกสูงสุดของฟุตบอลอังกฤษ ก่อตั้งขึ้นใน ค.ศ. 1905 มีสนามเหย้าคือสแตมฟอร์ดบริดจ์ เป็นหนึ่งในสโมสรที่ประสบความสำเร็จมากที่สุดของอังกฤษ[3][4][5] ในการแข่งขันภายในประเทศ เชลซีชนะเลิศลีกสูงสุด 6 สมัย, เอฟเอคัพ 8 สมัย, ลีกคัพ 5 สมัย และ เอฟเอคอมมิวนิตีชีลด์ 4 สมัย และในการแข่งขันระหว่างประเทศ พวกเขาชนะเลิศยูฟ่าแชมเปียนส์ลีก 2 สมัย, ยูฟ่าคัพวินเนอร์สคัพ 2 สมัย, ยูฟ่ายูโรปาลีก 2 สมัย, ยูฟ่าซูเปอร์คัพ 2 สมัย และฟุตบอลชิงแชมป์สโมสรโลก 1 สมัย"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wangchanberta-wiki-qa-finetuned-squad
This model is a fine-tuned version of [airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa](https://huggingface.co/airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa) on the iapp_wiki_qa_squad dataset.
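A minimal inference sketch with the question-answering pipeline, reusing a shortened version of the widget example above:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="SiraH/wangchanberta-wiki-qa-finetuned-squad")

question = "สโมสรฟุตบอลเชลซีเล่นอยู่ในลีกอะไร"
context = ("สโมสรฟุตบอลเชลซี (อังกฤษ: Chelsea Football Club) เป็นสโมสรฟุตบอลอาชีพที่ตั้งอยู่ในเขตฟูลัม, ลอนดอน "
           "ซึ่งเล่นอยู่ในพรีเมียร์ลีก ลีกสูงสุดของฟุตบอลอังกฤษ")

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```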
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| Azizjah/autotrain-arabic_cuisine-1367052683 | Azizjah | 2022-09-06T15:18:01Z | 105 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "ar", "dataset:Azizjah/autotrain-data-arabic_cuisine", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-06T15:14:52Z |
---
tags:
- autotrain
- text-classification
language:
- ar
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Azizjah/autotrain-data-arabic_cuisine
co2_eq_emissions:
emissions: 0.02430968865158923
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1367052683
- CO2 Emissions (in grams): 0.0243
## Validation Metrics
- Loss: 2.302
- Accuracy: 0.439
- Macro F1: 0.133
- Micro F1: 0.439
- Weighted F1: 0.391
- Macro Precision: 0.167
- Micro Precision: 0.439
- Weighted Precision: 0.378
- Macro Recall: 0.140
- Micro Recall: 0.439
- Weighted Recall: 0.439
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Azizjah/autotrain-arabic_cuisine-1367052683
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Azizjah/autotrain-arabic_cuisine-1367052683", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Azizjah/autotrain-arabic_cuisine-1367052683", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| huggingtweets/getfactet | huggingtweets | 2022-09-06T14:26:31Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-09-04T16:12:48Z |
---
language: en
thumbnail: http://www.huggingtweets.com/getfactet/1662474386697/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1440145130984390662/yv8sN87h_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">GETFact</div>
<div style="text-align: center; font-size: 14px;">@getfactet</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from GETFact.
| Data | GETFact |
| --- | --- |
| Tweets downloaded | 288 |
| Retweets | 8 |
| Short tweets | 64 |
| Tweets kept | 216 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e780yfa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @getfactet's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vceqvxdg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vceqvxdg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/getfactet')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| jmstadt/tire-types | jmstadt | 2022-09-06T13:38:54Z | 197 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-09-06T13:38:43Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: tire-types
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7230769395828247
---
# tire-types
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
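A minimal inference sketch, assuming the standard `transformers` image-classification pipeline; the image path is a placeholder and the labels correspond to the classes shown below:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jmstadt/tire-types")
print(classifier("tire.jpg"))  # "tire.jpg" is a placeholder path
```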
## Example Images
#### all-terrain tire

#### competition tire

#### passenger tire

| rahulmallah/autotrain-emotion-detection-1366352626 | rahulmallah | 2022-09-06T13:18:36Z | 104 | 1 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:rahulmallah/autotrain-data-emotion-detection", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-06T13:14:23Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- rahulmallah/autotrain-data-emotion-detection
co2_eq_emissions:
emissions: 0.037160667072201545
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1366352626
- CO2 Emissions (in grams): 0.0372
## Validation Metrics
- Loss: 1.772
- Accuracy: 0.394
- Macro F1: 0.197
- Micro F1: 0.394
- Weighted F1: 0.351
- Macro Precision: 0.217
- Micro Precision: 0.394
- Weighted Precision: 0.345
- Macro Recall: 0.213
- Micro Recall: 0.394
- Weighted Recall: 0.394
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/rahulmallah/autotrain-emotion-detection-1366352626
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rahulmallah/autotrain-emotion-detection-1366352626", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rahulmallah/autotrain-emotion-detection-1366352626", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| shed-e/bert-finetuned-squad | shed-e | 2022-09-06T12:23:32Z | 102 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-09-06T10:01:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
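A minimal inference sketch with the question-answering pipeline; the question/context pair is an arbitrary example:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="shed-e/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",  # arbitrary example
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], round(result["score"], 3))
```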
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| mrm8488/electricidad-base-finetuned-sst2-es | mrm8488 | 2022-09-06T12:09:35Z | 101 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "sste", "dataset:sst2-es-mt", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-03T15:55:00Z |
---
language:
- es
tags:
- generated_from_trainer
- sste
datasets:
- sst2-es-mt
metrics:
- accuracy
- f1
model-index:
- name: electricidad-base-finetuned-sst2-es
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sst2-es-mt
type: sst2-es-mt
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9013761467889908
- name: F1
type: f1
value: 0.9033707865168539
widget:
- text: "La verdad es que no tengo una opinión sólida al respecto."
- text: "Me parece muy interesante poder colaborar en este proyecto."
- text: "El gobierno actual no lo está haciendo mal, pero debe mejorar."
- text: "Esperaba mucho más por el precio que he pagado, la verdad..."
- text: "El proyecto BERTIN tiene cosas interesantes, pero tienen que trabajar más duro si quieren llegar lejos."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Electricidad (base) fine-tuned on sst2-es-mt for Spanish Sentiment Analysis 👍👎
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the **sst2-es-mt** [dataset](https://huggingface.co/datasets/sst2-es-mt), a dataset created by applying neural machine translation to the original [SST2](https://huggingface.co/datasets/sst2) (English) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4377
- Accuracy: 0.9014
- F1: 0.9034
## Usage
```py
from transformers import pipeline
model_ckpt = "mrm8488/electricidad-base-finetuned-sst2-es"
classifier = pipeline("sentiment-analysis", model=model_ckpt)
classifier("Here your text in Spanish!")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2521 | 1.0 | 2105 | 0.2837 | 0.9002 | 0.9019 |
| 0.1694 | 2.0 | 4210 | 0.3175 | 0.8933 | 0.8920 |
| 0.1245 | 3.0 | 6315 | 0.3606 | 0.8945 | 0.8987 |
| 0.0934 | 4.0 | 8420 | 0.4419 | 0.9002 | 0.9037 |
| 0.069 | 5.0 | 10525 | 0.4377 | 0.9014 | 0.9034 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| mrm8488/electricidad-small-finetuned-sst2-es | mrm8488 | 2022-09-06T12:08:55Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:sst2-es-mt", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-03T15:37:30Z |
---
tags:
- generated_from_trainer
datasets:
- sst2-es-mt
metrics:
- accuracy
- f1
model-index:
- name: electricidad-small-finetuned-sst2-es
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sst2-es-mt
type: sst2-es-mt
config: sst2
split: train
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8795871559633027
- name: F1
type: f1
value: 0.88
widget:
- text: "La verdad es que no tengo una opinión sólida al respecto."
- text: "Me parece muy interesante poder colaborar en este proyecto."
- text: "El gobierno actual no lo está haciendo mal, pero debe mejorar."
- text: "Esperaba mucho más por el precio que he pagado, la verdad..."
- text: "El proyecto BERTIN tiene cosas interesantes, pero tienen que trabajar más duro si quieren llegar lejos."
---
# Electricidad (small) fine-tuned on sst2-es-mt for Spanish Sentiment Analysis 👍👎
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on the **sst2-es-mt** [dataset](https://huggingface.co/datasets/sst2-es-mt), a dataset created by applying neural machine translation to the original [SST2](https://huggingface.co/datasets/sst2) (English) dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.879
- F1: 0.88
## Usage
```py
from transformers import pipeline
model_ckpt = "mrm8488/electricidad-small-finetuned-sst2-es"
classifier = pipeline("sentiment-analysis", model=model_ckpt)
classifier("Here your text in Spanish!")
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.676560994488171e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3272 | 1.0 | 2105 | 0.3418 | 0.8567 | 0.8489 |
| 0.2394 | 2.0 | 4210 | 0.3391 | 0.8796 | 0.88 |
| 0.192 | 3.0 | 6315 | 0.3644 | 0.8761 | 0.8770 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| Dazzid/distilbert-base-uncased-finetuned-emotion | Dazzid | 2022-09-06T11:46:54Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-06T10:21:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9280684459516919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2096
- Accuracy: 0.928
- F1: 0.9281
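A minimal inference sketch with the text-classification pipeline; the input sentence is an arbitrary example and the labels come from the emotion dataset:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dazzid/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy to see you again!"))  # arbitrary example sentence
```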
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8334 | 1.0 | 250 | 0.3126 | 0.9055 | 0.9022 |
| 0.2459 | 2.0 | 500 | 0.2096 | 0.928 | 0.9281 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| Hoax0930/kftt | Hoax0930 | 2022-09-06T11:06:19Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-09-02T07:09:06Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: kftt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kftt
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8809
- Bleu: 15.2565
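A minimal inference sketch with the translation pipeline, assuming the English→Japanese direction of the base `Helsinki-NLP/opus-tatoeba-en-ja` checkpoint; the input sentence is an arbitrary example:
```python
from transformers import pipeline

translator = pipeline("translation", model="Hoax0930/kftt")
print(translator("The temple was built in the Heian period.")[0]["translation_text"])  # arbitrary example
```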
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| abdouaziiz/wav2vec2-xls-r-300m-wolof-lm | abdouaziiz | 2022-09-06T10:35:32Z | 105 | 2 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "asr", "wolof", "wo", "license:mit", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- automatic-speech-recognition
- asr
- pytorch
- wav2vec2
- wolof
- wo
model-index:
- name: wav2vec2-xls-r-300m-wolof-lm
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 21.25
- name: Validation Loss
type: Loss
value: 0.36
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-wolof-lm
Wolof is a language spoken in Senegal and neighbouring countries. It is under-represented in NLP, with few resources available for either text or speech, and this repository is our contribution towards filling that gap.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m), combined with a language model, fine-tuned on the largest available Wolof speech dataset, [ALFFA_PUBLIC](https://github.com/besacier/ALFFA_PUBLIC/tree/master/ASR/WOLOF).
It achieves the following results on the evaluation set:
- Loss: 0.367826
- Wer: 0.212565
## Model description
The training data amounts to 16.8 hours of audio, split into 10,000 files for training and 3,339 for testing.
## Training and evaluation data
The model is evaluated and logged every 1,500 steps and saved every 33,340 steps.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 3
- eval_batch_size: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10.0
### Training results
| Step  | Training Loss | Validation Loss | Wer      |
|:-----:|:-------------:|:---------------:|:--------:|
| 1500  | 2.854200      | 0.642243        | 0.543964 |
| 3000  | 0.599200      | 0.468138        | 0.429549 |
| 4500  | 0.468300      | 0.433436        | 0.405644 |
| 6000  | 0.427000      | 0.384873        | 0.344150 |
| 7500  | 0.377000      | 0.374003        | 0.323892 |
| 9000  | 0.337000      | 0.363674        | 0.306189 |
| 10500 | 0.302400      | 0.349884        | 0.283908 |
| 12000 | 0.264100      | 0.344104        | 0.277120 |
| 13500 | 0.254000      | 0.341820        | 0.271316 |
| 15000 | 0.208400      | 0.326502        | 0.260695 |
| 16500 | 0.203500      | 0.326209        | 0.250313 |
| 18000 | 0.159800      | 0.323539        | 0.239851 |
| 19500 | 0.158200      | 0.310694        | 0.230028 |
| 21000 | 0.132800      | 0.338318        | 0.229283 |
| 22500 | 0.112800      | 0.336765        | 0.224145 |
| 24000 | 0.103600      | 0.350208        | 0.227073 |
| 25500 | 0.091400      | 0.353609        | 0.221589 |
| 27000 | 0.084400      | 0.367826        | 0.212565 |
## Usage
The model can be used directly as follows:
```python
import re
import warnings

import librosa
import pandas as pd
import torch
from transformers import AutoProcessor, AutoModelForCTC
from datasets import Dataset, DatasetDict, load_metric

wer_metric = load_metric("wer")

# Test.csv contains the columns "file" and "transcription"
wolof = pd.read_csv('Test.csv')
wolof = DatasetDict({'test': Dataset.from_pandas(wolof)})

chars_to_ignore_regex = r'[\"\?\.\!\-\;\:\(\)\,]'

def remove_special_characters(batch):
    batch["transcription"] = re.sub(chars_to_ignore_regex, '', batch["transcription"]).lower() + " "
    return batch

wolof = wolof.map(remove_special_characters)

processor = AutoProcessor.from_pretrained("abdouaziiz/wav2vec2-xls-r-300m-wolof-lm")
model = AutoModelForCTC.from_pretrained("abdouaziiz/wav2vec2-xls-r-300m-wolof-lm")
warnings.filterwarnings("ignore")

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["file"], sr=16000)
    batch["speech"] = speech_array.astype('float16')
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["transcription"]
    return batch

wolof = wolof.map(speech_file_to_array_fn, remove_columns=wolof.column_names["test"], num_proc=1)

def map_to_result(batch):
    model.to("cuda")
    input_values = processor(
        batch["speech"],
        sampling_rate=batch["sampling_rate"],
        return_tensors="pt"
    ).input_values.to("cuda")
    with torch.no_grad():
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    return batch

results = wolof["test"].map(map_to_result)

# The raw "transcription" column is dropped by remove_columns above, so the
# references come from the "target_text" column created in speech_file_to_array_fn.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["target_text"])))
```
## PS
The results obtained can be improved further by:
- Combining wav2vec2 with a language model
- Building a spellchecker from the text of the data
- Applying sentence edit distance
| DrishtiSharma/finetuned-SwinT-Indian-Food-Classification-v3 | DrishtiSharma | 2022-09-06T10:06:44Z | 221 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-09-03T12:16:00Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: finetuned-SwinT-Indian-Food-Classification-v3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: Indian-Food-Images
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9436769394261424
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-SwinT-Indian-Food-Classification-v3
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the Indian-Food-Images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2910
- Accuracy: 0.9437
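A minimal inference sketch loading the checkpoint explicitly; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

repo = "DrishtiSharma/finetuned-SwinT-Indian-Food-Classification-v3"
feature_extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("dish.jpg")  # placeholder path to a food photo
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```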
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9511 | 0.3 | 100 | 0.6092 | 0.8172 |
| 0.6214 | 0.6 | 200 | 0.4406 | 0.8672 |
| 0.7355 | 0.9 | 300 | 0.3665 | 0.8927 |
| 0.6078 | 1.2 | 400 | 0.3285 | 0.9065 |
| 0.439 | 1.5 | 500 | 0.3855 | 0.8916 |
| 0.3644 | 1.8 | 600 | 0.4082 | 0.8969 |
| 0.4748 | 2.1 | 700 | 0.3496 | 0.9022 |
| 0.3966 | 2.4 | 800 | 0.3626 | 0.8905 |
| 0.5799 | 2.7 | 900 | 0.4833 | 0.8767 |
| 0.2995 | 3.0 | 1000 | 0.3387 | 0.9044 |
| 0.3152 | 3.3 | 1100 | 0.3739 | 0.9097 |
| 0.3284 | 3.6 | 1200 | 0.4217 | 0.8916 |
| 0.3631 | 3.9 | 1300 | 0.4118 | 0.9044 |
| 0.219 | 4.2 | 1400 | 0.3721 | 0.9139 |
| 0.2874 | 4.5 | 1500 | 0.3030 | 0.9288 |
| 0.2819 | 4.8 | 1600 | 0.4056 | 0.9150 |
| 0.1755 | 5.11 | 1700 | 0.4039 | 0.9097 |
| 0.2462 | 5.41 | 1800 | 0.3550 | 0.9118 |
| 0.1737 | 5.71 | 1900 | 0.3444 | 0.9150 |
| 0.174 | 6.01 | 2000 | 0.3667 | 0.9160 |
| 0.1536 | 6.31 | 2100 | 0.3301 | 0.9288 |
| 0.0911 | 6.61 | 2200 | 0.3390 | 0.9299 |
| 0.0907 | 6.91 | 2300 | 0.2923 | 0.9288 |
| 0.0921 | 7.21 | 2400 | 0.3502 | 0.9256 |
| 0.1662 | 7.51 | 2500 | 0.3197 | 0.9341 |
| 0.0607 | 7.81 | 2600 | 0.3092 | 0.9362 |
| 0.111 | 8.11 | 2700 | 0.3146 | 0.9394 |
| 0.0588 | 8.41 | 2800 | 0.3069 | 0.9341 |
| 0.131 | 8.71 | 2900 | 0.2971 | 0.9405 |
| 0.1903 | 9.01 | 3000 | 0.3078 | 0.9384 |
| 0.2116 | 9.31 | 3100 | 0.3112 | 0.9341 |
| 0.1415 | 9.61 | 3200 | 0.2956 | 0.9405 |
| 0.1106 | 9.91 | 3300 | 0.2910 | 0.9437 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| burakyldrm/wav2vec2-burak | burakyldrm | 2022-09-06T09:10:06Z | 159 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-09-05T22:06:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-burak
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4954
- Wer: 0.3969
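A minimal inference sketch with the automatic-speech-recognition pipeline; the audio path is a placeholder and should point to a 16 kHz recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="burakyldrm/wav2vec2-burak")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```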
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8973 | 7.02 | 400 | 0.7101 | 0.7295 |
| 0.3118 | 14.03 | 800 | 0.5377 | 0.5062 |
| 0.1272 | 21.05 | 1200 | 0.5462 | 0.4296 |
| 0.0723 | 28.07 | 1600 | 0.4954 | 0.3969 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| nithiroj/testpyramidsrnd | nithiroj | 2022-09-06T09:09:04Z | 16 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us"] | reinforcement-learning | 2022-09-06T09:08:56Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: nithiroj/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| huggingtweets/anandmahindra-opensea-rs5_eth | huggingtweets | 2022-09-06T08:05:24Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-09-06T08:05:17Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1519250334837010432/tl8b_5l4_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544105652330631168/ZuvjfGkT_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1554326129053671424/2QQjcfK8_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𝕽́͝𝕾͋̀͝5̓̐͝ & OpenSea & anand mahindra</div>
<div style="text-align: center; font-size: 14px;">@anandmahindra-opensea-rs5_eth</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𝕽́͝𝕾͋̀͝5̓̐͝ & OpenSea & anand mahindra.
| Data | 𝕽́͝𝕾͋̀͝5̓̐͝ | OpenSea | anand mahindra |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3241 | 3240 |
| Retweets | 0 | 1720 | 705 |
| Short tweets | 1994 | 319 | 177 |
| Tweets kept | 1255 | 1202 | 2358 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1s8dmqks/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anandmahindra-opensea-rs5_eth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5ltilq18) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5ltilq18/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/anandmahindra-opensea-rs5_eth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| clementchadebec/reproduced_svae | clementchadebec | 2022-09-06T07:23:41Z | 0 | 1 | pythae | ["pythae", "reproducibility", "en", "license:apache-2.0", "region:us"] | null | 2022-08-19T19:51:40Z |
---
language: en
tags:
- pythae
- reproducibility
license: apache-2.0
---
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_svae")
```
## Reproducibility
This trained model reproduces the results of Table 1 in [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| SVAE | Dyn. Binarized MNIST | NLL (500 IS) | 93.13 (0.01) | 93.16 (0.31) |
[1] Tim R Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M Tomczak. Hyperspherical variational auto-encoders. In 34th Conference on Uncertainty in Artificial Intelligence 2018, UAI 2018, pages 856–865. Association For Uncertainty in Artificial Intelligence (AUAI), 2018.
| unfinity/dqn-SpaceInvadersNoFrameskip-v4 | unfinity | 2022-09-06T07:00:55Z | 2 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-09-06T07:00:21Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 559.00 +/- 117.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga unfinity -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga unfinity
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| Sameen53/training_45k_V2 | Sameen53 | 2022-09-06T06:17:17Z | 104 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-09-06T06:13:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: training_45k_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training_45k_V2
This model is a fine-tuned version of [Sameen53/training_45k](https://huggingface.co/Sameen53/training_45k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.1673
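A minimal inference sketch with the automatic-speech-recognition pipeline; the audio path is a placeholder, and `chunk_length_s` is only needed for long recordings:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sameen53/training_45k_V2")
print(asr("recording.wav", chunk_length_s=30)["text"])  # placeholder path
```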
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.26 | 1500 | inf | 0.1660 |
| 0.2522 | 2.51 | 3000 | inf | 0.1625 |
| 0.2427 | 3.77 | 4500 | inf | 0.1665 |
| 0.2333 | 5.02 | 6000 | inf | 0.1629 |
| 0.2692 | 6.28 | 7500 | inf | 0.1673 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| aaazzzz/autotrain-cuisine_classification-1361652530 | aaazzzz | 2022-09-06T05:55:22Z | 102 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:aaazzzz/autotrain-data-cuisine_classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-06T04:30:29Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aaazzzz/autotrain-data-cuisine_classification
co2_eq_emissions:
emissions: 181.03886827858415
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1361652530
- CO2 Emissions (in grams): 181.0389
## Validation Metrics
- Loss: 0.920
- Accuracy: 0.731
- Macro F1: 0.669
- Micro F1: 0.731
- Weighted F1: 0.726
- Macro Precision: 0.774
- Micro Precision: 0.731
- Weighted Precision: 0.738
- Macro Recall: 0.623
- Micro Recall: 0.731
- Weighted Recall: 0.731
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aaazzzz/autotrain-cuisine_classification-1361652530
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("aaazzzz/autotrain-cuisine_classification-1361652530", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("aaazzzz/autotrain-cuisine_classification-1361652530", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| Mcy/bert-base-uncased-finetuned-classification | Mcy | 2022-09-06T04:59:03Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-30T14:04:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 38.9115
- Mse: 38.9115
- Mae: 4.5330
- R2: 0.7802
- Accuracy: 0.1620
- Msev: 0.0257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy | Msev |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:--------:|:------:|
| 12.4524 | 1.0 | 5215 | 43.9797 | 43.9797 | 4.8194 | 0.7515 | 0.1693 | 0.0227 |
| 4.393 | 2.0 | 10430 | 39.2333 | 39.2333 | 4.6028 | 0.7783 | 0.1737 | 0.0255 |
| 2.424 | 3.0 | 15645 | 41.3763 | 41.3763 | 4.6597 | 0.7662 | 0.1620 | 0.0242 |
| 1.781 | 4.0 | 20860 | 39.4309 | 39.4309 | 4.5960 | 0.7772 | 0.1767 | 0.0254 |
| 1.3608 | 5.0 | 26075 | 38.9115 | 38.9115 | 4.5330 | 0.7802 | 0.1620 | 0.0257 |
| 1.2014 | 6.0 | 31290 | 39.7403 | 39.7403 | 4.5850 | 0.7755 | 0.1716 | 0.0252 |
| 1.0742 | 7.0 | 36505 | 40.4495 | 40.4495 | 4.6133 | 0.7715 | 0.1685 | 0.0247 |
| 0.837 | 8.0 | 41720 | 39.5864 | 39.5864 | 4.5630 | 0.7763 | 0.1620 | 0.0253 |
| 0.8054 | 9.0 | 46935 | 39.9482 | 39.9482 | 4.5839 | 0.7743 | 0.1569 | 0.0250 |
| 0.8085 | 10.0 | 52150 | 39.5685 | 39.5685 | 4.5669 | 0.7764 | 0.1573 | 0.0253 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| vihu/ppo-LunarLander-v2 | vihu | 2022-09-06T04:47:33Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-09-06T04:47:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 261.03 +/- 22.10
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
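A minimal loading sketch, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming convention used by `huggingface_sb3` (adjust the filename to the actual file in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption based on the common naming convention;
# check the repository's file list and adjust if needed.
checkpoint = load_from_hub(repo_id="vihu/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
print(model.policy)
```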
| neuralspace/autotrain-ecomm1.8-1360552485 | neuralspace | 2022-09-06T04:02:09Z | 104 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:neuralspace/autotrain-data-ecomm1.8", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-06T03:56:18Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- neuralspace/autotrain-data-ecomm1.8
co2_eq_emissions:
emissions: 0.034797737604122594
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1360552485
- CO2 Emissions (in grams): 0.0348
## Validation Metrics
- Loss: 0.539
- Accuracy: 0.914
- Macro F1: 0.903
- Micro F1: 0.914
- Weighted F1: 0.907
- Macro Precision: 0.927
- Micro Precision: 0.914
- Weighted Precision: 0.928
- Macro Recall: 0.907
- Micro Recall: 0.914
- Weighted Recall: 0.914
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/neuralspace/autotrain-ecomm1.8-1360552485
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("neuralspace/autotrain-ecomm1.8-1360552485", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("neuralspace/autotrain-ecomm1.8-1360552485", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| swtx/Erlangshen-Roberta-110M-POI | swtx | 2022-09-06T03:10:44Z | 159 | 0 | transformers | ["transformers", "pytorch", "bert", "feature-extraction", "license:apache-2.0", "endpoints_compatible", "region:us"] | feature-extraction | 2022-09-06T03:00:05Z |
---
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "湖北省黄冈市麻城市中国中部(麻城)石材循环经济产业园厦门路麻城盈泰环保科技有限公司[SEP]黄冈市麻城市中国中部石材循环经济产业园厦门路麻城盈泰环保科技有限公司"
---
# Erlangshen-Roberta-110M-POI (Chinese)
We added POI (point-of-interest) data, 5,000,000 samples in total. Our model is mainly based on [roberta](https://github.com/IDEA-CCNL/Erlangshen-Roberta-110M-Similarity).
## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('swtx/Erlangshen-Roberta-110M-POI')
model=BertForSequenceClassification.from_pretrained('swtx/Erlangshen-Roberta-110M-POI')
texta='湖北省黄冈市麻城市中国中部(麻城)石材循环经济产业园厦门路麻城盈泰环保科技有限公司'
textb='黄冈市麻城市中国中部石材循环经济产业园厦门路麻城盈泰环保科技有限公司'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
| marvind434/swin-tiny-patch4-window7-224-finetuned-eurosat | marvind434 | 2022-09-06T03:04:11Z | 217 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-08-24T05:20:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3026
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.0940 | 0.25 |
| No log | 2.0 | 2 | 0.9836 | 0.25 |
| No log | 3.0 | 3 | 0.7624 | 0.25 |
| No log | 4.0 | 4 | 0.6527 | 0.5 |
| No log | 5.0 | 5 | 0.5697 | 0.75 |
| No log | 6.0 | 6 | 0.5167 | 1.0 |
| No log | 7.0 | 7 | 0.4898 | 0.75 |
| No log | 8.0 | 8 | 0.4572 | 0.75 |
| No log | 9.0 | 9 | 0.4286 | 0.75 |
| 0.299 | 10.0 | 10 | 0.3976 | 0.75 |
| 0.299 | 11.0 | 11 | 0.3678 | 1.0 |
| 0.299 | 12.0 | 12 | 0.3531 | 1.0 |
| 0.299 | 13.0 | 13 | 0.3384 | 1.0 |
| 0.299 | 14.0 | 14 | 0.3264 | 1.0 |
| 0.299 | 15.0 | 15 | 0.3188 | 1.0 |
| 0.299 | 16.0 | 16 | 0.3114 | 1.0 |
| 0.299 | 17.0 | 17 | 0.3083 | 1.0 |
| 0.299 | 18.0 | 18 | 0.3071 | 1.0 |
| 0.299 | 19.0 | 19 | 0.3041 | 1.0 |
| 0.2051 | 20.0 | 20 | 0.3026 | 1.0 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| RohanK447/swin-tiny-patch4-window7-224-finetuned-vosap | RohanK447 | 2022-09-06T03:02:08Z | 230 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-09-05T21:29:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-vosap
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.75
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-vosap
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4894
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.4894 | 0.75 |
| No log | 2.0 | 2 | 0.5365 | 0.5 |
| No log | 3.0 | 3 | 0.6957 | 0.5 |
| No log | 4.0 | 4 | 0.6781 | 0.5 |
| No log | 5.0 | 5 | 0.5617 | 0.5 |
| No log | 6.0 | 6 | 0.4461 | 0.75 |
| No log | 7.0 | 7 | 0.3368 | 0.75 |
| No log | 8.0 | 8 | 0.3289 | 0.75 |
| No log | 9.0 | 9 | 0.3642 | 0.75 |
| 0.0539 | 10.0 | 10 | 0.4334 | 0.75 |
| 0.0539 | 11.0 | 11 | 0.5582 | 0.5 |
| 0.0539 | 12.0 | 12 | 0.6676 | 0.5 |
| 0.0539 | 13.0 | 13 | 0.7586 | 0.5 |
| 0.0539 | 14.0 | 14 | 0.7937 | 0.5 |
| 0.0539 | 15.0 | 15 | 0.7986 | 0.5 |
| 0.0539 | 16.0 | 16 | 0.7619 | 0.5 |
| 0.0539 | 17.0 | 17 | 0.7134 | 0.5 |
| 0.0539 | 18.0 | 18 | 0.6725 | 0.5 |
| 0.0539 | 19.0 | 19 | 0.6390 | 0.5 |
| 0.0297 | 20.0 | 20 | 0.6222 | 0.5 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
rajistics/finetuned-indian-food
|
rajistics
| 2022-09-06T02:49:44Z | 255 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-15T18:47:54Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: finetuned-indian-food
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: indian_food_images
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9330499468650372
widget:
- src: https://huggingface.co/rajistics/finetuned-indian-food/resolve/main/003.jpg
example_title: fried_rice
- src: https://huggingface.co/rajistics/finetuned-indian-food/resolve/main/126.jpg
example_title: paani_puri
- src: https://huggingface.co/rajistics/finetuned-indian-food/resolve/main/401.jpg
example_title: chapati
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2632
- Accuracy: 0.9330
## Model description
More information needed
## Intended uses & limitations
More information needed
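In lieu of further details, here is a minimal PyTorch inference sketch (assuming the standard ViT image-classification head; the example image is one of the widget images listed in this repository):
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

model_id = "rajistics/finetuned-indian-food"
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

# One of the widget example images from this repository (fried rice).
url = "https://huggingface.co/rajistics/finetuned-indian-food/resolve/main/003.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```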
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1794 | 0.3 | 100 | 0.9208 | 0.8565 |
| 0.6513 | 0.6 | 200 | 0.5410 | 0.8842 |
| 0.5904 | 0.9 | 300 | 0.4978 | 0.8799 |
| 0.4461 | 1.2 | 400 | 0.3669 | 0.9192 |
| 0.5633 | 1.5 | 500 | 0.4340 | 0.8842 |
| 0.2489 | 1.8 | 600 | 0.3355 | 0.9171 |
| 0.3171 | 2.1 | 700 | 0.3286 | 0.9192 |
| 0.3785 | 2.4 | 800 | 0.3232 | 0.9171 |
| 0.2278 | 2.7 | 900 | 0.3338 | 0.9192 |
| 0.0894 | 3.0 | 1000 | 0.2870 | 0.9245 |
| 0.2092 | 3.3 | 1100 | 0.2884 | 0.9288 |
| 0.1466 | 3.6 | 1200 | 0.2673 | 0.9320 |
| 0.1789 | 3.9 | 1300 | 0.2632 | 0.9330 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Imene/vit-base-patch16-384-wi3
|
Imene
| 2022-09-06T00:06:23Z | 81 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-05T18:53:02Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Imene/vit-base-patch16-384-wi3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Imene/vit-base-patch16-384-wi3
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2020
- Train Accuracy: 0.9984
- Train Top-3-accuracy: 0.9997
- Validation Loss: 1.4297
- Validation Accuracy: 0.6195
- Validation Top-3-accuracy: 0.8298
- Epoch: 11
## Model description
More information needed
## Intended uses & limitations
More information needed
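As a placeholder, a minimal TensorFlow inference sketch (the repository ships TF weights; the class names assume the standard ViT API in `transformers`, and `example.jpg` is a placeholder path):
```python
import tensorflow as tf
from PIL import Image
from transformers import ViTFeatureExtractor, TFViTForImageClassification

model_id = "Imene/vit-base-patch16-384-wi3"
feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
model = TFViTForImageClassification.from_pretrained(model_id)

# Placeholder image; the extractor resizes inputs to 384x384.
image = Image.open("example.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="tf")

logits = model(**inputs).logits
top3 = tf.math.top_k(tf.nn.softmax(logits, axis=-1), k=3)
for score, idx in zip(top3.values[0].numpy(), top3.indices[0].numpy()):
    print(model.config.id2label[int(idx)], float(score))
```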
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 3.6575 | 0.0902 | 0.1945 | 3.1772 | 0.2028 | 0.3980 | 0 |
| 2.5870 | 0.3473 | 0.6048 | 2.3845 | 0.3717 | 0.6208 | 1 |
| 1.8813 | 0.5553 | 0.7895 | 2.0262 | 0.4431 | 0.7196 | 2 |
| 1.4326 | 0.6815 | 0.8754 | 1.8856 | 0.4793 | 0.7384 | 3 |
| 1.0572 | 0.7989 | 0.9439 | 1.6570 | 0.5369 | 0.7960 | 4 |
| 0.7740 | 0.8838 | 0.9749 | 1.6103 | 0.5557 | 0.7960 | 5 |
| 0.5593 | 0.9417 | 0.9900 | 1.5303 | 0.5695 | 0.8173 | 6 |
| 0.4151 | 0.9709 | 0.9975 | 1.4939 | 0.5795 | 0.8185 | 7 |
| 0.3176 | 0.9884 | 0.9978 | 1.4553 | 0.5832 | 0.8248 | 8 |
| 0.2582 | 0.9950 | 0.9991 | 1.4500 | 0.6020 | 0.8248 | 9 |
| 0.2222 | 0.9978 | 0.9994 | 1.4315 | 0.6108 | 0.8310 | 10 |
| 0.2020 | 0.9984 | 0.9997 | 1.4297 | 0.6195 | 0.8298 | 11 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
julien-c/hotdog-not-hotdog
|
julien-c
| 2022-09-05T21:30:21Z | 1,778 | 28 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"coreml",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- huggingpics
metrics:
- accuracy
model-index:
- name: hotdog-not-hotdog
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.824999988079071
---
# hotdog-not-hotdog
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### hot dog

#### not hot dog

|
curt-tigges/Reinforce-Cartpole
|
curt-tigges
| 2022-09-05T20:40:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-05T20:39:17Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 72.60 +/- 13.69
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
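The checkpoint format used in the course exercise is not documented in this card, but the underlying technique is plain REINFORCE; the sketch below shows a hypothetical two-layer policy and the episode-level update, not necessarily the exact architecture stored in this repository:
```python
import torch
import torch.nn as nn

# Hypothetical CartPole policy: 4 observation dims -> 2 discrete actions.
class Policy(nn.Module):
    def __init__(self, obs_dim=4, hidden=16, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def act(self, obs):
        dist = torch.distributions.Categorical(logits=self.net(obs))
        action = dist.sample()
        return action.item(), dist.log_prob(action)

# REINFORCE update computed from one episode's log-probs and rewards.
def reinforce_loss(log_probs, rewards, gamma=0.99):
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted return-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```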
|
wonkwonlee/distilbert-base-uncased-finetuned-cola
|
wonkwonlee
| 2022-09-05T19:37:47Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-12T18:42:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5474713423103301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5263
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
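Until more details are added, a minimal inference sketch (assuming the standard `text-classification` pipeline; in GLUE CoLA, label 0 means unacceptable and 1 means acceptable, but the saved config may expose them as `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wonkwonlee/distilbert-base-uncased-finetuned-cola",
)

# CoLA-style acceptability judgments: grammatical vs. ungrammatical word order.
print(classifier("The book was written by the author."))
print(classifier("The book was written the author by."))
```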
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5222 | 1.0 | 535 | 0.5384 | 0.4304 |
| 0.3494 | 2.0 | 1070 | 0.5128 | 0.4975 |
| 0.2381 | 3.0 | 1605 | 0.5263 | 0.5475 |
| 0.1753 | 4.0 | 2140 | 0.7498 | 0.5354 |
| 0.1243 | 5.0 | 2675 | 0.8013 | 0.5414 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Guruji108/xlm-roberta-base-finetuned-panx-fr
|
Guruji108
| 2022-09-05T19:16:42Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T18:59:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8299296953465015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2848
- F1: 0.8299
## Model description
More information needed
## Intended uses & limitations
More information needed
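In lieu of further details, a minimal NER sketch (assuming the standard `token-classification` pipeline; the entity label set follows whatever PAN-X tag scheme was used during training):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Guruji108/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Emmanuel Macron a visité Marseille avec des représentants de l'ONU."))
```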
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5989 | 1.0 | 191 | 0.3383 | 0.7928 |
| 0.2617 | 2.0 | 382 | 0.2966 | 0.8318 |
| 0.1672 | 3.0 | 573 | 0.2848 | 0.8299 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rlpeter70/xlm-roberta-base-finetuned-panx-all
|
rlpeter70
| 2022-09-05T18:53:53Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T18:25:20Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rlpeter70/xlm-roberta-base-finetuned-panx-it
|
rlpeter70
| 2022-09-05T18:09:26Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T17:53:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8124233755619126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rlpeter70/xlm-roberta-base-finetuned-panx-fr
|
rlpeter70
| 2022-09-05T17:52:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T17:35:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8346456692913387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.8346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 |
| 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 |
| 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
model-attribution-challenge/distilgpt2
|
model-attribution-challenge
| 2022-09-05T17:45:18Z | 166 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"coreml",
"gpt2",
"text-generation",
"exbert",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:2201.08542",
"arxiv:2203.12574",
"arxiv:1910.09700",
"arxiv:1503.02531",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-26T13:34:09Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- openwebtext
model-index:
- name: distilgpt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
type: wikitext
name: WikiText-103
metrics:
- type: perplexity
name: Perplexity
value: 21.1
co2_eq_emissions: 149200
---
# DistilGPT2
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2).
## Model Details
- **Developed by:** Hugging Face
- **Model type:** Transformer-based Language Model
- **Language:** English
- **License:** Apache 2.0
- **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2.
- **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/).
## Uses, Limitations and Risks
#### Limitations and Risks
<details>
<summary>Click to expand</summary>
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context.
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"},
{'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'},
{'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}]
>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
[{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'},
{'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'},
{'generated_text': 'The Black man worked as a police spokesman four months ago...'}]
```
</details>
#### Potential Uses
Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model.
The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including:
> - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)*
> - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.*
> - *Entertainment: Creation of games, chat bots, and amusing generations.*
Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser.
#### Out-of-scope Uses
OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case.
### How to Get Started with the Model
<details>
<summary>Click to expand</summary>
*Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.*
Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='distilgpt2')
>>> set_seed(42)
>>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5)
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"},
{'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."},
{'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"},
{'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"},
{'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
And in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
model = TFGPT2Model.from_pretrained('distilgpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
</details>
## Training Data
DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText.
## Training Procedure
The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108).
## Evaluation Results
The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set).
## Environmental Impact
*Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.*
- **Hardware Type:** 8 16GB V100
- **Hours used:** 168 (1 week)
- **Cloud Provider:** Azure
- **Compute Region:** unavailable, assumed East US for calculations
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2
## Citation
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
## Glossary
- <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531).
<a href="https://huggingface.co/exbert/?model=distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
rlpeter70/xlm-roberta-base-finetuned-panx-de-fr
|
rlpeter70
| 2022-09-05T17:34:16Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T17:01:55Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/suppernickbroth
|
huggingtweets
| 2022-09-05T17:19:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-05T17:11:57Z |
---
language: en
thumbnail: http://www.huggingtweets.com/suppernickbroth/1662398278638/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1564658774962585600/j9_gW3wp_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">nicky turner</div>
<div style="text-align: center; font-size: 14px;">@suppernickbroth</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from nicky turner.
| Data | nicky turner |
| --- | --- |
| Tweets downloaded | 2538 |
| Retweets | 960 |
| Short tweets | 429 |
| Tweets kept | 1149 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fdsli1w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @suppernickbroth's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/n9y3a7fd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/n9y3a7fd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/suppernickbroth')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/mrquinnzard
|
huggingtweets
| 2022-09-05T17:08:52Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-21T00:18:41Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mrquinnzard/1662397648914/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1566448713224142849/MA1qlIiY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MrQuinnzard X</div>
<div style="text-align: center; font-size: 14px;">@mrquinnzard</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MrQuinnzard X.
| Data | MrQuinnzard X |
| --- | --- |
| Tweets downloaded | 953 |
| Retweets | 64 |
| Short tweets | 159 |
| Tweets kept | 730 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xz27mo8e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrquinnzard's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2n2ggtun) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2n2ggtun/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mrquinnzard')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rlpeter70/xlm-roberta-base-finetuned-panx-de
|
rlpeter70
| 2022-09-05T16:51:52Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T16:28:24Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
peter2000/bmz_topics_
|
peter2000
| 2022-09-05T16:39:06Z | 1 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-09-05T16:38:42Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peter2000/bmz_topics_
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peter2000/bmz_topics_')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peter2000/bmz_topics_')
model = AutoModel.from_pretrained('peter2000/bmz_topics_')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peter2000/bmz_topics_)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 36 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 720,
"warmup_steps": 72,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ChayanM/Favourite_Foods
|
ChayanM
| 2022-09-05T16:22:26Z | 287 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-05T16:22:10Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Favourite_Foods
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9553571343421936
---
# Favourite_Foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Chocolate

#### Cookies

#### Egg

#### Ice-cream

#### Vegetable

|
mio/Artoria
|
mio
| 2022-09-05T16:10:54Z | 15 | 14 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"jp",
"dataset:fate",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2022-09-05T16:10:08Z |
---
tags:
- espnet
- audio
- text-to-speech
language: jp
datasets:
- fate
license: cc-by-4.0
---
## ESPnet2 TTS model
### `mio/Artoria`
This model was trained by mio using the fate recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 49d18064f22b7508ff24a7fa70c470a65f08f1be
pip install -e .
cd egs2/fate/tts1
./run.sh --skip_data_prep false --skip_train true --download_model mio/Artoria
```
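For inference outside the recipe, a minimal Python sketch (assuming `espnet2` and `espnet_model_zoo` are installed; the `Text2Speech.from_pretrained` helper is the usual way to pull an ESPnet2 model from the Hub):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Download the VITS checkpoint from the Hub and build the synthesizer.
text2speech = Text2Speech.from_pretrained("mio/Artoria")

# Japanese input text; the model was trained on 22.05 kHz audio.
speech = text2speech("こんにちは、セイバーです。")["wav"]
sf.write("artoria.wav", speech.numpy(), text2speech.fs)
```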
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/finetune_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/22k/tts_fate_saber_vits_finetune_from_jsut
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 46762
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 10
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: false
create_graph_in_tensorboard: false
use_wandb: true
wandb_project: fate
wandb_id: null
wandb_entity: null
wandb_name: vits_train_saber
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- downloads/f3698edf589206588f58f5ec837fa516/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause/train.total_count.ave_10best.pth:tts:tts
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 5000000
valid_batch_bins: null
train_shape_file:
- exp/22k/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/text_shape.phn
- exp/22k/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/speech_shape
valid_shape_file:
- exp/22k/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/text_shape.phn
- exp/22k/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/22k/raw/train/text
- text
- text
- - dump/22k/raw/train/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/22k/raw/dev/text
- text
- text
- - dump/22k/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
token_list:
- <blank>
- <unk>
- '1'
- '2'
- '0'
- '3'
- '4'
- '-1'
- '5'
- a
- o
- '-2'
- i
- '-3'
- u
- e
- k
- n
- t
- '6'
- r
- '-4'
- s
- N
- m
- pau
- '7'
- sh
- d
- g
- w
- '8'
- U
- '-5'
- I
- cl
- h
- y
- b
- '9'
- j
- ts
- ch
- '-6'
- z
- p
- '-7'
- f
- ky
- ry
- '-8'
- gy
- '-9'
- hy
- ny
- '-10'
- by
- my
- '-11'
- '-12'
- '-13'
- py
- '-14'
- '-15'
- v
- '10'
- '-16'
- '-17'
- '11'
- '-21'
- '-20'
- '12'
- '-19'
- '13'
- '-18'
- '14'
- dy
- '15'
- ty
- '-22'
- '16'
- '18'
- '19'
- '17'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: jaconv
g2p: pyopenjtalk_accent_with_pause
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 85
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202207'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
peterdudfield/power_perceiver
|
peterdudfield
| 2022-09-05T15:54:37Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"nowcasting",
"forecasting",
"timeseries",
"remote-sensing",
"gan",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-09-05T15:54:22Z |
---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
- gan
---
# FullModel
## Model description
[More information needed]
## Intended uses & limitations
[More information needed]
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
[More information needed]
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
jenniferjane/ner_trainer
|
jenniferjane
| 2022-09-05T14:53:19Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T13:55:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_trainer
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9231450719822812
- name: Recall
type: recall
value: 0.9325427900212552
- name: F1
type: f1
value: 0.9278201346763871
- name: Accuracy
type: accuracy
value: 0.9830333455128918
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_trainer
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1069
- Precision: 0.9231
- Recall: 0.9325
- F1: 0.9278
- Accuracy: 0.9830
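A minimal inference sketch (not part of the original card), using the standard `transformers` token-classification pipeline and assuming the tokenizer was pushed alongside the model; the example sentence is illustrative:
```python
from transformers import pipeline

# Group sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="jenniferjane/ner_trainer",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```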
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0029 | 1.0 | 1756 | 0.1069 | 0.9231 | 0.9325 | 0.9278 | 0.9830 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BartekK/distilHerBERT-base-cased
|
BartekK
| 2022-09-05T14:03:29Z | 110 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"distilherbert",
"pl",
"arxiv:1910.01108",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-16T16:33:31Z |
---
language: pl
tags:
- distilherbert
---
## distilHerBERT
distilHerBERT-base is a BERT-based language model trained on the Polish subset of the [cc100](https://huggingface.co/datasets/cc100) dataset using Masked Language Modelling (MLM) and the [distillation procedure](https://arxiv.org/abs/1910.01108) from the [HerBERT](https://huggingface.co/allegro/herbert-base-cased) model, with dynamic masking of whole words.
We provide one of the models (S4) described in the report from the final project for the (Deep) Natural Language Processing course, carried out at MIMUW in 2021/2022: [Distillation_of_HerBERT](https://github.com/BartekKrzepkowski/DistilHerBERT-base_vol2/blob/master/report/Final_Report___Distillation_of_HerBERT.pdf).
The model was trained with fp16 and data parallelism (ZeRO Stage 2) using the DeepSpeed optimization library.
Model training and experiments were conducted with transformers version 4.20.1.
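For illustration, a minimal DeepSpeed configuration enabling these two features might look like the sketch below; this is an assumption about the general shape of such a config, not the authors' actual training setup:
```python
# Hypothetical DeepSpeed config sketch (fp16 + ZeRO Stage 2); values are illustrative.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 1,
}
```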
## Tokenizer
The training dataset was tokenized into subwords using a character level byte-pair encoding (``CharBPETokenizer``) with
a vocabulary size of 50k tokens. The tokenizer itself was trained with a [tokenizers](https://github.com/huggingface/tokenizers) library.
We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("BartekK/distilHerBERT-base-cased")
model = AutoModelForMaskedLM.from_pretrained("BartekK/distilHerBERT-base-cased")
output = model(
**tokenizer.batch_encode_plus(
[
(
"A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.",
"A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy."
)
],
padding='longest',
add_special_tokens=True,
return_tensors='pt'
)
)
```
## Acknowledgements
We want to thank <br>
Spyridon Mouselinos - for suggesting literature to help with the project <br>
and <br>
Piotr Rybak - for sharing information on training the HerBERT models
## Authors
The model was trained by:
Bartłomiej Krzepkowski, <br>
Dominika Bankiewicz, <br>
Rafał Michaluk, <br>
Jacek Ciszewski.
If you have questions, please contact me: <a href="mailto:bartekkrzepkowski@gmail.com">bartekkrzepkowski@gmail.com</a>
The code can be found here: [distilHerBERT-base repo](https://github.com/BartekKrzepkowski/DistilHerBERT-base_vol2/tree/master).
|
falkne/storytelling-regulationroom-en
|
falkne
| 2022-09-05T13:29:16Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-04T19:03:43Z |
# Storytelling Classifier (trained on regulationroom) (storytelling-regulationroom-en)
This model fine-tunes the [BERT base model](https://huggingface.co/bert-base-uncased) on the task of predicting whether a comment or argument contains *storytelling* or not (binary classification). The task and the dataset were introduced in [this paper](https://aclanthology.org/2022.acl-long.379/).
The original corpus and its annotation were introduced in [this paper](https://aclanthology.org/L18-1257.pdf).
## Model description
|
falkne/storytelling-europolis-en
|
falkne
| 2022-09-05T13:23:52Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-04T19:07:58Z |
# Storytelling Classifier (trained on europolis) (storytelling-europolis-en)
This model fine-tunes the [BERT base model](https://huggingface.co/bert-base-uncased) on the task of predicting whether a comment or argument contains *storytelling* or not (binary classification). The task and the dataset were introduced in [this paper](https://aclanthology.org/2022.acl-long.379/).
The original corpus, together with its annotation, is the [Europarl corpus](https://aclanthology.org/2005.mtsummit-papers.11.pdf).
## Model description
The language is English (en).
|
GItaf/bert-base-uncased-finetuned-mbti-0905
|
GItaf
| 2022-09-05T13:17:53Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-05T12:35:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-mbti-0905
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mbti-0905
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Osaleh/sagemaker-bert-mini-arabic-ArSAS
|
Osaleh
| 2022-09-05T13:15:39Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-05T13:14:40Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-mini-arabic-ArSAS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-mini-arabic-ArSAS
This model is a fine-tuned version of [asafaya/bert-mini-arabic](https://huggingface.co/asafaya/bert-mini-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3163
- Accuracy: 0.8771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 291 | 0.3761 | 0.8368 |
| 0.4722 | 2.0 | 582 | 0.3163 | 0.8771 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dhansmair/flamingo-tiny
|
dhansmair
| 2022-09-05T12:45:16Z | 57 | 5 |
transformers
|
[
"transformers",
"pytorch",
"image-to-text",
"image-captioning",
"en",
"dataset:conceptual_captions",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-08-23T07:58:45Z |
---
language:
- en
tags:
- image-to-text
- image-captioning
license: apache-2.0
datasets:
- conceptual_captions
---
Flamingo model (tiny version) pretrained for image captioning on the Conceptual Captions (3M) dataset.
Source Code: https://github.com/dhansmair/flamingo-mini
Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap
Flamingo-mini: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap
|
Osaleh/sagemaker-bert-base-arabic-ar-SAS
|
Osaleh
| 2022-09-05T12:44:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-05T12:06:14Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-base-arabic-ar-SAS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-base-arabic-ar-SAS
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2732
- Accuracy: 0.9029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 291 | 0.2924 | 0.8823 |
| 0.3433 | 2.0 | 582 | 0.2732 | 0.9029 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
GItaf/bert-base-uncased-finetuned-mbti-0830
|
GItaf
| 2022-09-05T12:30:05Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T19:12:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-mbti-0830
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mbti-0830
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.4259 | 1.0 | 9720 | 4.3466 |
| 4.2788 | 2.0 | 19440 | 4.2536 |
| 4.1928 | 3.0 | 29160 | 4.2074 |
| 4.1062 | 4.0 | 38880 | 4.1839 |
| 4.0502 | 5.0 | 48600 | 4.1715 |
| 4.0037 | 6.0 | 58320 | 4.1637 |
| 3.9575 | 7.0 | 68040 | 4.1603 |
| 3.9169 | 8.0 | 77760 | 4.1591 |
| 3.8915 | 9.0 | 87480 | 4.1602 |
| 3.8741 | 10.0 | 97200 | 4.1613 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
cemilcelik/q-FrozenLake-v1-4x4-noSlippery
|
cemilcelik
| 2022-09-05T12:04:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-05T12:00:31Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="cemilcelik/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Sania67/Fine_Tunning_on_CV_dataset
|
Sania67
| 2022-09-05T11:47:39Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-05T06:33:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
model-index:
- name: Fine_Tunning_on_CV_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tunning_on_CV_dataset
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7892
- Wer: 0.4734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.8618 | 3.25 | 1000 | 3.0765 | 0.9989 |
| 1.6825 | 6.49 | 2000 | 0.8932 | 0.6292 |
| 0.7813 | 9.74 | 3000 | 0.7750 | 0.5454 |
| 0.5859 | 12.99 | 4000 | 0.7413 | 0.5151 |
| 0.472 | 16.23 | 5000 | 0.7559 | 0.4952 |
| 0.404 | 19.48 | 6000 | 0.7677 | 0.4915 |
| 0.3637 | 22.73 | 7000 | 0.7788 | 0.4863 |
| 0.3238 | 25.97 | 8000 | 0.7920 | 0.4738 |
| 0.3038 | 29.22 | 9000 | 0.7892 | 0.4734 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
takeda777/wav2vec2-large-xls-r-300m-turkish-colab
|
takeda777
| 2022-09-05T08:42:19Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-05T06:51:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8327
- Wer: 0.9968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 23.2608 | 9.76 | 400 | 4.9641 | 1.0 |
| 3.9324 | 19.51 | 800 | 3.4000 | 1.0 |
| 1.4163 | 29.27 | 1200 | 2.8327 | 0.9968 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Anurag0961/cards-demo-model1
|
Anurag0961
| 2022-09-05T08:32:48Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-05T07:53:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: cards-demo-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards-demo-model1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
Osaleh/sagemaker-bert-mini-arabic
|
Osaleh
| 2022-09-05T07:24:27Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-05T07:21:39Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-mini-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-mini-arabic
This model is a fine-tuned version of [asafaya/bert-mini-arabic](https://huggingface.co/asafaya/bert-mini-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2531
- Accuracy: 0.8974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3385 | 1.0 | 1469 | 0.2707 | 0.8840 |
| 0.2416 | 2.0 | 2938 | 0.2531 | 0.8974 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
piasinga/nude
|
piasinga
| 2022-09-05T07:01:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-10T05:33:51Z |
rebecca ferguson beautiful girl lord of the ring armor sword fight in valley 3d 4k Alex Lazar artstation
|
andreasostling/distilbert-base-uncased-finetuned-ner
|
andreasostling
| 2022-09-05T06:39:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-05T06:30:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9267050487156776
- name: Recall
type: recall
value: 0.9363463474661595
- name: F1
type: f1
value: 0.9315007512102833
- name: Accuracy
type: accuracy
value: 0.9839706419686403
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0609
- Precision: 0.9267
- Recall: 0.9363
- F1: 0.9315
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2408 | 1.0 | 878 | 0.0682 | 0.9136 | 0.9219 | 0.9178 | 0.9813 |
| 0.0538 | 2.0 | 1756 | 0.0605 | 0.9228 | 0.9346 | 0.9286 | 0.9833 |
| 0.0301 | 3.0 | 2634 | 0.0609 | 0.9267 | 0.9363 | 0.9315 | 0.9840 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ridiculoys/PPO-LunarLander-v2
|
ridiculoys
| 2022-09-05T06:18:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-05T06:17:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 264.18 +/- 18.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
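A minimal loading and evaluation sketch (filling in the TODO above); the checkpoint filename is an assumption — check the repository's file list:
```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is an assumption about how the checkpoint is named in the repo.
checkpoint = load_from_hub(
    repo_id="ridiculoys/PPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```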
|
exploiter345/dqn-SpaceInvadersNoFrameskip-v0
|
exploiter345
| 2022-09-05T01:04:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-05T01:04:29Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 6.50 +/- 16.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga exploiter345 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga exploiter345
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
liux0229/distilbert-base-uncased-finetuned-emotion
|
liux0229
| 2022-09-05T00:11:04Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-04T23:42:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9248309431740382
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2176
- Accuracy: 0.925
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8255 | 1.0 | 250 | 0.3198 | 0.902 | 0.8999 |
| 0.2469 | 2.0 | 500 | 0.2176 | 0.925 | 0.9248 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
urkepg/ppo-LunarLander-v2
|
urkepg
| 2022-09-05T00:10:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-04T23:37:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.83 +/- 26.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
farleyknight/mnist-digit-classification-2022-09-04
|
farleyknight
| 2022-09-04T22:10:43Z | 322 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:mnist",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-04T19:47:37Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: mnist-digit-classification-2022-09-04
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
config: mnist
split: train
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9923333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnist-digit-classification-2022-09-04
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0319
- Accuracy: 0.9923
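A minimal inference sketch (not part of the original card), using the standard `transformers` image-classification pipeline on one MNIST test digit:
```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="farleyknight/mnist-digit-classification-2022-09-04",
)

# MNIST images are grayscale; convert to RGB for the ViT feature extractor.
image = load_dataset("mnist", split="test")[0]["image"].convert("RGB")
print(classifier(image))
```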
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
hieule/codeparrot-ds
|
hieule
| 2022-09-04T20:23:49Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-04T13:35:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5009 | 0.93 | 5000 | 1.5958 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
c7756748/ppo-LunarLander-v2
|
c7756748
| 2022-09-04T16:29:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-04T16:29:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 241.88 +/- 16.21
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
gngpostalsrvc/w2v2-ami
|
gngpostalsrvc
| 2022-09-04T16:18:10Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-04T13:13:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2-ami
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-ami
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8686
- Wer: 0.2861
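A minimal inference sketch (not part of the original card); the audio path is a placeholder, and the pipeline relies on ffmpeg for decoding:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="gngpostalsrvc/w2v2-ami")

# Transcribe a local 16 kHz mono audio file (path is a placeholder).
print(asr("meeting_clip.wav")["text"])
```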
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6021 | 3.07 | 500 | 2.9176 | 0.9997 |
| 2.5006 | 6.13 | 1000 | 1.0535 | 0.3617 |
| 0.9926 | 9.2 | 1500 | 0.8614 | 0.3036 |
| 0.809 | 12.27 | 2000 | 0.8676 | 0.2921 |
| 0.73 | 15.34 | 2500 | 0.8190 | 0.2966 |
| 0.6658 | 18.4 | 3000 | 0.8707 | 0.2900 |
| 0.6295 | 21.47 | 3500 | 0.8660 | 0.2821 |
| 0.6089 | 24.54 | 4000 | 0.8767 | 0.2829 |
| 0.5974 | 27.61 | 4500 | 0.8686 | 0.2861 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
OD/q-FrozenLake-v1-4x4-noSlippery
|
OD
| 2022-09-04T15:14:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-04T15:14:33Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="OD/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Izzet/qa_tquad_electra-base-turkish
|
Izzet
| 2022-09-04T13:52:04Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-04T13:31:21Z |
---
license: mit
widget:
- text: "Ankara'da korumaya alınmış alanlar var mıdır?"
context: "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük kısmı tahıl tarlalarıyla kaplı platolardan oluşur. İlin çeşitli yerlerindeki doğal güzellikler korumaya alınmış, dinlenme ve eğlence amaçlı kullanıma sunulmuştur. İlin adını taşıyan tavşanı, keçisi, atı ve kedisi dünya çapında bilinir, armudu, çiğdemi, yerel yemeklerden Ankara tavası ve Kızılcahamam ve Beypazarı'nın maden suyu ise ülke çapında tanınır."
example_title: "Ankara 1"
- text: "Ankara toprakları nelerden oluşur?"
context: "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük kısmı tahıl tarlalarıyla kaplı platolardan oluşur. İlin çeşitli yerlerindeki doğal güzellikler korumaya alınmış, dinlenme ve eğlence amaçlı kullanıma sunulmuştur. İlin adını taşıyan tavşanı, keçisi, atı ve kedisi dünya çapında bilinir, armudu, çiğdemi, yerel yemeklerden Ankara tavası ve Kızılcahamam ve Beypazarı'nın maden suyu ise ülke çapında tanınır."
example_title: "Ankara 2"
---
# Question Answering Model Fine-Tuned with TQuad Dataset
You can find a detailed explanation of the dataset [here](https://github.com/izzetkalic/botcuk-dataset-analyze/tree/main/datasets/qa-tquad).
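A minimal inference sketch (not part of the original card), using the standard `transformers` question-answering pipeline with the widget example above:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Izzet/qa_tquad_electra-base-turkish")

result = qa(
    question="Ankara toprakları nelerden oluşur?",
    context=(
        "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük "
        "kısmı tahıl tarlalarıyla kaplı platolardan oluşur."
    ),
)
print(result["answer"], result["score"])
```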
|
Izzet/qa_tquad_bert-base-turkish
|
Izzet
| 2022-09-04T13:51:47Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-04T12:45:21Z |
---
license: mit
widget:
- text: "Ankara'da korumaya alınmış alanlar var mıdır?"
context: "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük kısmı tahıl tarlalarıyla kaplı platolardan oluşur. İlin çeşitli yerlerindeki doğal güzellikler korumaya alınmış, dinlenme ve eğlence amaçlı kullanıma sunulmuştur. İlin adını taşıyan tavşanı, keçisi, atı ve kedisi dünya çapında bilinir, armudu, çiğdemi, yerel yemeklerden Ankara tavası ve Kızılcahamam ve Beypazarı'nın maden suyu ise ülke çapında tanınır."
example_title: "Ankara 1"
- text: "Ankara toprakları nelerden oluşur?"
context: "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük kısmı tahıl tarlalarıyla kaplı platolardan oluşur. İlin çeşitli yerlerindeki doğal güzellikler korumaya alınmış, dinlenme ve eğlence amaçlı kullanıma sunulmuştur. İlin adını taşıyan tavşanı, keçisi, atı ve kedisi dünya çapında bilinir, armudu, çiğdemi, yerel yemeklerden Ankara tavası ve Kızılcahamam ve Beypazarı'nın maden suyu ise ülke çapında tanınır."
example_title: "Ankara 2"
---
# Question Answering Model Fine-Tuned with TQuad Dataset
You can find a detailed explanation of the dataset [here](https://github.com/izzetkalic/botcuk-dataset-analyze/tree/main/datasets/qa-tquad).
|
Izzet/qa_ytu_distilbert-base-turkish
|
Izzet
| 2022-09-04T12:36:22Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-04T12:15:18Z |
---
license: mit
widget:
- text: "Ankara'da korumaya alınmış alanlar var mıdır?"
context: "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük kısmı tahıl tarlalarıyla kaplı platolardan oluşur. İlin çeşitli yerlerindeki doğal güzellikler korumaya alınmış, dinlenme ve eğlence amaçlı kullanıma sunulmuştur. İlin adını taşıyan tavşanı, keçisi, atı ve kedisi dünya çapında bilinir, armudu, çiğdemi, yerel yemeklerden Ankara tavası ve Kızılcahamam ve Beypazarı'nın maden suyu ise ülke çapında tanınır."
example_title: "Ankara 1"
- text: "Ankara toprakları nelerden oluşur?"
context: "Ankara kara iklimine sahiptir. Şehir dışındaki il topraklarının büyük kısmı tahıl tarlalarıyla kaplı platolardan oluşur. İlin çeşitli yerlerindeki doğal güzellikler korumaya alınmış, dinlenme ve eğlence amaçlı kullanıma sunulmuştur. İlin adını taşıyan tavşanı, keçisi, atı ve kedisi dünya çapında bilinir, armudu, çiğdemi, yerel yemeklerden Ankara tavası ve Kızılcahamam ve Beypazarı'nın maden suyu ise ülke çapında tanınır."
example_title: "Ankara 2"
---
# Question Answering Model Fine-Tuned with YTU Dataset
You can find a detailed explanation of the dataset [here](https://github.com/izzetkalic/botcuk-dataset-analyze/tree/main/datasets/qa-ytu).
|
gngpostalsrvc/w2v2
|
gngpostalsrvc
| 2022-09-04T12:35:24Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-31T19:29:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8860
- Wer: 0.2817
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.5664 | 3.07 | 500 | 3.0411 | 0.9997 |
| 2.5607 | 6.13 | 1000 | 1.0770 | 0.3660 |
| 0.9959 | 9.2 | 1500 | 0.8815 | 0.3017 |
| 0.8129 | 12.27 | 2000 | 0.8676 | 0.2915 |
| 0.7334 | 15.34 | 2500 | 0.8381 | 0.2931 |
| 0.669 | 18.4 | 3000 | 0.8802 | 0.2864 |
| 0.6312 | 21.47 | 3500 | 0.8679 | 0.2864 |
| 0.6094 | 24.54 | 4000 | 0.8811 | 0.2802 |
| 0.5987 | 27.61 | 4500 | 0.8860 | 0.2817 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
rebolforces/Reinforces-Pixelcopter-PLE-v0
|
rebolforces
| 2022-09-04T11:31:34Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-04T11:22:20Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforces-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.30 +/- 22.86
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
hike/MyModel
|
hike
| 2022-09-04T10:51:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-30T11:42:04Z |
# Transformer models I use frequently
## Text translation
* [zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en)
## Text generation
* [gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall?text=这是很久之前的事情了)
## Chinese BERT
* [bert-base-chinese](https://huggingface.co/bert-base-chinese)
* [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large?text=生活的真谛是%5BMASK%5D%E3%80%82)
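A minimal usage sketch for the first model in the list (Chinese-to-English translation), using the standard `transformers` pipeline:
```python
from transformers import pipeline

# Chinese -> English translation with the Helsinki-NLP OPUS-MT model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
print(translator("这是很久之前的事情了。")[0]["translation_text"])
```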
|
marko97/opus-mt-en-ro-finetuned-en-to-ro
|
marko97
| 2022-09-04T10:35:16Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-03T09:15:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: ro-en
split: train
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 28.1505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1505
- Gen Len: 34.1036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1505 | 34.1036 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
k3lana/xlm-roberta-base-finetuned-panx-all
|
k3lana
| 2022-09-04T10:24:12Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-04T09:54:45Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Dinithi/AgitationNotesClassification
|
Dinithi
| 2022-09-04T10:17:01Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-04T09:59:08Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: AgitationNotesClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AgitationNotesClassification
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4128
- F1: 0.8690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6567 | 1.0 | 50 | 0.6130 | 0.6500 |
| 0.6056 | 2.0 | 100 | 0.5807 | 0.6763 |
| 0.5172 | 3.0 | 150 | 0.5398 | 0.6675 |
| 0.4206 | 4.0 | 200 | 0.4111 | 0.8355 |
| 0.3361 | 5.0 | 250 | 0.3977 | 0.8667 |
| 0.2919 | 6.0 | 300 | 0.3874 | 0.8780 |
| 0.2233 | 7.0 | 350 | 0.3928 | 0.8690 |
| 0.1953 | 8.0 | 400 | 0.3908 | 0.8690 |
| 0.1633 | 9.0 | 450 | 0.4076 | 0.86 |
| 0.1494 | 10.0 | 500 | 0.4128 | 0.8690 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
k3lana/xlm-roberta-base-finetuned-panx-en
|
k3lana
| 2022-09-04T09:54:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-04T09:38:19Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6886160714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/joebiden
|
huggingtweets
| 2022-09-04T09:32:10Z | 17 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/joebiden/1662283925554/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308769664240160770/AfgzWVE7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joe Biden</div>
<div style="text-align: center; font-size: 14px;">@joebiden</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joe Biden.
| Data | Joe Biden |
| --- | --- |
| Tweets downloaded | 3215 |
| Retweets | 629 |
| Short tweets | 31 |
| Tweets kept | 2555 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tbtim2bm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joebiden's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1w4wo0t6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1w4wo0t6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joebiden')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
k3lana/xlm-roberta-base-finetuned-panx-de
|
k3lana
| 2022-09-04T08:24:50Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-04T08:00:02Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
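A usage example is not included yet; as a minimal sketch (assuming the checkpoint is public, the standard `transformers` token-classification pipeline, and an illustrative German sentence), the model can be queried like this:
```python
from transformers import pipeline
# The checkpoint was fine-tuned on German PAN-X (NER) data; the sentence below is only an illustration
ner = pipeline(
    "token-classification",
    model="k3lana/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple", # merge word pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```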
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Sania67/Fine_Tunning_on_CV_Urdu_dataset
|
Sania67
| 2022-09-04T07:44:39Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-04T06:28:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
model-index:
- name: Fine_Tunning_on_CV_Urdu_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tunning_on_CV_Urdu_dataset
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2389
- Wer: 0.7380
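No inference example is provided yet; a minimal sketch using the standard `transformers` ASR pipeline (`urdu_sample.wav` is a placeholder path to a 16 kHz mono recording):
```python
from transformers import pipeline
# Transcribe an Urdu audio file with the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="Sania67/Fine_Tunning_on_CV_Urdu_dataset",
)
print(asr("urdu_sample.wav")["text"])  # "urdu_sample.wav" is a placeholder file name
```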
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 15.2352 | 1.69 | 100 | 4.0555 | 1.0 |
| 3.3873 | 3.39 | 200 | 3.2521 | 1.0 |
| 3.2387 | 5.08 | 300 | 3.2304 | 1.0 |
| 3.1983 | 6.78 | 400 | 3.1712 | 1.0 |
| 3.1224 | 8.47 | 500 | 3.0883 | 1.0 |
| 3.0782 | 10.17 | 600 | 3.0767 | 0.9996 |
| 3.0618 | 11.86 | 700 | 3.0280 | 1.0 |
| 2.9929 | 13.56 | 800 | 2.8994 | 1.0 |
| 2.785 | 15.25 | 900 | 2.4330 | 1.0 |
| 2.1276 | 16.95 | 1000 | 1.7795 | 0.9517 |
| 1.5544 | 18.64 | 1100 | 1.5101 | 0.8266 |
| 1.2651 | 20.34 | 1200 | 1.4037 | 0.7993 |
| 1.0816 | 22.03 | 1300 | 1.3101 | 0.7638 |
| 0.9817 | 23.73 | 1400 | 1.2855 | 0.7542 |
| 0.9019 | 25.42 | 1500 | 1.2737 | 0.7421 |
| 0.8688 | 27.12 | 1600 | 1.2457 | 0.7435 |
| 0.8293 | 28.81 | 1700 | 1.2389 | 0.7380 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
uripper/ChatbotTrainingBot
|
uripper
| 2022-09-04T07:43:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-02T23:26:37Z |
---
license: cc
widget:
- text: "User: Hey, how are you?"
example_title: "How are you?"
- text: "User: What did you do today?"
example_title: "What did you do today?"
- text: "User: What's your favorite movie?"
example_title: "What's your favorite movie?"
---
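The card currently ships only the widget prompts above; a minimal text-generation sketch that follows the same `User:` prompt format (assuming the standard `transformers` pipeline API):
```python
from transformers import pipeline
# Prompts follow the "User: ..." format shown in the widget examples above
chatbot = pipeline("text-generation", model="uripper/ChatbotTrainingBot")
print(chatbot("User: Hey, how are you?", max_new_tokens=40))
```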
|
AmolSatsangi/t5-small-finetuned-xsum
|
AmolSatsangi
| 2022-09-04T07:06:11Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-03T17:07:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
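No usage example is included; a minimal summarization sketch (assuming the standard `transformers` pipeline API; the article text is a made-up illustration):
```python
from transformers import pipeline
# Summarize a short article with the fine-tuned T5 checkpoint
summarizer = pipeline("summarization", model="AmolSatsangi/t5-small-finetuned-xsum")
article = (
    "The local council approved a new cycling path along the river on Tuesday, "
    "with construction expected to start next spring and finish within a year."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```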
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 125 | 2.8679 | 23.1742 | 9.8716 | 18.5896 | 20.7943 | 19.0 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Tsubame/ddpm-butterflies-64
|
Tsubame
| 2022-09-04T06:31:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-04T04:59:29Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
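Until the snippet above is filled in, here is a hedged sketch of how a DDPM pipeline trained this way is typically loaded with 🤗 Diffusers (assuming this repository follows the standard `DDPMPipeline` layout):
```python
from diffusers import DDPMPipeline
# Load the pipeline from this repository and sample one 64x64 butterfly image
pipeline = DDPMPipeline.from_pretrained("Tsubame/ddpm-butterflies-64")
image = pipeline().images[0]  # runs the full DDPM denoising loop
image.save("ddpm_butterfly.png")
```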
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tsubame/ddpm-butterflies-64/tensorboard?#scalars)
|
hieule/mt5-small-finetuned-amazon-en-es
|
hieule
| 2022-09-04T05:55:35Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-09-04T04:26:50Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0171
- Rouge1: 16.778
- Rouge2: 8.0849
- Rougel: 16.5329
- Rougelsum: 16.4302
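No usage example is included; a minimal sketch (assuming the standard `transformers` summarization pipeline; the review text is a made-up illustration):
```python
from transformers import pipeline
# The checkpoint was tuned on English/Spanish Amazon reviews to produce short review titles/summaries
summarizer = pipeline("summarization", model="hieule/mt5-small-finetuned-amazon-en-es")
review = (
    "I bought this backpack for my commute and it has held up really well. "
    "The zippers feel sturdy and the laptop sleeve fits a 15-inch machine."
)
print(summarizer(review))
```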
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4297 | 1.0 | 1209 | 3.1211 | 17.6479 | 8.1669 | 17.1554 | 17.0276 |
| 3.4217 | 2.0 | 2418 | 3.0394 | 16.4501 | 8.3991 | 16.2225 | 16.2214 |
| 3.2701 | 3.0 | 3627 | 3.0427 | 16.3473 | 7.5173 | 16.1924 | 16.098 |
| 3.1888 | 4.0 | 4836 | 3.0283 | 15.3718 | 6.8591 | 15.0889 | 14.9769 |
| 3.1204 | 5.0 | 6045 | 3.0256 | 17.5963 | 8.331 | 17.1812 | 17.0733 |
| 3.072 | 6.0 | 7254 | 3.0189 | 16.5811 | 8.1764 | 16.28 | 16.207 |
| 3.0386 | 7.0 | 8463 | 3.0171 | 17.1018 | 8.4785 | 16.8196 | 16.7681 |
| 3.0193 | 8.0 | 9672 | 3.0171 | 16.778 | 8.0849 | 16.5329 | 16.4302 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GAIR/rst-information-extraction-11b
|
GAIR
| 2022-09-04T01:43:15Z | 75 | 8 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T16:01:47Z |
---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signals from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in a structured way, pre-training them with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.),
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average student and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, fact checking |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| **rst-information-extraction-11b** | **Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity** | **Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains**|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization |
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
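The `Use in DataLab` column above shows the per-signal loading call; a minimal end-to-end sketch (assuming the `datalabs` package is installed via `pip install datalabs` and that the signal exposes a `train` split):
```python
from datalabs import load_dataset
# Load one reStructured signal listed in the table above (each signal is capped at 50,000 samples here)
dataset = load_dataset("rst", "wikidata_relation")
print(dataset)              # available splits and sizes
print(dataset["train"][0])  # one (subject, object, relation, text) record, assuming a "train" split
```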
## BibTeX for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
GAIR/rst-fact-retrieval-11b
|
GAIR
| 2022-09-04T01:42:36Z | 42 | 6 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T03:57:11Z |
---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signals from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in a structured way, pre-training them with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.),
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average student and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| **rst-fact-retrieval-11b** | **Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing** | **Knowledge-intensive tasks, information extraction tasks, fact checking** |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization |
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
## BibTeX for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
GAIR/rst-gaokao-writing-11b
|
GAIR
| 2022-09-04T01:42:02Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T20:33:19Z |
---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signals from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in a structured way, pre-training them with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.),
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average student and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, fact checking |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| **rst-gaokao-writing-11b** | **Trained with example essays from past Gaokao-English exams and grammar error correction signals** | **Essay writing, story generation, grammar error correction and other text generation tasks** |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization |
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
## BibTeX for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
GAIR/rst-gaokao-cloze-11b
|
GAIR
| 2022-09-04T01:41:45Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T16:54:26Z |
---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signals from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in a structured way, pre-training them with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.),
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average student and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, fact checking |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| **rst-gaokao-cloze-11b** | **Trained with manually crafted cloze datasets** | **General cloze filling**|
| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization |
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
## BibTeX for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
GAIR/rst-natural-language-inference-11b
|
GAIR
| 2022-09-04T01:41:26Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T17:17:06Z |
---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, Daily Mail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in a structured way, pre-training a single monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets covering a variety of NLP tasks (classification, IE, retrieval, generation, etc.),
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average student and **15** points higher than GPT-3 while using **1/16** of the parameters. In particular, Qin achieves a high score of **138.5** (the full mark is 150) on the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data is re-emphasized, and model pre-training and fine-tuning on downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, fact checking |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| **rst-natural-language-inference-11b** | **Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information** | **Natural language inference, multiple-choice question answering, reasoning** |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
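The snippet above loads the general-purpose `rst-all-11b` checkpoint. Below is a minimal sketch of loading this card's specialized checkpoint instead; the premise/hypothesis prompt wording is an illustrative assumption rather than the documented pre-training template, and the 11B model requires substantial memory.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load this card's specialized checkpoint (11B parameters, so expect a large
# download and substantial memory use) instead of the general-purpose rst-all-11b.
tokenizer = AutoTokenizer.from_pretrained("GAIR/rst-natural-language-inference-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("GAIR/rst-natural-language-inference-11b")

# Illustrative NLI-style prompt; the exact wording is an assumption, not the
# documented pre-training format.
prompt = (
    "PREMISE: The cafe closes at nine and it is already ten. "
    "HYPOTHESIS: The cafe is closed. "
    'QUERY: Is the hypothesis "entailed", "contradicted" or "neutral"?'
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```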
## Data for reStructured Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
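For completeness, a minimal sketch of pulling one of the signals above through DataLab is shown below. The `datalabs` import path and the split name are assumptions taken from the DataLab repository conventions; the `("rst", <signal>)` names are exactly those listed in the table.
```python
# Sketch only: assumes DataLab is installed (`pip install datalabs`) and exposes
# `load_dataset` as in its repository.
from datalabs import load_dataset

control = load_dataset("rst", "qa_control")   # (premise, hypothesis, label) signal
print(control)                                # available splits and sizes (<= 50,000 samples per signal)
print(control["train"][0])                    # the split name "train" is an assumption
```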
## BibTeX for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
GAIR/rst-word-sense-disambiguation-11b
|
GAIR
| 2022-09-04T01:38:56Z | 10 | 5 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T17:21:24Z |
---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, Daily Mail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in a structured way, pre-training a single monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets covering a variety of NLP tasks (classification, IE, retrieval, generation, etc.),
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average student and **15** points higher than GPT-3 while using **1/16** of the parameters. In particular, Qin achieves a high score of **138.5** (the full mark is 150) on the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data is re-emphasized, and model pre-training and fine-tuning on downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, fact checking |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| **rst-word-sense-disambiguation-11b** | **Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym** | **Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning** |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
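As above, the snippet loads `rst-all-11b`; a hedged sketch of querying this card's word-sense-disambiguation checkpoint might look as follows, with the prompt template being an assumption for illustration only.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# This card's specialized 11B checkpoint; loading it requires substantial memory.
tokenizer = AutoTokenizer.from_pretrained("GAIR/rst-word-sense-disambiguation-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("GAIR/rst-word-sense-disambiguation-11b")

# Illustrative word-sense query; the wording is an assumption, not the documented
# pre-training format.
prompt = (
    "TEXT: She sat on the bank of the river and watched the boats. "
    'QUERY: What is the meaning of the word "bank" in this sentence?'
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```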
## Data for reStructured Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
## BibTeX for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
MarkTaylor/distilbert-base-uncased-finetuned-cola
|
MarkTaylor
| 2022-09-04T01:04:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-03T15:04:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5429064789214383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7813
- Matthews Correlation: 0.5429
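CoLA is a binary linguistic-acceptability task, so the checkpoint can be queried as an ordinary text classifier. The sketch below is illustrative only: the label names it prints depend on the exported config (often just `LABEL_0`/`LABEL_1` unless `id2label` was customized).
```python
from transformers import pipeline

# Minimal inference sketch for this checkpoint as a sentence-acceptability classifier.
classifier = pipeline(
    "text-classification",
    model="MarkTaylor/distilbert-base-uncased-finetuned-cola",
)
# The second sentence contains a grammatical error, so it should score as unacceptable.
print(classifier("The book was read by the whole class."))
print(classifier("The book was readed by the whole class."))
```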
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5255 | 1.0 | 535 | 0.5432 | 0.4004 |
| 0.3527 | 2.0 | 1070 | 0.5086 | 0.4889 |
| 0.2446 | 3.0 | 1605 | 0.5455 | 0.5365 |
| 0.1815 | 4.0 | 2140 | 0.7184 | 0.5328 |
| 0.1308 | 5.0 | 2675 | 0.7813 | 0.5429 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nleroy917/all-MiniLM-L6-V2-DENTAL
|
nleroy917
| 2022-09-04T00:53:04Z | 6 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-09-03T20:46:08Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# nleroy917/all-MiniLM-L6-V2-DENTAL
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nleroy917/all-MiniLM-L6-V2-DENTAL')
embeddings = model.encode(sentences)
print(embeddings)
```
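Since the card mentions clustering and semantic search, a small follow-up shows how to score sentence pairs with sentence-transformers' built-in cosine-similarity helper. The sentences are illustrative assumptions; in-domain (dental) inputs should give the most meaningful scores.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('nleroy917/all-MiniLM-L6-V2-DENTAL')

sentences = [
    "The patient reported sensitivity after the filling.",
    "Tooth sensitivity is common following a restoration.",
    "The invoice was mailed on Tuesday.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; the model ends with a Normalize() module (see the
# architecture below), so cosine similarity equals a dot product here.
print(util.cos_sim(embeddings, embeddings))
```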
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nleroy917/all-MiniLM-L6-V2-DENTAL)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 349 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
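Put together, the DataLoader, loss, and fit() parameters above correspond roughly to the training sketch below. The base checkpoint and the training pairs are assumptions: the card's name suggests `sentence-transformers/all-MiniLM-L6-v2` as the starting point, and the dental training corpus is not described here.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder question/passage pairs; the actual dental corpus is not released with this card.
train_examples = [
    InputExample(texts=["What causes gum recession?", "Gingival recession can be caused by aggressive brushing."]),
    InputExample(texts=["How long do dental crowns last?", "A crown typically lasts between five and fifteen years."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed base model
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)    # cos_sim is the default similarity

# Mirrors the fit() parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```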
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Dinithi/ClinicalTextV4
|
Dinithi
| 2022-09-04T00:39:53Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-04T00:13:24Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: ClinicalTextV4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClinicalTextV4
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5609
- Accuracy: 0.8658
- Precision: 0.8371
- Recall: 0.8939
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
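For reference, the hyperparameters listed above map onto Hugging Face `TrainingArguments` roughly as in the sketch below. The number of labels and the output directory are assumptions, and the training data itself is not described on this card.
```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    TrainingArguments,
)

model_name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 is an assumption based on the binary precision/recall/F1 metrics reported below.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

training_args = TrainingArguments(
    output_dir="ClinicalTextV4",        # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,                            # Adam betas/epsilon are the library defaults listed above
    fp16=torch.cuda.is_available(),     # "Native AMP" mixed precision when a GPU is present
)
# A Trainer would then be constructed with these arguments plus the (undisclosed)
# tokenized train/eval datasets and a metrics function.
```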
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4824 | 1.0 | 600 | 0.3630 | 0.8458 | 0.825 | 0.8609 | 0.8426 |
| 0.3314 | 2.0 | 1200 | 0.3583 | 0.8558 | 0.8252 | 0.8870 | 0.8550 |
| 0.2673 | 3.0 | 1800 | 0.3437 | 0.8583 | 0.8189 | 0.9043 | 0.8595 |
| 0.2255 | 4.0 | 2400 | 0.3678 | 0.8675 | 0.8302 | 0.9096 | 0.8680 |
| 0.1883 | 5.0 | 3000 | 0.4002 | 0.8642 | 0.8259 | 0.9078 | 0.8650 |
| 0.1562 | 6.0 | 3600 | 0.4695 | 0.8633 | 0.8352 | 0.8904 | 0.8620 |
| 0.1372 | 7.0 | 4200 | 0.4940 | 0.8658 | 0.8371 | 0.8939 | 0.8646 |
| 0.1269 | 8.0 | 4800 | 0.5376 | 0.865 | 0.8402 | 0.8870 | 0.8629 |
| 0.1097 | 9.0 | 5400 | 0.5539 | 0.8633 | 0.8397 | 0.8835 | 0.8610 |
| 0.0997 | 10.0 | 6000 | 0.5609 | 0.8658 | 0.8371 | 0.8939 | 0.8646 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Usamaejaz0/monday-model-updated-v2
|
Usamaejaz0
| 2022-09-04T00:17:37Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-02T23:29:47Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: monday-model-updated-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# monday-model-updated-v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Dinithi/AgitationTextV3
|
Dinithi
| 2022-09-04T00:09:16Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-03T23:48:26Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: AgitationTextV3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AgitationTextV3
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5199
- Accuracy: 0.8
- Precision: 0.9636
- Recall: 0.7465
- F1: 0.8413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6797 | 1.0 | 50 | 0.6650 | 0.57 | 0.85 | 0.4789 | 0.6126 |
| 0.599 | 2.0 | 100 | 0.6180 | 0.65 | 0.86 | 0.6056 | 0.7107 |
| 0.5121 | 3.0 | 150 | 0.5714 | 0.75 | 0.8833 | 0.7465 | 0.8092 |
| 0.4049 | 4.0 | 200 | 0.5187 | 0.81 | 0.9194 | 0.8028 | 0.8571 |
| 0.3091 | 5.0 | 250 | 0.5034 | 0.77 | 0.9444 | 0.7183 | 0.816 |
| 0.2303 | 6.0 | 300 | 0.4673 | 0.78 | 0.9298 | 0.7465 | 0.8281 |
| 0.1773 | 7.0 | 350 | 0.4802 | 0.8 | 0.9322 | 0.7746 | 0.8462 |
| 0.1396 | 8.0 | 400 | 0.5260 | 0.8 | 0.9636 | 0.7465 | 0.8413 |
| 0.1204 | 9.0 | 450 | 0.5317 | 0.8 | 0.9636 | 0.7465 | 0.8413 |
| 0.0982 | 10.0 | 500 | 0.5199 | 0.8 | 0.9636 | 0.7465 | 0.8413 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Dinithi/CancerTextV2
|
Dinithi
| 2022-09-04T00:00:20Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-03T23:37:59Z |
---
license: cc0-1.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: CancerTextV2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CancerTextV2
This model is a fine-tuned version of [bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5913
- Accuracy: 0.8692
- Precision: 0.8666
- Recall: 0.8738
- F1: 0.8701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4717 | 1.0 | 600 | 0.3318 | 0.8617 | 0.8562 | 0.8704 | 0.8633 |
| 0.3248 | 2.0 | 1200 | 0.3144 | 0.8658 | 0.8821 | 0.8455 | 0.8634 |
| 0.2653 | 3.0 | 1800 | 0.3519 | 0.8625 | 0.8507 | 0.8804 | 0.8653 |
| 0.2164 | 4.0 | 2400 | 0.4090 | 0.8658 | 0.9002 | 0.8239 | 0.8604 |
| 0.1884 | 5.0 | 3000 | 0.4413 | 0.8667 | 0.8850 | 0.8439 | 0.8639 |
| 0.1582 | 6.0 | 3600 | 0.4415 | 0.865 | 0.8971 | 0.8256 | 0.8599 |
| 0.1377 | 7.0 | 4200 | 0.5165 | 0.8708 | 0.8694 | 0.8738 | 0.8716 |
| 0.1192 | 8.0 | 4800 | 0.5699 | 0.8675 | 0.8826 | 0.8488 | 0.8654 |
| 0.1081 | 9.0 | 5400 | 0.5837 | 0.8692 | 0.8666 | 0.8738 | 0.8701 |
| 0.1018 | 10.0 | 6000 | 0.5913 | 0.8692 | 0.8666 | 0.8738 | 0.8701 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Dinithi/AgitationTextV2
|
Dinithi
| 2022-09-03T23:49:54Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-03T23:33:26Z |
---
license: cc0-1.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: AgitationTextV2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AgitationTextV2
This model is a fine-tuned version of [bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6268
- Accuracy: 0.73
- Precision: 0.8036
- Recall: 0.7377
- F1: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6535 | 1.0 | 50 | 0.6682 | 0.66 | 0.7547 | 0.6557 | 0.7018 |
| 0.5874 | 2.0 | 100 | 0.6695 | 0.66 | 0.7455 | 0.6721 | 0.7069 |
| 0.5373 | 3.0 | 150 | 0.6141 | 0.6 | 0.7838 | 0.4754 | 0.5918 |
| 0.4671 | 4.0 | 200 | 0.6017 | 0.71 | 0.7667 | 0.7541 | 0.7603 |
| 0.4111 | 5.0 | 250 | 0.5507 | 0.75 | 0.8333 | 0.7377 | 0.7826 |
| 0.3828 | 6.0 | 300 | 0.6090 | 0.7 | 0.7541 | 0.7541 | 0.7541 |
| 0.3034 | 7.0 | 350 | 0.6073 | 0.71 | 0.8333 | 0.6557 | 0.7339 |
| 0.2702 | 8.0 | 400 | 0.5790 | 0.71 | 0.8077 | 0.6885 | 0.7434 |
| 0.246 | 9.0 | 450 | 0.7061 | 0.71 | 0.7424 | 0.8033 | 0.7717 |
| 0.2229 | 10.0 | 500 | 0.6268 | 0.73 | 0.8036 | 0.7377 | 0.7692 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|