modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-01 18:27:28
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 532
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-01 18:27:19
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
huggingtweets/planetmoney
|
huggingtweets
| 2022-03-19T20:19:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/planetmoney/1647721191942/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/473888336449269761/vIurMh9f_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NPR's Planet Money</div>
<div style="text-align: center; font-size: 14px;">@planetmoney</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NPR's Planet Money.
| Data | NPR's Planet Money |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 601 |
| Short tweets | 37 |
| Tweets kept | 2608 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7jiqlr8t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @planetmoney's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1t6h63jy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1t6h63jy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/planetmoney')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ronykroy/distilbert-base-uncased-finetuned-emotion
|
ronykroy
| 2022-03-19T17:55:13Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-19T17:30:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9222310284051585
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8454 | 1.0 | 250 | 0.3308 | 0.8975 | 0.8937 |
| 0.2561 | 2.0 | 500 | 0.2334 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sanchit-gandhi/wav2vec2-2-gpt2-no-adapter-regularisation
|
sanchit-gandhi
| 2022-03-19T17:43:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-17T16:34:45Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7494
- Wer: 1.0532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4828 | 2.8 | 2500 | 4.0554 | 1.7873 |
| 0.8683 | 5.61 | 5000 | 2.5401 | 1.3156 |
| 0.4394 | 8.41 | 7500 | 1.7519 | 1.1129 |
| 0.0497 | 11.21 | 10000 | 1.7102 | 1.0738 |
| 0.031 | 14.01 | 12500 | 1.7395 | 1.0512 |
| 0.0508 | 16.82 | 15000 | 1.7254 | 1.0463 |
| 0.0462 | 19.62 | 17500 | 1.7494 | 1.0532 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vinaykudari/distilGPT-ft-eli5
|
vinaykudari
| 2022-03-19T17:24:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-19T16:05:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilGPT-ft-eli5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilGPT-ft-eli5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 281 | 5.8277 |
| 5.7427 | 2.0 | 562 | 5.7525 |
| 5.7427 | 3.0 | 843 | 5.7016 |
| 5.5614 | 4.0 | 1124 | 5.6593 |
| 5.5614 | 5.0 | 1405 | 5.6273 |
| 5.4408 | 6.0 | 1686 | 5.6029 |
| 5.4408 | 7.0 | 1967 | 5.5855 |
| 5.3522 | 8.0 | 2248 | 5.5739 |
| 5.2948 | 9.0 | 2529 | 5.5670 |
| 5.2948 | 10.0 | 2810 | 5.5643 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-gpt2-regularisation
|
sanchit-gandhi
| 2022-03-19T17:11:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-17T16:34:24Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8529
- Wer: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5506 | 2.8 | 2500 | 4.4928 | 1.8772 |
| 0.5145 | 5.61 | 5000 | 1.8942 | 1.1063 |
| 0.2736 | 8.41 | 7500 | 1.6550 | 1.0372 |
| 0.0807 | 11.21 | 10000 | 1.7601 | 1.0004 |
| 0.0439 | 14.01 | 12500 | 1.8014 | 1.0022 |
| 0.043 | 16.82 | 15000 | 1.8534 | 1.0097 |
| 0.0434 | 19.62 | 17500 | 1.8529 | 0.9977 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/abombayboy
|
huggingtweets
| 2022-03-19T16:13:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-19T15:53:28Z |
---
language: en
thumbnail: http://www.huggingtweets.com/abombayboy/1647706387106/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1465673407178043396/aYbTBRbu_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bombay Boy</div>
<div style="text-align: center; font-size: 14px;">@abombayboy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bombay Boy.
| Data | Bombay Boy |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 927 |
| Short tweets | 181 |
| Tweets kept | 2130 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3paz3q98/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abombayboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/331ordwj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/331ordwj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abombayboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Pavithra/code-parrot
|
Pavithra
| 2022-03-19T04:04:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-19T03:52:29Z |
# CodeParrot 🦜 (small)
CodeParrot 🦜 is a GPT-2 model (110M parameters) trained to generate Python code.
## Usage
You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot-small")
model = AutoModelWithLMHead.from_pretrained("lvwerra/codeparrot-small")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="lvwerra/codeparrot-small")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/lvwerra/codeparrot-clean) with the following settings:
|Config|Value|
|-------|-----|
|Batch size| 192 |
|Context size| 1024 |
|Training steps| 150'000|
|Gradient accumulation| 1|
|Gradient checkpointing| False|
|Learning rate| 5e-4 |
|Weight decay | 0.1 |
|Warmup steps| 2000 |
|Schedule| Cosine |
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 29 billion tokens.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | Value |
|-------|-----|
|pass@1 | 3.80% |
|pass@10 | 6.57% |
|pass@100 | 12.78% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) tells the probability that at least one out of k generations passes the tests.
## Resources
- Dataset: [full](https://huggingface.co/datasets/lvwerra/codeparrot-clean), [train](https://huggingface.co/datasets/lvwerra/codeparrot-clean-train), [valid](https://huggingface.co/datasets/lvwerra/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](), [highlighting]()
|
mikeadimech/bart-large-cnn-qmsum-meeting-summarization
|
mikeadimech
| 2022-03-18T19:00:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:yawnick/QMSum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-18T12:44:49Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-qmsum-meeting-summarization
results: []
datasets:
- yawnick/QMSum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-qmsum-meeting-summarization
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7578
- Rouge1: 37.9431
- Rouge2: 10.6366
- Rougel: 25.5782
- Rougelsum: 33.0209
- Gen Len: 72.7714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 500
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
MehSatho/Tai-medium-Hermione
|
MehSatho
| 2022-03-18T18:56:41Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-18T18:45:51Z |
---
tags:
- conversational
---
|
saattrupdan/xlmr-base-texas-squad-fr
|
saattrupdan
| 2022-03-18T16:56:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-fr
results: []
widget:
- text: "Comment obtenir la coagulation?"
context: "La coagulation peut être obtenue soit par action d'une enzyme, la présure, soit par fermentation provoquée par des bactéries lactiques (le lactose est alors transformé en acide lactique), soit très fréquemment par combinaison des deux méthodes précédentes, soit par chauffage associé à une acidification directe (vinaigre…). On procède ensuite à l'égouttage. On obtient alors le caillé et le lactosérum. Le lactosérum peut aussi être utilisé directement : fromage de lactosérum comme le sérac, ou par réincorporation de ses composants."
---
# TExAS-SQuAD-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-fr dataset.
It achieves the following results on the evaluation set:
- Exact match: xx.xx%
- F1-score: xx.xx%
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1478 | 0.23 | 1000 | 1.8543 |
| 1.9827 | 0.46 | 2000 | 1.7643 |
| 1.8427 | 0.69 | 3000 | 1.6789 |
| 1.8372 | 0.92 | 4000 | 1.6137 |
| 1.7318 | 1.15 | 5000 | 1.6093 |
| 1.6603 | 1.38 | 6000 | 1.7157 |
| 1.6334 | 1.61 | 7000 | 1.6302 |
| 1.6716 | 1.84 | 8000 | 1.5845 |
| 1.5192 | 2.06 | 9000 | 1.6690 |
| 1.5174 | 2.29 | 10000 | 1.6669 |
| 1.4611 | 2.52 | 11000 | 1.6301 |
| 1.4648 | 2.75 | 12000 | 1.6009 |
| 1.5052 | 2.98 | 13000 | 1.6133 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
mfleck/wav2vec2-large-xls-r-300m-german-with-lm
|
mfleck
| 2022-03-18T16:48:09Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-10T16:46:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-german-with-lm
results: []
---
# wav2vec2-large-xls-r-300m-german-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the German set of the Common Voice dataset.
It achieves a Word Error Rate of 8,8 percent on the evaluation set
## Model description
German wav2vec2-xls-r-300m trained on the full train set of Common Voice dataset with a n-gram language model.
Full code available in [my Github repository](https://github.com/MichaelFleck92/asr-wav2vec)
## Citation
Feel free to cite this work by
```
@misc{mfleck/wav2vec2-large-xls-r-300m-german-with-lm,
title={XLS-R-300 Wav2Vec2 German with language model},
author={Fleck, Michael},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mfleck/wav2vec2-large-xls-r-300m-german-with-lm}},
year={2022}
}
```
## Intended uses & limitations
Inference Usage
```python
from transformers import pipeline
pipe = pipeline(model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
output = pipe("/path/to/file.wav",chunk_length_s=5, stride_length_s=1)
print(output["text"])
```
## Training and evaluation data
Script used for training (takes about 80 hours on a single A100 40GB)
```python
import random
import re
import json
from typing import Any, Dict, List, Optional, Union
import pandas as pd
import numpy as np
import torch
# import soundfile
from datasets import load_dataset, load_metric, Audio
from dataclasses import dataclass, field
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor, TrainingArguments, Trainer, Wav2Vec2ForCTC
'''
Most parts of this script are following the tutorial: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
'''
common_voice_train = load_dataset("common_voice", "de", split="train+validation")
# Use train dataset with less training data
#common_voice_train = load_dataset("common_voice", "de", split="train[:3%]")
common_voice_test = load_dataset("common_voice", "de", split="test")
# Remove unused columns
common_voice_train = common_voice_train.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
# Remove batches with chars which do not exist in German
print(len(common_voice_train))
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
common_voice_train = common_voice_train.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
common_voice_test = common_voice_test.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
print(len(common_voice_train))
# Remove special chars from transcripts
chars_to_remove_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
def remove_special_characters(batch):
batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower()
return batch
common_voice_train = common_voice_train.map(remove_special_characters, num_proc=10)
common_voice_test = common_voice_test.map(remove_special_characters, num_proc=10)
# Show some random transcripts to proof that preprocessing worked as expected
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
print(str(dataset[picks]))
show_random_elements(common_voice_train.remove_columns(["path","audio"]))
# Extract all chars which exist in datasets and add wav2vek tokens
def extract_all_chars(batch):
all_text = " ".join(batch["sentence"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
vocab_train = common_voice_train.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_train.column_names)
vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names)
vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(sorted(vocab_list))}
vocab_dict
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
len(vocab_dict)
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
# Create tokenizer and repo at Huggingface
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
repo_name = "wav2vec2-large-xls-r-300m-german-with-lm"
tokenizer.push_to_hub(repo_name)
print("pushed to hub")
# Create feature extractor and processor
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# Cast audio column
common_voice_train = common_voice_train.cast_column("audio", Audio(sampling_rate=16_000))
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))
# Convert audio signal to array and 16khz sampling rate
def prepare_dataset(batch):
audio = batch["audio"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
# Save an audio file to check if it gets loaded correctly
# soundfile.write("/home/debian/trainnew/test.wav",batch["input_values"],audio["sampling_rate"])
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["sentence"]).input_ids
return batch
common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)
print("dataset prepared")
@dataclass
class DataCollatorCTCWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
processor (:class:`~transformers.Wav2Vec2Processor`)
The processor used for proccessing the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
"""
processor: Wav2Vec2Processor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lenghts and need
# different padding methods
input_features = [{"input_values": feature["input_values"]} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
with self.processor.as_target_processor():
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
# Use word error rate as metric
wer_metric = load_metric("wer")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
# Model and training parameters
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-xls-r-300m",
attention_dropout=0.094,
hidden_dropout=0.01,
feat_proj_dropout=0.04,
mask_time_prob=0.08,
layerdrop=0.04,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir=repo_name,
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=20,
gradient_checkpointing=True,
fp16=True,
save_steps=5000,
eval_steps=5000,
logging_steps=100,
learning_rate=1e-4,
warmup_steps=500,
save_total_limit=3,
push_to_hub=True,
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=common_voice_train,
eval_dataset=common_voice_test,
tokenizer=processor.feature_extractor,
)
# Start fine tuning
trainer.train()
# When done push final model to Huggingface hub
trainer.push_to_hub()
```
The model achieves a Word Error Rate of 8,8% using the following script:
```python
import argparse
import re
from typing import Dict
import torch
from datasets import Audio, Dataset, load_dataset, load_metric
from transformers import AutoFeatureExtractor, pipeline
# load dataset
dataset = load_dataset("common_voice", "de", split="test")
# use only 1% of data
#dataset = load_dataset("common_voice", "de", split="test[:1%]")
# load processor
feature_extractor = AutoFeatureExtractor.from_pretrained("mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
sampling_rate = feature_extractor.sampling_rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
# load eval pipeline
# device=0 means GPU, use device=-1 for CPU
asr = pipeline("automatic-speech-recognition", model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm", device=0)
# Remove batches with chars which do not exist in German
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
dataset = dataset.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
# map function to decode audio
def map_to_pred(batch):
prediction = asr(batch["audio"]["array"], chunk_length_s=5, stride_length_s=1)
# Print automatic generated transcript
#print(str(prediction))
batch["prediction"] = prediction["text"]
text = batch["sentence"]
batch["target"] = re.sub(chars_to_ignore_regex, "", text.lower()) + " "
return batch
# run inference on all examples
result = dataset.map(map_to_pred, remove_columns=dataset.column_names)
# load metric
wer = load_metric("wer")
cer = load_metric("cer")
# compute metrics
wer_result = wer.compute(references=result["target"], predictions=result["prediction"])
cer_result = cer.compute(references=result["target"], predictions=result["prediction"])
# print results
result_str = f"WER: {wer_result}\n" f"CER: {cer_result}"
print(result_str)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1396 | 1.42 | 5000 | 0.1449 | 0.1479 |
| 0.1169 | 2.83 | 10000 | 0.1285 | 0.1286 |
| 0.0938 | 4.25 | 15000 | 0.1277 | 0.1230 |
| 0.0924 | 5.67 | 20000 | 0.1305 | 0.1191 |
| 0.0765 | 7.09 | 25000 | 0.1256 | 0.1158 |
| 0.0749 | 8.5 | 30000 | 0.1186 | 0.1092 |
| 0.066 | 9.92 | 35000 | 0.1173 | 0.1068 |
| 0.0581 | 11.34 | 40000 | 0.1225 | 0.1030 |
| 0.0582 | 12.75 | 45000 | 0.1153 | 0.0999 |
| 0.0507 | 14.17 | 50000 | 0.1182 | 0.0971 |
| 0.0491 | 15.59 | 55000 | 0.1136 | 0.0939 |
| 0.045 | 17.01 | 60000 | 0.1140 | 0.0914 |
| 0.0395 | 18.42 | 65000 | 0.1160 | 0.0902 |
| 0.037 | 19.84 | 70000 | 0.1148 | 0.0882 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ScandinavianMrT/distilbert-IMDB-NEG
|
ScandinavianMrT
| 2022-03-18T16:43:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-18T16:15:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB-NEG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB-NEG
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1865 | 1.0 | 2000 | 0.1871 | 0.9346 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
brad1141/bert-finetuned-comp2
|
brad1141
| 2022-03-18T16:39:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-18T15:44:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-comp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-comp2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9570
- Precision: 0.5169
- Recall: 0.6765
- F1: 0.5820
- Accuracy: 0.5820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8434 | 1.0 | 934 | 0.7147 | 0.4475 | 0.6252 | 0.5096 | 0.5096 |
| 0.6307 | 2.0 | 1868 | 0.5959 | 0.5058 | 0.6536 | 0.5585 | 0.5585 |
| 0.4691 | 3.0 | 2802 | 0.6555 | 0.4761 | 0.6865 | 0.5521 | 0.5521 |
| 0.334 | 4.0 | 3736 | 0.7211 | 0.5292 | 0.6682 | 0.5863 | 0.5863 |
| 0.2326 | 5.0 | 4670 | 0.8046 | 0.4886 | 0.6865 | 0.5682 | 0.5682 |
| 0.1625 | 6.0 | 5604 | 0.8650 | 0.4972 | 0.6851 | 0.5728 | 0.5728 |
| 0.1195 | 7.0 | 6538 | 0.9570 | 0.5169 | 0.6765 | 0.5820 | 0.5820 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/wav2vec2-large-xlsr-53
|
facebook
| 2022-03-18T16:11:44Z | 557,799 | 121 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"speech",
"multilingual",
"dataset:common_voice",
"arxiv:2006.13979",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: multilingual
datasets:
- common_voice
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-XLSR-53
[Facebook's XLSR-Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper](https://arxiv.org/abs/2006.13979)
Authors: Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli
**Abstract**
This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb) for more information on how to fine-tune the model.

|
TestSB3/ppo-CartPole-v1
|
TestSB3
| 2022-03-18T13:41:36Z | 0 | 0 | null |
[
"gym",
"reinforcement-learning",
"region:us"
] |
reinforcement-learning
| 2022-03-18T10:20:52Z |
---
tags:
- gym
- reinforcement-learning
---
# TestSB3/ppo-CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1 using the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) library.
## Usage (with RL-baselines3-zoo)
Just clone the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) library.
Then run:
```python
python enjoy.py --algo ppo --env CartPole-v1
```
## Evaluation Results
Mean Reward: 500.0 +/- 0.0 (300 test episodes)
## Citing the Project
To cite this repository in publications:
```
@misc{rl-zoo3,
author = {Raffin, Antonin},
title = {RL Baselines3 Zoo},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/DLR-RM/rl-baselines3-zoo}},
}
```
|
navteca/all-mpnet-base-v2
|
navteca
| 2022-03-18T11:28:42Z | 7 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
language: en
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- sentence-transformers
---
# All MPNet base model (v2) for Semantic Search
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developped this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developped this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks.
## Intended uses
Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
We then apply the cross entropy loss by comparing with true pairs.
#### Hyper parameters
We trained ou model on a TPU v3-8. We train the model during 100k steps using a batch size of 1024 (128 per TPU core).
We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate.
#### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
|
willcai/wav2vec2_common_voice_accents_4
|
willcai
| 2022-03-18T11:11:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-18T01:46:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.615 | 1.28 | 400 | 0.8202 |
| 0.3778 | 2.56 | 800 | 0.1587 |
| 0.2229 | 3.85 | 1200 | 0.1027 |
| 0.1799 | 5.13 | 1600 | 0.0879 |
| 0.1617 | 6.41 | 2000 | 0.0772 |
| 0.1474 | 7.69 | 2400 | 0.0625 |
| 0.134 | 8.97 | 2800 | 0.0498 |
| 0.1213 | 10.26 | 3200 | 0.0429 |
| 0.1186 | 11.54 | 3600 | 0.0434 |
| 0.1118 | 12.82 | 4000 | 0.0312 |
| 0.1026 | 14.1 | 4400 | 0.0365 |
| 0.0951 | 15.38 | 4800 | 0.0321 |
| 0.0902 | 16.67 | 5200 | 0.0262 |
| 0.0843 | 17.95 | 5600 | 0.0208 |
| 0.0744 | 19.23 | 6000 | 0.0140 |
| 0.0718 | 20.51 | 6400 | 0.0204 |
| 0.0694 | 21.79 | 6800 | 0.0133 |
| 0.0636 | 23.08 | 7200 | 0.0104 |
| 0.0609 | 24.36 | 7600 | 0.0084 |
| 0.0559 | 25.64 | 8000 | 0.0050 |
| 0.0527 | 26.92 | 8400 | 0.0089 |
| 0.0495 | 28.21 | 8800 | 0.0058 |
| 0.0471 | 29.49 | 9200 | 0.0047 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
adamfeldmann/HILANCO-GPTX
|
adamfeldmann
| 2022-03-18T11:04:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-18T11:04:43Z |
---
license: apache-2.0
---
|
cammy/bart-large-cnn-100-MDS-own
|
cammy
| 2022-03-18T09:32:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-18T09:31:07Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-100-MDS-own
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-MDS-own
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5357
- Rouge1: 22.4039
- Rouge2: 4.681
- Rougel: 13.1526
- Rougelsum: 15.7986
- Gen Len: 70.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.3375 | 25.7428 | 6.754 | 16.4131 | 19.6269 | 81.9 |
| No log | 2.0 | 50 | 3.5357 | 22.4039 | 4.681 | 13.1526 | 15.7986 | 70.3 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
taozhuoqun/tzq_pretrained
|
taozhuoqun
| 2022-03-18T08:50:37Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-18T08:50:37Z |
---
license: apache-2.0
---
|
brad1141/gpt2-finetuned-comp2
|
brad1141
| 2022-03-18T08:47:38Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-18T07:26:03Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: gpt2-finetuned-comp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-comp2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7788
- Precision: 0.3801
- Recall: 0.6854
- F1: 0.4800
- Accuracy: 0.4800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0962 | 1.0 | 1012 | 0.7528 | 0.3793 | 0.6109 | 0.4411 | 0.4411 |
| 0.7022 | 2.0 | 2024 | 0.6763 | 0.3992 | 0.6557 | 0.4799 | 0.4799 |
| 0.6136 | 3.0 | 3036 | 0.6751 | 0.3995 | 0.6597 | 0.4824 | 0.4824 |
| 0.5444 | 4.0 | 4048 | 0.6799 | 0.3891 | 0.6817 | 0.4854 | 0.4854 |
| 0.4846 | 5.0 | 5060 | 0.7371 | 0.4030 | 0.6701 | 0.4906 | 0.4906 |
| 0.4379 | 6.0 | 6072 | 0.7520 | 0.3956 | 0.6788 | 0.4887 | 0.4887 |
| 0.404 | 7.0 | 7084 | 0.7788 | 0.3801 | 0.6854 | 0.4800 | 0.4800 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aaraki/bert-base-uncased-finetuned-swag
|
aaraki
| 2022-03-18T08:16:58Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-18T06:29:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Accuracy: 0.8002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6904 | 1.0 | 4597 | 0.5155 | 0.8002 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
moshew/paraphrase-mpnet-base-v2_SetFit_sst2
|
moshew
| 2022-03-18T07:53:15Z | 1 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-18T07:53:07Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# moshew/paraphrase-mpnet-base-v2_SetFit_sst2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
model = AutoModel.from_pretrained('moshew/paraphrase-mpnet-base-v2_SetFit_sst2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=moshew/paraphrase-mpnet-base-v2_SetFit_sst2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8650 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
youzanai/bert-product-title-chinese
|
youzanai
| 2022-03-18T06:19:06Z | 6 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
基于有赞商品标题语料训练的bert模型。
模型示例代码参考 https://github.com/youzanai/trexpark
|
brad1141/Longformer-finetuned-norm
|
brad1141
| 2022-03-18T05:42:11Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-18T02:29:24Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Longformer-finetuned-norm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Longformer-finetuned-norm
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8127
- Precision: 0.8429
- Recall: 0.8701
- F1: 0.8562
- Accuracy: 0.8221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8008 | 1.0 | 1012 | 0.5839 | 0.8266 | 0.8637 | 0.8447 | 0.8084 |
| 0.5168 | 2.0 | 2024 | 0.5927 | 0.7940 | 0.9102 | 0.8481 | 0.8117 |
| 0.3936 | 3.0 | 3036 | 0.5651 | 0.8476 | 0.8501 | 0.8488 | 0.8143 |
| 0.2939 | 4.0 | 4048 | 0.6411 | 0.8494 | 0.8578 | 0.8536 | 0.8204 |
| 0.2165 | 5.0 | 5060 | 0.6833 | 0.8409 | 0.8822 | 0.8611 | 0.8270 |
| 0.1561 | 6.0 | 6072 | 0.7643 | 0.8404 | 0.8810 | 0.8602 | 0.8259 |
| 0.1164 | 7.0 | 7084 | 0.8127 | 0.8429 | 0.8701 | 0.8562 | 0.8221 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl-ft-logits-5k
|
beston91
| 2022-03-18T02:54:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-17T23:54:46Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-vanilla-debiased-5000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-vanilla-debiased-5000
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.1985 |
| No log | 1.99 | 54 | 6.4583 |
| No log | 2.99 | 81 | 6.7709 |
| No log | 3.99 | 108 | 7.0371 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln26
|
BigSalmon
| 2022-03-18T02:37:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-08T22:59:35Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln26")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln26")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
|
saghar/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
|
saghar
| 2022-03-18T02:24:28Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:wikitext",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-17T21:19:53Z |
---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.9694 | 1.0 | 3125 | 5.1757 |
| 5.2228 | 2.0 | 6250 | 4.8847 |
| 5.0653 | 3.0 | 9375 | 4.8236 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
huggingtweets/charlieykim
|
huggingtweets
| 2022-03-17T20:43:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/780199274046976001/ewIzqDV5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Charlie Kim</div>
<div style="text-align: center; font-size: 14px;">@charlieykim</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Charlie Kim.
| Data | Charlie Kim |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 234 |
| Short tweets | 29 |
| Tweets kept | 2985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ql0zb69/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @charlieykim's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/arss897f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/arss897f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/charlieykim')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cammy/led-large-16384-arxiv-100-MDS
|
cammy
| 2022-03-17T19:09:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-17T18:49:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-large-16384-arxiv-100-MDS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-arxiv-100-MDS
This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3897
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 512.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.1144 | 13.2756 | 2.6204 | 9.2686 | 10.2289 | 184.0 |
| No log | 2.0 | 50 | 3.3897 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
niksss/Hinglish-HATEBERT
|
niksss
| 2022-03-17T18:43:00Z | 13 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-17T17:47:30Z |
---
license: afl-3.0
---
Fine-Tune it using this [nb](https://colab.research.google.com/drive/1JRmrAYR0pcEWyni_VtT4SSFxZ5adlAhS?usp=sharing)
|
swetava/distilbert-base-uncased-finetuned-emotion
|
swetava
| 2022-03-17T18:42:58Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-17T18:13:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.924792312369614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8432 | 1.0 | 250 | 0.3353 | 0.8975 | 0.8939 |
| 0.2571 | 2.0 | 500 | 0.2259 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Nadav/MacSQuAD
|
Nadav
| 2022-03-17T18:20:05Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T12:14:12Z |
---
license: afl-3.0
---
A MacBERTh model fine-tuned on SQuAD_v2. Hopefully, this will allow the model to perform well on QA tasks on historical texts.
Finetune parameters:
```
training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
learning_rate=3e-5,
per_device_train_batch_size=64,
per_device_eval_batch_size=64,
num_train_epochs=2,
weight_decay=0.01,
lr_scheduler_type=SchedulerType.LINEAR,
warmup_ratio=0.2
)
```
Evaluation metrics on the validation set of SQuAD_v2:
```
{'exact': 49.49886296639434, 'f1': 53.9199170778635, 'total': 11873, 'HasAns_exact': 60.08771929824562, 'HasAns_f1': 68.94250598270429, 'HasAns_total': 5928, 'NoAns_exact': 38.940285954583686, 'NoAns_f1': 38.940285954583686, 'NoAns_total': 5945, 'best_exact': 50.5095595047587, 'best_exact_thresh': 0.0, 'best_f1': 51.75825524534494, 'best_f1_thresh': 0.0}
```
|
osanseviero/magma
|
osanseviero
| 2022-03-17T17:21:03Z | 0 | 4 | null |
[
"arxiv:2112.05253",
"license:mit",
"region:us"
] | null | 2022-03-17T12:59:07Z |
---
license: mit
---
# MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning
Paper: https://arxiv.org/abs/2112.05253
## Abstract
Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state of the art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.
## Usage
```py
from magma import Magma
from huggingface_hub import hf_hub_url, cached_download
checkpoint_url = hf_hub_url(repo_id="osanseviero/magma", filename="model.pt")
checkpoint_path = cached_download(checkpoint_url)
model = Magma.from_checkpoint(
config_path = "configs/MAGMA_v1.yml",
checkpoint_path = checkpoint_path,
device = 'cuda:0'
)
```
|
newtonkwan/gpt2-xl-ft-4
|
newtonkwan
| 2022-03-17T16:38:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-17T15:00:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-4
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 15 | 3.5549 |
| No log | 1.96 | 30 | 1.4216 |
| No log | 2.96 | 45 | 1.2969 |
| No log | 3.96 | 60 | 1.2823 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 35.67070770263672
### Dataset Size
Size: 5000
|
sanchit-gandhi/wav2vec2-2-bart-debug
|
sanchit-gandhi
| 2022-03-17T16:28:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-17T14:46:58Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES
|
StivenLancheros
| 2022-03-17T14:51:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-17T13:21:45Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES
This model is a fine-tuned version of [StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Precision: 0.8535
- Recall: 0.8476
- F1: 0.8505
- Accuracy: 0.9705
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT(Colorado Richly Annotated Full Text) Corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three letter code to a full name e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT translated CRAFT) were concatenated.
To improve F1 score the transfer learning was completed in two steps.
Using [StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES) as a base model, I finetuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT ES (MT translated)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0177 | 1.0 | 1360 | 0.2318 | 0.8510 | 0.8275 | 0.8391 | 0.9684 |
| 0.0102 | 2.0 | 2720 | 0.2253 | 0.8322 | 0.8455 | 0.8388 | 0.9683 |
| 0.0039 | 3.0 | 4080 | 0.2193 | 0.8383 | 0.8451 | 0.8416 | 0.9689 |
| 0.002 | 4.0 | 5440 | 0.2298 | 0.8535 | 0.8476 | 0.8505 | 0.9705 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
|
StivenLancheros
| 2022-03-17T14:45:49Z | 654 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-15T22:41:38Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2299
- Precision: 0.8122
- Recall: 0.8475
- F1: 0.8294
- Accuracy: 0.9661
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT(Colorado Richly Annotated Full Text) Corpus in Spanish and English. Entity tags have been normalized and replaced from the original three letter code to a full name e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0542 | 1.0 | 2719 | 0.1540 | 0.7834 | 0.8300 | 0.8060 | 0.9622 |
| 0.0229 | 2.0 | 5438 | 0.1920 | 0.8092 | 0.8219 | 0.8155 | 0.9644 |
| 0.0069 | 3.0 | 8157 | 0.2054 | 0.8130 | 0.8481 | 0.8302 | 0.9656 |
| 0.0023 | 4.0 | 10876 | 0.2299 | 0.8122 | 0.8475 | 0.8294 | 0.9661 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Jungwonchang/wav2vec2-large-xls-r-300m-vietnamese-colab
|
Jungwonchang
| 2022-03-17T11:55:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-17T11:09:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-vietnamese-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vietnamese-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
gsarti/it5-base-informal-to-formal
|
gsarti
| 2022-03-17T09:52:48Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"style-transfer",
"formality-style-transfer",
"it",
"dataset:yahoo/xformal_it",
"arxiv:2203.03759",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!"
- text: "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!"
- text: "nn capisco xke tt i ragazzi lo fanno"
- text: "IT5 è SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!"
metrics:
- rouge
- bertscore
model-index:
- name: it5-base-informal-to-formal
results:
- task:
type: formality-style-transfer
name: "Informal-to-formal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.583
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.403
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.561
name: "Avg. Test RougeL"
- type: bertscore
value: 0.641
name: "Avg. Test BERTScore"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "17g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
---
# IT5 Base for Informal-to-formal Style Transfer 🧐
This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipelines
i2f = pipeline("text2text-generation", model='it5/it5-base-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-informal-to-formal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
sanchit-gandhi/wav2vec2-2-rnd-no-adapter
|
sanchit-gandhi
| 2022-03-17T06:35:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-15T19:50:28Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8384
- Wer: 0.1367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2245 | 1.68 | 1500 | 6.1442 | 1.5986 |
| 5.4521 | 3.36 | 3000 | 5.4335 | 1.6439 |
| 3.3659 | 5.04 | 4500 | 3.6455 | 0.6503 |
| 1.5724 | 6.73 | 6000 | 2.3554 | 0.3386 |
| 1.4759 | 8.41 | 7500 | 1.7423 | 0.2889 |
| 1.0826 | 10.09 | 9000 | 1.3818 | 0.2209 |
| 0.6769 | 11.77 | 10500 | 1.1268 | 0.1737 |
| 0.7348 | 13.45 | 12000 | 0.9990 | 0.1575 |
| 0.5419 | 15.13 | 13500 | 0.9435 | 0.1560 |
| 0.4212 | 16.82 | 15000 | 0.8678 | 0.1405 |
| 0.3805 | 18.5 | 16500 | 0.8384 | 0.1367 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
saghar/TinyBERT_General_6L_768D-finetuned-wikitext103
|
saghar
| 2022-03-17T06:14:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:wikitext",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-16T22:46:35Z |
---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: TinyBERT_General_6L_768D-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyBERT_General_6L_768D-finetuned-wikitext103
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_6L_768D](https://huggingface.co/huawei-noah/TinyBERT_General_6L_768D) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3768
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1792 | 1.0 | 3125 | 3.5465 |
| 3.6726 | 2.0 | 6250 | 3.4226 |
| 3.6065 | 3.0 | 9375 | 3.3768 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
zdepablo/distilbert-base-uncased-distilled-clinc
|
zdepablo
| 2022-03-17T02:41:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-17T02:34:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9474193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2587
- Accuracy: 0.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2192 | 1.0 | 318 | 3.1512 | 0.7519 |
| 2.3972 | 2.0 | 636 | 1.5605 | 0.8519 |
| 1.1587 | 3.0 | 954 | 0.7688 | 0.9139 |
| 0.5616 | 4.0 | 1272 | 0.4672 | 0.9319 |
| 0.3001 | 5.0 | 1590 | 0.3414 | 0.9403 |
| 0.1817 | 6.0 | 1908 | 0.2952 | 0.9432 |
| 0.1228 | 7.0 | 2226 | 0.2714 | 0.9468 |
| 0.0939 | 8.0 | 2544 | 0.2605 | 0.9465 |
| 0.0799 | 9.0 | 2862 | 0.2600 | 0.9468 |
| 0.0736 | 10.0 | 3180 | 0.2587 | 0.9474 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sanchit-gandhi/wav2vec2-2-rnd-2-layer-no-adapter
|
sanchit-gandhi
| 2022-03-17T02:23:57Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-15T19:50:51Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8365
- Wer: 0.2812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.8017 | 1.68 | 1500 | 5.7161 | 1.3220 |
| 4.5907 | 3.36 | 3000 | 4.7936 | 0.9799 |
| 3.151 | 5.04 | 4500 | 4.1610 | 0.7752 |
| 1.5166 | 6.73 | 6000 | 3.5939 | 0.5343 |
| 2.4523 | 8.41 | 7500 | 4.0013 | 0.6954 |
| 1.423 | 10.09 | 9000 | 2.6917 | 0.4476 |
| 0.7882 | 11.77 | 10500 | 2.4493 | 0.3967 |
| 1.1643 | 13.45 | 12000 | 2.0629 | 0.3234 |
| 0.5352 | 15.13 | 13500 | 2.0625 | 0.3363 |
| 0.407 | 16.82 | 15000 | 1.8378 | 0.2812 |
| 0.1162 | 18.5 | 16500 | 1.8365 | 0.2812 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Connorvr/TeachingGen
|
Connorvr
| 2022-03-17T00:14:01Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nbroad/mrasp2-6e6d-no-mono
|
nbroad
| 2022-03-17T00:13:23Z | 6 | 2 |
transformers
|
[
"transformers",
"fsmt",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
Model not mine.
Taken from here https://github.com/PANXiao1994/mRASP2
|
newtonkwan/gpt2-xl-ft-1
|
newtonkwan
| 2022-03-16T23:52:23Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-13T18:22:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-with-non-challenging
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-with-non-challenging
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2020
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 31 | 1.5517 |
| No log | 1.99 | 62 | 1.3733 |
| No log | 2.99 | 93 | 1.4207 |
| No log | 3.99 | 124 | 1.4872 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
### Perplexity
Score: 28.26373863220215
### Dataset Size
Size: 5000
|
zdepablo/distilbert-base-uncased-finetuned-clinc
|
zdepablo
| 2022-03-16T23:33:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-16T23:28:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7712
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2892 | 1.0 | 318 | 3.2831 | 0.7429 |
| 2.6246 | 2.0 | 636 | 1.8742 | 0.8326 |
| 1.5444 | 3.0 | 954 | 1.1526 | 0.8939 |
| 1.0097 | 4.0 | 1272 | 0.8568 | 0.9106 |
| 0.7929 | 5.0 | 1590 | 0.7712 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
GleamEyeBeast/ASCEND_Dataset_Model
|
GleamEyeBeast
| 2022-03-16T22:58:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-14T17:38:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ASCEND_Dataset_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASCEND_Dataset_Model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9199
- Wer: 0.9540
- Cer: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 16.9063 | 1.0 | 687 | 4.7768 | 1.0 | 1.0 |
| 5.0252 | 2.0 | 1374 | 4.7004 | 1.0 | 1.0 |
| 4.9378 | 3.0 | 2061 | 4.6715 | 1.0 | 1.0 |
| 5.1468 | 4.0 | 2748 | 4.6605 | 1.0 | 1.0 |
| 4.9353 | 5.0 | 3435 | 4.6470 | 1.0 | 1.0 |
| 4.913 | 6.0 | 4122 | 4.6177 | 1.0 | 1.0 |
| 4.8034 | 7.0 | 4809 | 4.7699 | 1.0 | 1.0 |
| 4.6905 | 8.0 | 5496 | 4.3596 | 1.0 | 1.0 |
| 4.5251 | 9.0 | 6183 | 4.2670 | 1.0 | 1.0 |
| 4.4527 | 10.0 | 6870 | 4.2087 | 1.0 | 1.0 |
| 4.3731 | 11.0 | 7557 | 4.1950 | 0.9982 | 0.9997 |
| 4.3461 | 12.0 | 8244 | 4.2287 | 0.9928 | 0.9988 |
| 4.3224 | 13.0 | 8931 | 4.1565 | 0.9802 | 0.9971 |
| 4.2504 | 14.0 | 9618 | 4.1254 | 0.9619 | 0.9937 |
| 4.2196 | 15.0 | 10305 | 4.0377 | 0.9562 | 0.9913 |
| 4.1911 | 16.0 | 10992 | 4.0576 | 0.9601 | 0.9887 |
| 4.1079 | 17.0 | 11679 | 4.0630 | 0.9544 | 0.9857 |
| 4.1117 | 18.0 | 12366 | 4.0009 | 0.9558 | 0.9880 |
| 4.0324 | 19.0 | 13053 | 3.9245 | 0.9540 | 0.9877 |
| 3.9871 | 20.0 | 13740 | 3.9199 | 0.9540 | 0.9868 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Guen/guen_test_prompt_generation
|
Guen
| 2022-03-16T22:33:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-16T22:18:10Z |
A small language generation head to generate text from a prompt.
Fine-tuned on the t5-base model with the aeslc dataset.
|
newtonkwan/gpt2-xl-ft-0
|
newtonkwan
| 2022-03-16T21:58:33Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-16T14:26:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-0
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 6 | 5.1701 |
| No log | 1.96 | 12 | 4.1214 |
| No log | 2.96 | 18 | 2.5305 |
| No log | 3.96 | 24 | 2.0324 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.31455421447754
### Dataset Size
Size: 1000
|
ScandinavianMrT/gpt2_prefinetune_IMDB
|
ScandinavianMrT
| 2022-03-16T19:05:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-16T18:44:37Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_prefinetune_IMDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_prefinetune_IMDB
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7838 | 1.0 | 2997 | 3.6875 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
navteca/nli-deberta-v3-large
|
navteca
| 2022-03-16T19:00:26Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"microsoft/deberta-v3-large",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-16T18:53:23Z |
---
datasets:
- multi_nli
- snli
language: en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-large
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49
For futher evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
|
horsbug98/Part_1_mBERT_Model_E1
|
horsbug98
| 2022-03-16T18:48:12Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T18:20:20Z |
---
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_bert_finetuned_dutch_task2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_bert_finetuned_dutch_task2_1
This model is a fine-tuned version of [henryk/bert-base-multilingual-cased-finetuned-dutch-squad2](https://huggingface.co/henryk/bert-base-multilingual-cased-finetuned-dutch-squad2) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
DrishtiSharma/poem-gen-gpt2-small-spanish
|
DrishtiSharma
| 2022-03-16T18:46:26Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-16T17:46:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: poem-gen-gpt2-small-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-gpt2-small-spanish
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2121 | 1.0 | 2569 | 3.9954 |
| 4.0612 | 2.0 | 5138 | 3.9375 |
| 3.9988 | 3.0 | 7707 | 3.9229 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nandezgarcia/distilbert-base-uncased-finetuned-squad-d5716d28
|
nandezgarcia
| 2022-03-16T18:26:49Z | 0 | 0 | null |
[
"pytorch",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2022-03-16T18:26:44Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ScandinavianMrT/distilbert-IMDB-POS
|
ScandinavianMrT
| 2022-03-16T18:15:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-16T17:42:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1905
- Accuracy: 0.9295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1928 | 1.0 | 2000 | 0.1905 | 0.9295 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
horsbug98/Part_2_BERT_Multilingual_Dutch_Model_E1
|
horsbug98
| 2022-03-16T18:09:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T17:44:30Z |
---
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_bert_finetuned_dutch_task2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_bert_finetuned_dutch_task2_1
This model is a fine-tuned version of [henryk/bert-base-multilingual-cased-finetuned-dutch-squad2](https://huggingface.co/henryk/bert-base-multilingual-cased-finetuned-dutch-squad2) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
anton-l/xls-r-300m-bart-base
|
anton-l
| 2022-03-16T17:27:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-16T16:36:37Z |
---
license: apache-2.0
---
|
adityavithaldas/Fashion_Category_Classifier
|
adityavithaldas
| 2022-03-16T16:05:18Z | 0 | 4 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-16T15:59:51Z |
---
license: cc-by-4.0
---
This model uses the Deep Fashion dataset in order to create a category classifier among the 50 or so provided categories.
https://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html
This model leverages the ViT (Vision transformer), loaded with the custom dataset and the 50 odd categoes to which they are assigned. The objective here, is to expand the same and get to
a. An accuracy level of 90+ in the top 5 categorizes
b. An accuracy of 70+ overall.
In addition, we would also look forward to creating attribute extractors, to extract key attributes (primary color, checked, sleeve, collar etc) as we proceed
|
Ebtihal/AraBertMo_base_V7
|
Ebtihal
| 2022-03-16T13:27:40Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
Arabic Model AraBertMo_base_V7
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architechture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V7' model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
this model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 7 | 64 | 5915 | 5h 23m 5s | 7.1381 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V7")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V7")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
newtonkwan/gpt2-xl-ft-with-non-challenging-0.8
|
newtonkwan
| 2022-03-16T13:27:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-16T12:05:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-with-non-challenging-0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-with-non-challenging-0.8
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.4443 |
| No log | 2.0 | 2 | 5.4221 |
| No log | 3.0 | 3 | 5.3779 |
| No log | 4.0 | 4 | 5.3121 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 10
|
mazenalasali/layoutlmv2-finetuned-funsd-test
|
mazenalasali
| 2022-03-16T13:02:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T11:54:16Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-funsd-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.0.0
- Tokenizers 0.11.6
|
krinal214/xlm-all
|
krinal214
| 2022-03-16T13:01:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T12:19:11Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: xlm-all-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-all-final
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4483 | 1.0 | 3381 | 0.6038 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
RobertoMCA97/xlm-roberta-base-finetuned-panx-it
|
RobertoMCA97
| 2022-03-16T12:56:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T12:41:09Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.822805578342904
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2323
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8126 | 1.0 | 70 | 0.3361 | 0.7231 |
| 0.2995 | 2.0 | 140 | 0.2526 | 0.8079 |
| 0.1865 | 3.0 | 210 | 0.2323 | 0.8228 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
krinal214/xlm-3lang
|
krinal214
| 2022-03-16T12:55:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T12:40:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: xlm-eng-beng-tel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-eng-beng-tel
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2927 | 1.0 | 810 | 0.7303 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
ixa-ehu/roberta-eus-mc4-base-cased
|
ixa-ehu
| 2022-03-16T11:49:27Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"basque",
"eu",
"arxiv:2203.08111",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-16T09:56:03Z |
---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus mc4 base cased
This is a RoBERTa model for Basque model presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, using different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa model trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri,
Olatz Perez-de-Viñaspre, Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ixa-ehu/roberta-eus-euscrawl-base-cased
|
ixa-ehu
| 2022-03-16T11:48:42Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"basque",
"eu",
"arxiv:2203.08111",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-16T09:54:43Z |
---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus Euscrawl base cased
This is a RoBERTa model for Basque model presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, which are pre-trained using different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: Basque RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri,
Olatz Perez-de-Viñaspre, Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tae898/emoberta-base
|
tae898
| 2022-03-16T11:01:29Z | 124 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"emoberta",
"en",
"dataset:MELD",
"dataset:IEMOCAP",
"arxiv:2108.12009",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T20:03:08Z |
---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Coversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to correctly classify the emotion of the utterances.The experiments were carried out on two datasets (i.e. MELD and IEMOCAP)
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up with the system python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyper parameters and the dataset in `train-erc-text.yaml` and then,
In this directory run the below commands. I recommend you to run this in a virtualenv.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | → *without speaker names* | 65.07 | 64.02 |
Above numbers are the mean values of five random seed runs.
If you want to see more training test details, check out `./results/`
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor take previous utterances into account, meaning that it only classifies one utterance at a time without the speaker information (e.g., "I love you").
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
This method is less recommended than the docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
The [`app.py`](app.py) is a flask RESTful server. The usage is below:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and the run the [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to see in the github issue tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to be learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
|
navteca/nli-deberta-v3-xsmall
|
navteca
| 2022-03-16T09:49:34Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"microsoft/deberta-v3-xsmall",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-16T09:37:56Z |
---
datasets:
- multi_nli
- snli
language: en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-xsmall
---
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77
For futher evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
|
navteca/ms-marco-MiniLM-L-6-v2
|
navteca
| 2022-03-16T09:36:49Z | 103,091 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-16T09:26:53Z |
---
language: en
license: mit
pipeline_tag: text-classification
tags:
- sentence-transformers
---
# Cross-Encoder for MS Marco
The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in a decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Training Data
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
## Usage
The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
aws-ai/vascl-roberta-base
|
aws-ai
| 2022-03-16T04:22:10Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-16T03:39:04Z |
---
license: apache-2.0
---
|
lijingxin/bert-base-uncased-issues-128
|
lijingxin
| 2022-03-16T03:19:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-15T15:32:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0981 | 1.0 | 291 | 1.6917 |
| 1.6493 | 2.0 | 582 | 1.4357 |
| 1.4831 | 3.0 | 873 | 1.3923 |
| 1.3957 | 4.0 | 1164 | 1.4056 |
| 1.3339 | 5.0 | 1455 | 1.1944 |
| 1.2936 | 6.0 | 1746 | 1.2888 |
| 1.2458 | 7.0 | 2037 | 1.2715 |
| 1.2004 | 8.0 | 2328 | 1.1992 |
| 1.1785 | 9.0 | 2619 | 1.1726 |
| 1.1389 | 10.0 | 2910 | 1.2157 |
| 1.1313 | 11.0 | 3201 | 1.1977 |
| 1.0935 | 12.0 | 3492 | 1.1794 |
| 1.0826 | 13.0 | 3783 | 1.2260 |
| 1.0729 | 14.0 | 4074 | 1.1549 |
| 1.0599 | 15.0 | 4365 | 1.1269 |
| 1.0538 | 16.0 | 4656 | 1.2540 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mmohamme/distilbert-base-uncased-finetuned-btc
|
mmohamme
| 2022-03-16T02:21:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T16:41:42Z |
### distilbert-base-uncased-finetuned-btc for PH66 Unwanted Event
- This is our initial attempt on using the transformers for BTC-PH66.
- This test file used in this model are in projects with Ids [1065, 950, 956, 2650]. The other 4 projects were not included as they resulted very low accuracy with ML models.
- The data was preprocessed to remove duplicates, and cases where the same cause-conseq has a different Unwanted Event
- Next Model: Improve the hyper-parameters of the model
|
aytugkaya/xlm-roberta-base-finetuned-panx-de
|
aytugkaya
| 2022-03-16T02:12:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T01:55:14Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8650707909251151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1474
- F1: 0.8651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2498 | 1.0 | 1049 | 0.1835 | 0.8213 |
| 0.1293 | 2.0 | 2098 | 0.1448 | 0.8481 |
| 0.0788 | 3.0 | 3147 | 0.1474 | 0.8651 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
kSaluja/roberta-finetuned-ner-without-data-sort
|
kSaluja
| 2022-03-16T01:27:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T00:41:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-finetuned-ner-without-data-sort
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-ner-without-data-sort
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
- Precision: 0.9914
- Recall: 0.9909
- F1: 0.9912
- Accuracy: 0.9920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.1879 | 0.9378 | 0.9414 | 0.9396 | 0.9493 |
| No log | 2.0 | 426 | 0.1038 | 0.9725 | 0.9750 | 0.9737 | 0.9751 |
| 0.4424 | 3.0 | 639 | 0.0701 | 0.9861 | 0.9851 | 0.9856 | 0.9863 |
| 0.4424 | 4.0 | 852 | 0.0637 | 0.9882 | 0.9880 | 0.9881 | 0.9880 |
| 0.0675 | 5.0 | 1065 | 0.0546 | 0.9851 | 0.9878 | 0.9865 | 0.9879 |
| 0.0675 | 6.0 | 1278 | 0.0480 | 0.9894 | 0.9904 | 0.9899 | 0.9901 |
| 0.0675 | 7.0 | 1491 | 0.0473 | 0.9919 | 0.9904 | 0.9912 | 0.9911 |
| 0.0426 | 8.0 | 1704 | 0.0441 | 0.9921 | 0.9916 | 0.9919 | 0.9921 |
| 0.0426 | 9.0 | 1917 | 0.0426 | 0.9921 | 0.9916 | 0.9919 | 0.9922 |
| 0.033 | 10.0 | 2130 | 0.0420 | 0.9914 | 0.9909 | 0.9912 | 0.9920 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AnnaR/literature_summarizer
|
AnnaR
| 2022-03-15T23:54:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-15T23:47:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AnnaR/literature_summarizer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AnnaR/literature_summarizer
This model is a fine-tuned version of [sshleifer/distilbart-xsum-1-1](https://huggingface.co/sshleifer/distilbart-xsum-1-1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2180
- Validation Loss: 4.7198
- Epoch: 10
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.6694 | 5.0234 | 0 |
| 4.9191 | 4.8161 | 1 |
| 4.5770 | 4.7170 | 2 |
| 4.3268 | 4.6571 | 3 |
| 4.1073 | 4.6296 | 4 |
| 3.9225 | 4.6279 | 5 |
| 3.7564 | 4.6288 | 6 |
| 3.5989 | 4.6731 | 7 |
| 3.4611 | 4.6767 | 8 |
| 3.3356 | 4.6934 | 9 |
| 3.2180 | 4.7198 | 10 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
responsibility-framing/predict-perception-xlmr-cause-none
|
responsibility-framing
| 2022-03-15T23:44:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T23:39:12Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-none
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8639
- Rmse: 1.3661
- Rmse Cause::a Spontanea, priva di un agente scatenante: 1.3661
- Mae: 1.0795
- Mae Cause::a Spontanea, priva di un agente scatenante: 1.0795
- R2: -1.7872
- R2 Cause::a Spontanea, priva di un agente scatenante: -1.7872
- Cos: -0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3501
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Spontanea, priva di un agente scatenante | Mae | Mae Cause::a Spontanea, priva di un agente scatenante | R2 | R2 Cause::a Spontanea, priva di un agente scatenante | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------------:|:------:|:-----------------------------------------------------:|:-------:|:----------------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0626 | 1.0 | 15 | 0.6787 | 0.8244 | 0.8244 | 0.7453 | 0.7453 | -0.0149 | -0.0149 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0186 | 2.0 | 30 | 0.6769 | 0.8233 | 0.8233 | 0.7457 | 0.7457 | -0.0122 | -0.0122 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 1.0346 | 3.0 | 45 | 0.6812 | 0.8259 | 0.8259 | 0.7489 | 0.7489 | -0.0187 | -0.0187 | 0.0435 | 0.0 | 0.5 | 0.2515 | nan |
| 0.9481 | 4.0 | 60 | 1.0027 | 1.0020 | 1.0020 | 0.8546 | 0.8546 | -0.4994 | -0.4994 | -0.3043 | 0.0 | 0.5 | 0.2579 | nan |
| 0.8838 | 5.0 | 75 | 0.9352 | 0.9677 | 0.9677 | 0.8463 | 0.8463 | -0.3985 | -0.3985 | -0.2174 | 0.0 | 0.5 | 0.2966 | nan |
| 0.7971 | 6.0 | 90 | 0.9396 | 0.9700 | 0.9700 | 0.8608 | 0.8608 | -0.4050 | -0.4050 | -0.2174 | 0.0 | 0.5 | 0.3156 | nan |
| 0.8182 | 7.0 | 105 | 0.9485 | 0.9746 | 0.9746 | 0.8509 | 0.8509 | -0.4184 | -0.4184 | -0.1304 | 0.0 | 0.5 | 0.2788 | nan |
| 0.696 | 8.0 | 120 | 1.1396 | 1.0682 | 1.0682 | 0.9309 | 0.9309 | -0.7041 | -0.7041 | -0.1304 | 0.0 | 0.5 | 0.2899 | nan |
| 0.6337 | 9.0 | 135 | 1.3064 | 1.1437 | 1.1437 | 0.9612 | 0.9612 | -0.9536 | -0.9536 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.5308 | 10.0 | 150 | 1.2403 | 1.1144 | 1.1144 | 0.9359 | 0.9359 | -0.8547 | -0.8547 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.5226 | 11.0 | 165 | 1.3433 | 1.1597 | 1.1597 | 0.9542 | 0.9542 | -1.0087 | -1.0087 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.474 | 12.0 | 180 | 1.5321 | 1.2386 | 1.2386 | 1.0340 | 1.0340 | -1.2910 | -1.2910 | -0.3043 | 0.0 | 0.5 | 0.3205 | nan |
| 0.3899 | 13.0 | 195 | 1.6322 | 1.2784 | 1.2784 | 1.0083 | 1.0083 | -1.4408 | -1.4408 | -0.3043 | 0.0 | 0.5 | 0.3590 | nan |
| 0.3937 | 14.0 | 210 | 1.7519 | 1.3244 | 1.3244 | 1.0540 | 1.0540 | -1.6197 | -1.6197 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.4128 | 15.0 | 225 | 1.8588 | 1.3643 | 1.3643 | 1.0765 | 1.0765 | -1.7797 | -1.7797 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.3424 | 16.0 | 240 | 1.7211 | 1.3128 | 1.3128 | 1.0217 | 1.0217 | -1.5737 | -1.5737 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.3307 | 17.0 | 255 | 1.7802 | 1.3351 | 1.3351 | 1.0790 | 1.0790 | -1.6621 | -1.6621 | -0.3043 | 0.0 | 0.5 | 0.3205 | nan |
| 0.2972 | 18.0 | 270 | 1.5272 | 1.2366 | 1.2366 | 0.9945 | 0.9945 | -1.2837 | -1.2837 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2862 | 19.0 | 285 | 1.7213 | 1.3128 | 1.3128 | 1.0574 | 1.0574 | -1.5740 | -1.5740 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2844 | 20.0 | 300 | 1.8999 | 1.3793 | 1.3793 | 1.0930 | 1.0930 | -1.8411 | -1.8411 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2404 | 21.0 | 315 | 1.9806 | 1.4082 | 1.4082 | 1.1221 | 1.1221 | -1.9617 | -1.9617 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2349 | 22.0 | 330 | 1.8649 | 1.3665 | 1.3665 | 1.0953 | 1.0953 | -1.7888 | -1.7888 | -0.3913 | 0.0 | 0.5 | 0.3815 | nan |
| 0.2323 | 23.0 | 345 | 1.8256 | 1.3520 | 1.3520 | 1.0694 | 1.0694 | -1.7299 | -1.7299 | -0.3913 | 0.0 | 0.5 | 0.4018 | nan |
| 0.2217 | 24.0 | 360 | 1.9150 | 1.3847 | 1.3847 | 1.1017 | 1.1017 | -1.8636 | -1.8636 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2262 | 25.0 | 375 | 1.8536 | 1.3624 | 1.3624 | 1.0667 | 1.0667 | -1.7719 | -1.7719 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2052 | 26.0 | 390 | 1.7727 | 1.3323 | 1.3323 | 1.0475 | 1.0475 | -1.6508 | -1.6508 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2121 | 27.0 | 405 | 1.8088 | 1.3458 | 1.3458 | 1.0588 | 1.0588 | -1.7048 | -1.7048 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.1723 | 28.0 | 420 | 1.8283 | 1.3530 | 1.3530 | 1.0628 | 1.0628 | -1.7340 | -1.7340 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.1932 | 29.0 | 435 | 1.8566 | 1.3635 | 1.3635 | 1.0763 | 1.0763 | -1.7764 | -1.7764 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
| 0.2157 | 30.0 | 450 | 1.8639 | 1.3661 | 1.3661 | 1.0795 | 1.0795 | -1.7872 | -1.7872 | -0.3043 | 0.0 | 0.5 | 0.3501 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
krinal214/bert-3lang
|
krinal214
| 2022-03-15T23:30:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-15T23:17:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: bert-3lang
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-3lang
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8161 | 1.0 | 905 | 0.6422 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
responsibility-framing/predict-perception-xlmr-focus-object
|
responsibility-framing
| 2022-03-15T23:23:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T23:19:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1927
- Rmse: 0.5495
- Rmse Focus::a Su un oggetto: 0.5495
- Mae: 0.4174
- Mae Focus::a Su un oggetto: 0.4174
- R2: 0.5721
- R2 Focus::a Su un oggetto: 0.5721
- Cos: 0.5652
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5518
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0316 | 1.0 | 15 | 0.6428 | 1.0035 | 1.0035 | 0.8806 | 0.8806 | -0.4272 | -0.4272 | -0.4783 | 0.0 | 0.5 | 0.5302 | nan |
| 1.0005 | 2.0 | 30 | 0.4564 | 0.8456 | 0.8456 | 0.7078 | 0.7078 | -0.0134 | -0.0134 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.9519 | 3.0 | 45 | 0.4151 | 0.8063 | 0.8063 | 0.6797 | 0.6797 | 0.0784 | 0.0784 | 0.1304 | 0.0 | 0.5 | 0.4888 | nan |
| 0.92 | 4.0 | 60 | 0.3982 | 0.7898 | 0.7898 | 0.6516 | 0.6516 | 0.1159 | 0.1159 | 0.2174 | 0.0 | 0.5 | 0.5036 | nan |
| 0.8454 | 5.0 | 75 | 0.2739 | 0.6550 | 0.6550 | 0.5292 | 0.5292 | 0.3919 | 0.3919 | 0.6522 | 0.0 | 0.5 | 0.4160 | nan |
| 0.7247 | 6.0 | 90 | 0.2413 | 0.6148 | 0.6148 | 0.5347 | 0.5347 | 0.4642 | 0.4642 | 0.4783 | 0.0 | 0.5 | 0.3453 | nan |
| 0.6055 | 7.0 | 105 | 0.3109 | 0.6978 | 0.6978 | 0.6115 | 0.6115 | 0.3098 | 0.3098 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.5411 | 8.0 | 120 | 0.3932 | 0.7848 | 0.7848 | 0.6712 | 0.6712 | 0.1271 | 0.1271 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.4784 | 9.0 | 135 | 0.1316 | 0.4540 | 0.4540 | 0.3750 | 0.3750 | 0.7079 | 0.7079 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.4039 | 10.0 | 150 | 0.2219 | 0.5896 | 0.5896 | 0.4954 | 0.4954 | 0.5074 | 0.5074 | 0.5652 | 0.0 | 0.5 | 0.4838 | nan |
| 0.3415 | 11.0 | 165 | 0.1935 | 0.5505 | 0.5505 | 0.4443 | 0.4443 | 0.5704 | 0.5704 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3369 | 12.0 | 180 | 0.2118 | 0.5761 | 0.5761 | 0.4554 | 0.4554 | 0.5296 | 0.5296 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3083 | 13.0 | 195 | 0.1928 | 0.5496 | 0.5496 | 0.4368 | 0.4368 | 0.5718 | 0.5718 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2678 | 14.0 | 210 | 0.2205 | 0.5877 | 0.5877 | 0.4472 | 0.4472 | 0.5105 | 0.5105 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2199 | 15.0 | 225 | 0.2118 | 0.5760 | 0.5760 | 0.4689 | 0.4689 | 0.5297 | 0.5297 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2238 | 16.0 | 240 | 0.2461 | 0.6209 | 0.6209 | 0.5047 | 0.5047 | 0.4537 | 0.4537 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2233 | 17.0 | 255 | 0.2307 | 0.6011 | 0.6011 | 0.4618 | 0.4618 | 0.4879 | 0.4879 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1903 | 18.0 | 270 | 0.2207 | 0.5880 | 0.5880 | 0.4432 | 0.4432 | 0.5100 | 0.5100 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1714 | 19.0 | 285 | 0.2146 | 0.5798 | 0.5798 | 0.4368 | 0.4368 | 0.5236 | 0.5236 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1759 | 20.0 | 300 | 0.1745 | 0.5228 | 0.5228 | 0.4152 | 0.4152 | 0.6126 | 0.6126 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1505 | 21.0 | 315 | 0.1944 | 0.5519 | 0.5519 | 0.4170 | 0.4170 | 0.5684 | 0.5684 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1467 | 22.0 | 330 | 0.1802 | 0.5313 | 0.5313 | 0.3910 | 0.3910 | 0.5999 | 0.5999 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1441 | 23.0 | 345 | 0.2360 | 0.6081 | 0.6081 | 0.4755 | 0.4755 | 0.4760 | 0.4760 | 0.4783 | 0.0 | 0.5 | 0.4938 | nan |
| 0.1553 | 24.0 | 360 | 0.2129 | 0.5774 | 0.5774 | 0.4539 | 0.4539 | 0.5274 | 0.5274 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1163 | 25.0 | 375 | 0.1780 | 0.5281 | 0.5281 | 0.3952 | 0.3952 | 0.6048 | 0.6048 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1266 | 26.0 | 390 | 0.2163 | 0.5821 | 0.5821 | 0.4569 | 0.4569 | 0.5198 | 0.5198 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1416 | 27.0 | 405 | 0.1829 | 0.5352 | 0.5352 | 0.4082 | 0.4082 | 0.5939 | 0.5939 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1576 | 28.0 | 420 | 0.1930 | 0.5498 | 0.5498 | 0.4126 | 0.4126 | 0.5716 | 0.5716 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.118 | 29.0 | 435 | 0.2070 | 0.5694 | 0.5694 | 0.4378 | 0.4378 | 0.5405 | 0.5405 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1179 | 30.0 | 450 | 0.1927 | 0.5495 | 0.5495 | 0.4174 | 0.4174 | 0.5721 | 0.5721 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-cause-human
|
responsibility-framing
| 2022-03-15T22:58:24Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T22:53:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-cause-human
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-cause-human
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7632
- Rmse: 1.2675
- Rmse Cause::a Causata da un essere umano: 1.2675
- Mae: 0.9299
- Mae Cause::a Causata da un essere umano: 0.9299
- R2: 0.4188
- R2 Cause::a Causata da un essere umano: 0.4188
- Cos: 0.3913
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.4082
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------:|:------:|:---------------------------------------:|:-------:|:--------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0174 | 1.0 | 15 | 1.3796 | 1.7041 | 1.7041 | 1.3614 | 1.3614 | -0.0506 | -0.0506 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 0.9534 | 2.0 | 30 | 1.1173 | 1.5336 | 1.5336 | 1.2624 | 1.2624 | 0.1491 | 0.1491 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.8883 | 3.0 | 45 | 1.0580 | 1.4923 | 1.4923 | 1.2451 | 1.2451 | 0.1943 | 0.1943 | 0.5652 | 0.0 | 0.5 | 0.4957 | nan |
| 0.8215 | 4.0 | 60 | 1.0200 | 1.4653 | 1.4653 | 1.2087 | 1.2087 | 0.2232 | 0.2232 | 0.6522 | 0.0 | 0.5 | 0.5123 | nan |
| 0.744 | 5.0 | 75 | 1.1496 | 1.5556 | 1.5556 | 1.2573 | 1.2573 | 0.1245 | 0.1245 | 0.2174 | 0.0 | 0.5 | 0.3007 | nan |
| 0.7056 | 6.0 | 90 | 0.9641 | 1.4246 | 1.4246 | 1.1763 | 1.1763 | 0.2658 | 0.2658 | 0.4783 | 0.0 | 0.5 | 0.3619 | nan |
| 0.6136 | 7.0 | 105 | 0.8328 | 1.3240 | 1.3240 | 1.0948 | 1.0948 | 0.3658 | 0.3658 | 0.4783 | 0.0 | 0.5 | 0.3628 | nan |
| 0.5185 | 8.0 | 120 | 0.6890 | 1.2043 | 1.2043 | 1.0112 | 1.0112 | 0.4753 | 0.4753 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.5029 | 9.0 | 135 | 1.0380 | 1.4782 | 1.4782 | 1.1215 | 1.1215 | 0.2095 | 0.2095 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan |
| 0.4624 | 10.0 | 150 | 1.1780 | 1.5747 | 1.5747 | 1.2852 | 1.2852 | 0.1029 | 0.1029 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.4098 | 11.0 | 165 | 0.8714 | 1.3544 | 1.3544 | 1.1388 | 1.1388 | 0.3364 | 0.3364 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.348 | 12.0 | 180 | 0.7260 | 1.2362 | 1.2362 | 0.9563 | 0.9563 | 0.4471 | 0.4471 | 0.5652 | 0.0 | 0.5 | 0.4957 | nan |
| 0.3437 | 13.0 | 195 | 0.7241 | 1.2346 | 1.2346 | 0.8998 | 0.8998 | 0.4485 | 0.4485 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.2727 | 14.0 | 210 | 0.9070 | 1.3818 | 1.3818 | 1.1145 | 1.1145 | 0.3093 | 0.3093 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.2762 | 15.0 | 225 | 0.7280 | 1.2380 | 1.2380 | 0.9210 | 0.9210 | 0.4456 | 0.4456 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2396 | 16.0 | 240 | 0.7921 | 1.2912 | 1.2912 | 0.9738 | 0.9738 | 0.3968 | 0.3968 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1955 | 17.0 | 255 | 0.8368 | 1.3272 | 1.3272 | 0.9717 | 0.9717 | 0.3627 | 0.3627 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1928 | 18.0 | 270 | 0.7782 | 1.2799 | 1.2799 | 0.9615 | 0.9615 | 0.4073 | 0.4073 | 0.3043 | 0.0 | 0.5 | 0.3768 | nan |
| 0.1893 | 19.0 | 285 | 0.7594 | 1.2644 | 1.2644 | 0.9441 | 0.9441 | 0.4216 | 0.4216 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2111 | 20.0 | 300 | 0.7230 | 1.2336 | 1.2336 | 0.8953 | 0.8953 | 0.4494 | 0.4494 | 0.3913 | 0.0 | 0.5 | 0.3787 | nan |
| 0.193 | 21.0 | 315 | 0.7836 | 1.2843 | 1.2843 | 0.9577 | 0.9577 | 0.4033 | 0.4033 | 0.3043 | 0.0 | 0.5 | 0.3768 | nan |
| 0.1649 | 22.0 | 330 | 0.7248 | 1.2352 | 1.2352 | 0.9133 | 0.9133 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2182 | 23.0 | 345 | 0.7608 | 1.2655 | 1.2655 | 0.9435 | 0.9435 | 0.4206 | 0.4206 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1534 | 24.0 | 360 | 0.7447 | 1.2520 | 1.2520 | 0.9277 | 0.9277 | 0.4329 | 0.4329 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1362 | 25.0 | 375 | 0.7437 | 1.2512 | 1.2512 | 0.9236 | 0.9236 | 0.4336 | 0.4336 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1391 | 26.0 | 390 | 0.7301 | 1.2397 | 1.2397 | 0.9182 | 0.9182 | 0.4440 | 0.4440 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1679 | 27.0 | 405 | 0.7748 | 1.2770 | 1.2770 | 0.9619 | 0.9619 | 0.4100 | 0.4100 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1491 | 28.0 | 420 | 0.7415 | 1.2493 | 1.2493 | 0.9097 | 0.9097 | 0.4353 | 0.4353 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1559 | 29.0 | 435 | 0.7525 | 1.2586 | 1.2586 | 0.9189 | 0.9189 | 0.4269 | 0.4269 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1784 | 30.0 | 450 | 0.7632 | 1.2675 | 1.2675 | 0.9299 | 0.9299 | 0.4188 | 0.4188 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Simply-divine/finetune_indian_asr
|
Simply-divine
| 2022-03-15T22:57:29Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-14T16:58:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: finetune_indian_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_indian_asr
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-indian-english-enm-700) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4215
- Wer: 0.3403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0566 | 3.45 | 500 | 2.9944 | 1.0 |
| 2.7241 | 6.9 | 1000 | 1.4455 | 0.7654 |
| 0.9755 | 10.34 | 1500 | 0.4299 | 0.4034 |
| 0.4624 | 13.79 | 2000 | 0.3628 | 0.3297 |
| 0.3158 | 17.24 | 2500 | 0.3835 | 0.2952 |
| 0.2604 | 20.69 | 3000 | 0.3802 | 0.2877 |
| 0.2 | 24.14 | 3500 | 0.3842 | 0.2799 |
| 1.7441 | 27.59 | 4000 | 0.4215 | 0.3403 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
responsibility-framing/predict-perception-xlmr-blame-concept
|
responsibility-framing
| 2022-03-15T22:48:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T22:43:10Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-concept
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9414
- Rmse: 0.7875
- Rmse Blame::a Un concetto astratto o un'emozione: 0.7875
- Mae: 0.6165
- Mae Blame::a Un concetto astratto o un'emozione: 0.6165
- R2: 0.2291
- R2 Blame::a Un concetto astratto o un'emozione: 0.2291
- Cos: 0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3509
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un concetto astratto o un'emozione | Mae | Mae Blame::a Un concetto astratto o un'emozione | R2 | R2 Blame::a Un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------------------------------:|:------:|:-----------------------------------------------:|:------:|:----------------------------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0549 | 1.0 | 15 | 1.2093 | 0.8925 | 0.8925 | 0.6659 | 0.6659 | 0.0097 | 0.0097 | -0.3043 | 0.0 | 0.5 | 0.4013 | nan |
| 1.0085 | 2.0 | 30 | 1.2199 | 0.8964 | 0.8964 | 0.6494 | 0.6494 | 0.0010 | 0.0010 | -0.1304 | 0.0 | 0.5 | 0.4515 | nan |
| 1.0131 | 3.0 | 45 | 1.1798 | 0.8815 | 0.8815 | 0.6412 | 0.6412 | 0.0339 | 0.0339 | -0.2174 | 0.0 | 0.5 | 0.2402 | nan |
| 0.9931 | 4.0 | 60 | 1.1726 | 0.8788 | 0.8788 | 0.6370 | 0.6370 | 0.0397 | 0.0397 | -0.1304 | 0.0 | 0.5 | 0.2911 | nan |
| 0.9668 | 5.0 | 75 | 1.1194 | 0.8587 | 0.8587 | 0.5925 | 0.5925 | 0.0833 | 0.0833 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.8759 | 6.0 | 90 | 1.0776 | 0.8425 | 0.8425 | 0.6265 | 0.6265 | 0.1175 | 0.1175 | 0.3043 | 0.0 | 0.5 | 0.4190 | nan |
| 0.8787 | 7.0 | 105 | 1.0513 | 0.8321 | 0.8321 | 0.6087 | 0.6087 | 0.1391 | 0.1391 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.7637 | 8.0 | 120 | 1.0537 | 0.8331 | 0.8331 | 0.6265 | 0.6265 | 0.1372 | 0.1372 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.6568 | 9.0 | 135 | 0.9104 | 0.7744 | 0.7744 | 0.5887 | 0.5887 | 0.2544 | 0.2544 | 0.3043 | 0.0 | 0.5 | 0.3680 | nan |
| 0.6354 | 10.0 | 150 | 0.9055 | 0.7723 | 0.7723 | 0.6222 | 0.6222 | 0.2585 | 0.2585 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.5107 | 11.0 | 165 | 1.0173 | 0.8186 | 0.8186 | 0.6168 | 0.6168 | 0.1669 | 0.1669 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.4598 | 12.0 | 180 | 0.9155 | 0.7765 | 0.7765 | 0.6284 | 0.6284 | 0.2503 | 0.2503 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.3815 | 13.0 | 195 | 0.9255 | 0.7808 | 0.7808 | 0.6140 | 0.6140 | 0.2421 | 0.2421 | 0.1304 | 0.0 | 0.5 | 0.3987 | nan |
| 0.3303 | 14.0 | 210 | 0.8506 | 0.7485 | 0.7485 | 0.6076 | 0.6076 | 0.3035 | 0.3035 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2799 | 15.0 | 225 | 1.0272 | 0.8226 | 0.8226 | 0.6699 | 0.6699 | 0.1588 | 0.1588 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2998 | 16.0 | 240 | 0.9969 | 0.8103 | 0.8103 | 0.6461 | 0.6461 | 0.1836 | 0.1836 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.3131 | 17.0 | 255 | 0.9066 | 0.7727 | 0.7727 | 0.5849 | 0.5849 | 0.2576 | 0.2576 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.2234 | 18.0 | 270 | 0.8741 | 0.7588 | 0.7588 | 0.5953 | 0.5953 | 0.2842 | 0.2842 | 0.2174 | 0.0 | 0.5 | 0.3303 | nan |
| 0.2481 | 19.0 | 285 | 1.0022 | 0.8125 | 0.8125 | 0.6549 | 0.6549 | 0.1793 | 0.1793 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2333 | 20.0 | 300 | 0.9238 | 0.7801 | 0.7801 | 0.6180 | 0.6180 | 0.2435 | 0.2435 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2407 | 21.0 | 315 | 0.9868 | 0.8062 | 0.8062 | 0.6457 | 0.6457 | 0.1919 | 0.1919 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2122 | 22.0 | 330 | 0.9514 | 0.7916 | 0.7916 | 0.6204 | 0.6204 | 0.2209 | 0.2209 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2162 | 23.0 | 345 | 0.9227 | 0.7796 | 0.7796 | 0.6053 | 0.6053 | 0.2444 | 0.2444 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.1739 | 24.0 | 360 | 0.9147 | 0.7762 | 0.7762 | 0.5979 | 0.5979 | 0.2510 | 0.2510 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.2084 | 25.0 | 375 | 0.9645 | 0.7970 | 0.7970 | 0.6296 | 0.6296 | 0.2102 | 0.2102 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.1702 | 26.0 | 390 | 0.9587 | 0.7946 | 0.7946 | 0.6279 | 0.6279 | 0.2149 | 0.2149 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2146 | 27.0 | 405 | 0.9519 | 0.7918 | 0.7918 | 0.6273 | 0.6273 | 0.2205 | 0.2205 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.1645 | 28.0 | 420 | 0.9398 | 0.7868 | 0.7868 | 0.6181 | 0.6181 | 0.2304 | 0.2304 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.2052 | 29.0 | 435 | 0.9492 | 0.7907 | 0.7907 | 0.6228 | 0.6228 | 0.2227 | 0.2227 | 0.0435 | 0.0 | 0.5 | 0.2862 | nan |
| 0.147 | 30.0 | 450 | 0.9414 | 0.7875 | 0.7875 | 0.6165 | 0.6165 | 0.2291 | 0.2291 | 0.1304 | 0.0 | 0.5 | 0.3509 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-victim
|
responsibility-framing
| 2022-03-15T22:38:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T22:33:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-victim
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1098
- Rmse: 0.6801
- Rmse Blame::a La vittima: 0.6801
- Mae: 0.5617
- Mae Blame::a La vittima: 0.5617
- R2: -1.5910
- R2 Blame::a La vittima: -1.5910
- Cos: -0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3333
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0422 | 1.0 | 15 | 0.4952 | 0.4542 | 0.4542 | 0.4095 | 0.4095 | -0.1560 | -0.1560 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0434 | 2.0 | 30 | 0.4851 | 0.4496 | 0.4496 | 0.4054 | 0.4054 | -0.1324 | -0.1324 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.038 | 3.0 | 45 | 0.4513 | 0.4337 | 0.4337 | 0.3885 | 0.3885 | -0.0536 | -0.0536 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0151 | 4.0 | 60 | 0.4395 | 0.4280 | 0.4280 | 0.3840 | 0.3840 | -0.0262 | -0.0262 | -0.1304 | 0.0 | 0.5 | 0.2715 | nan |
| 0.9727 | 5.0 | 75 | 0.4490 | 0.4325 | 0.4325 | 0.3811 | 0.3811 | -0.0482 | -0.0482 | 0.2174 | 0.0 | 0.5 | 0.3338 | nan |
| 0.9733 | 6.0 | 90 | 0.4540 | 0.4349 | 0.4349 | 0.3860 | 0.3860 | -0.0598 | -0.0598 | -0.2174 | 0.0 | 0.5 | 0.3248 | nan |
| 0.9396 | 7.0 | 105 | 0.4501 | 0.4331 | 0.4331 | 0.3849 | 0.3849 | -0.0508 | -0.0508 | 0.0435 | 0.0 | 0.5 | 0.2609 | nan |
| 0.8759 | 8.0 | 120 | 0.4597 | 0.4377 | 0.4377 | 0.3849 | 0.3849 | -0.0731 | -0.0731 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.8768 | 9.0 | 135 | 0.4575 | 0.4366 | 0.4366 | 0.3784 | 0.3784 | -0.0680 | -0.0680 | 0.4783 | 0.0 | 0.5 | 0.4615 | nan |
| 0.8312 | 10.0 | 150 | 0.5363 | 0.4727 | 0.4727 | 0.4071 | 0.4071 | -0.2520 | -0.2520 | -0.0435 | 0.0 | 0.5 | 0.2733 | nan |
| 0.7296 | 11.0 | 165 | 0.5291 | 0.4696 | 0.4696 | 0.4057 | 0.4057 | -0.2353 | -0.2353 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.7941 | 12.0 | 180 | 0.5319 | 0.4708 | 0.4708 | 0.4047 | 0.4047 | -0.2417 | -0.2417 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6486 | 13.0 | 195 | 0.6787 | 0.5318 | 0.5318 | 0.4516 | 0.4516 | -0.5846 | -0.5846 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6241 | 14.0 | 210 | 1.0146 | 0.6502 | 0.6502 | 0.5580 | 0.5580 | -1.3687 | -1.3687 | -0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.5868 | 15.0 | 225 | 0.7164 | 0.5464 | 0.5464 | 0.4682 | 0.4682 | -0.6725 | -0.6725 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5305 | 16.0 | 240 | 0.9064 | 0.6146 | 0.6146 | 0.5173 | 0.5173 | -1.1161 | -1.1161 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.495 | 17.0 | 255 | 1.3860 | 0.7600 | 0.7600 | 0.6433 | 0.6433 | -2.2358 | -2.2358 | -0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.566 | 18.0 | 270 | 0.7618 | 0.5634 | 0.5634 | 0.4730 | 0.4730 | -0.7785 | -0.7785 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.4305 | 19.0 | 285 | 0.8849 | 0.6072 | 0.6072 | 0.5048 | 0.5048 | -1.0659 | -1.0659 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5108 | 20.0 | 300 | 0.7376 | 0.5544 | 0.5544 | 0.4716 | 0.4716 | -0.7220 | -0.7220 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.44 | 21.0 | 315 | 1.1611 | 0.6956 | 0.6956 | 0.5921 | 0.5921 | -1.7108 | -1.7108 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.395 | 22.0 | 330 | 1.3004 | 0.7361 | 0.7361 | 0.6078 | 0.6078 | -2.0360 | -2.0360 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3945 | 23.0 | 345 | 0.9376 | 0.6251 | 0.6251 | 0.5272 | 0.5272 | -1.1890 | -1.1890 | -0.2174 | 0.0 | 0.5 | 0.3188 | nan |
| 0.3093 | 24.0 | 360 | 1.3586 | 0.7524 | 0.7524 | 0.6219 | 0.6219 | -2.1719 | -2.1719 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.2676 | 25.0 | 375 | 1.2200 | 0.7130 | 0.7130 | 0.5994 | 0.5994 | -1.8484 | -1.8484 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3257 | 26.0 | 390 | 1.2235 | 0.7140 | 0.7140 | 0.5900 | 0.5900 | -1.8564 | -1.8564 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.4004 | 27.0 | 405 | 1.0978 | 0.6763 | 0.6763 | 0.5624 | 0.5624 | -1.5629 | -1.5629 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.283 | 28.0 | 420 | 1.1454 | 0.6909 | 0.6909 | 0.5697 | 0.5697 | -1.6742 | -1.6742 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3326 | 29.0 | 435 | 1.1214 | 0.6836 | 0.6836 | 0.5646 | 0.5646 | -1.6181 | -1.6181 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.2632 | 30.0 | 450 | 1.1098 | 0.6801 | 0.6801 | 0.5617 | 0.5617 | -1.5910 | -1.5910 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
krinal214/bert-all
|
krinal214
| 2022-03-15T21:02:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-15T19:38:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: bert-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-all
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1556 | 1.0 | 3552 | 0.5985 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
huggingtweets/theshiftnews
|
huggingtweets
| 2022-03-15T20:56:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-15T20:56:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/theshiftnews/1647377809961/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1318831968352612352/blMpdUu4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Shift News</div>
<div style="text-align: center; font-size: 14px;">@theshiftnews</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Shift News.
| Data | The Shift News |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 446 |
| Short tweets | 43 |
| Tweets kept | 2727 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k4siv5q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theshiftnews's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theshiftnews')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/hampshireomen
|
huggingtweets
| 2022-03-15T20:52:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/hampshireomen/1647377480803/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1111434706745069575/7L1hshMt_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">the omen is cringe tbh</div>
<div style="text-align: center; font-size: 14px;">@hampshireomen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from the omen is cringe tbh.
| Data | the omen is cringe tbh |
| --- | --- |
| Tweets downloaded | 1462 |
| Retweets | 68 |
| Short tweets | 109 |
| Tweets kept | 1285 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1792rc86/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hampshireomen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y440us5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y440us5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hampshireomen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Rustem/distilroberta-base-trainedmodel
|
Rustem
| 2022-03-15T19:32:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-15T19:28:05Z |
---
license: apache-2.0
---
|
Ebtihal/AraBertMo_base_V4
|
Ebtihal
| 2022-03-15T19:13:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architechture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V4' model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
this model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 4 | 64 | 2500 | 5h 10m 20s | 7.6544 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V4")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V4")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
Ebtihal/AraBertMo_base_V5
|
Ebtihal
| 2022-03-15T19:12:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"Fill-Mask",
"ar",
"dataset:OSCAR",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architechture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V5' model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
this model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 5 | 64 | 3910 | 6h 49m 59s | 7.4599 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V5")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V5")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
bitsanlp/Homophobia-Transphobia-v2-mBERT-EDA
|
bitsanlp
| 2022-03-15T17:31:42Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T16:43:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Homophobia-Transphobia-v2-mBERT-EDA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Homophobia-Transphobia-v2-mBERT-EDA
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- Accuracy: 0.9317
- F1: 0.4498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1699 | 1.0 | 189 | 0.4125 | 0.9229 | 0.4634 |
| 0.0387 | 2.0 | 378 | 0.4658 | 0.9229 | 0.3689 |
| 0.0148 | 3.0 | 567 | 0.5250 | 0.9355 | 0.4376 |
| 0.0005 | 4.0 | 756 | 0.5336 | 0.9317 | 0.4531 |
| 0.0016 | 5.0 | 945 | 0.5401 | 0.9317 | 0.4498 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
bitsanlp/H-T_v2_mBERT_with_EDA
|
bitsanlp
| 2022-03-15T16:34:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-15T16:34:36Z |
---
license: apache-2.0
---
|
smartiros/BERT_for_sentiment_5k_2pcs_sampled_airlines_tweets
|
smartiros
| 2022-03-15T16:27:13Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T16:26:59Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tmpny35efxx
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmpny35efxx
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1996
- Train Accuracy: 0.9348
- Validation Loss: 0.8523
- Validation Accuracy: 0.7633
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5865 | 0.7626 | 0.5505 | 0.8010 | 0 |
| 0.1996 | 0.9348 | 0.8523 | 0.7633 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
|
Neulvo/bert-finetuned-ner
|
Neulvo
| 2022-03-15T15:50:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-15T15:26:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357509521443947
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9433269343126617
- name: Accuracy
type: accuracy
value: 0.9861953258374051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0793
- Precision: 0.9358
- Recall: 0.9510
- F1: 0.9433
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0247 | 1.0 | 1756 | 0.0798 | 0.9269 | 0.9435 | 0.9351 | 0.9840 |
| 0.0136 | 2.0 | 3512 | 0.0776 | 0.9309 | 0.9495 | 0.9401 | 0.9857 |
| 0.0097 | 3.0 | 5268 | 0.0793 | 0.9358 | 0.9510 | 0.9433 | 0.9862 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
torbenal/MiniLMv2-L6-H384-RoBERTa-Large
|
torbenal
| 2022-03-15T15:30:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-15T15:01:00Z |
# MiniLM v2
Microsoft's MiniLM v2 L6 H384 distilled from RoBERTa-Large \
Found [here](https://github.com/microsoft/unilm/tree/master/minilm)
|
public-data/StyleSwin
|
public-data
| 2022-03-15T14:39:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-15T14:29:57Z |
# StyleSwin
- Repo: https://github.com/microsoft/StyleSwin
- https://drive.google.com/file/d/1OjYZ1zEWGNdiv0RFKv7KhXRmYko72LjO/view?usp=sharing
- https://drive.google.com/file/d/1HF0wFNuz1WFrqGEbPhOXjL4QrY05Zu_m/view?usp=sharing
- https://drive.google.com/file/d/1YtIJOgLFfkaMI_KL2gBQNABFb1cwOzvM/view?usp=sharing
- https://drive.google.com/file/d/17-ILwzLBoHq4HTdAPeaCug7iBvxKWkvp/view?usp=sharing
- https://drive.google.com/file/d/1y3wkykjvCbteTaGTRF8EedkG-N1Z8jFf/view?usp=sharing
|
mansidw/finetuning-sentiment-model-12000-samples
|
mansidw
| 2022-03-15T09:40:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T19:40:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name:
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-12000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Cedille/fr-boris
|
Cedille
| 2022-03-15T08:36:54Z | 2,990 | 39 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: fr
license: mit
tags:
- pytorch
- causal-lm
datasets:
- c4
---
# Cedille AI
Cedille is a project to bring large language models to non-English languages.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. We started training from GPT-J, which has been trained on [The Pile](https://pile.eleuther.ai/). As a consequence the model still has good performance in English language. Boris makes use of the unmodified GPT-2 tokenizer.
Boris is named after the great French writer [Boris Vian](https://en.wikipedia.org/wiki/Boris_Vian).
# How do I test Cedille?
For the time being, the easiest way to test the model is to use our [publicly accessible playground](https://en.cedille.ai/).
Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@cedille.ai.
## 📊 Cedille paper
Our paper is out now! https://arxiv.org/abs/2202.03371
Thanks for citing our work if you make use of Cedille
```bibtex
@misc{muller2022cedille,
title={Cedille: A large autoregressive French language model},
author={Martin M{\"{u}}ller and Florian Laurent},
year={2022},
eprint={2202.03371},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
For any custom development please contact us at hello@cedille.ai.
## Links
* [Official website](https://en.cedille.ai/)
* [Blog](https://en.cedille.ai/blog)
* [GitHub](https://github.com/coteries/cedille-ai)
* [Twitter](https://twitter.com/CedilleAI)
|
mjc00/distilbert-base-uncased-finetuned-emotion
|
mjc00
| 2022-03-15T05:48:00Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T05:23:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.924132235882821
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2153
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7986 | 1.0 | 250 | 0.3021 | 0.91 | 0.9078 |
| 0.2386 | 2.0 | 500 | 0.2153 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
somosnlp-hackathon-2022/poem-gen-gpt2-small-spanish
|
somosnlp-hackathon-2022
| 2022-03-15T04:32:41Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-15T04:09:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: poem-gen-gpt2-small-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-gpt2-small-spanish
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.1366
- eval_runtime: 25.1623
- eval_samples_per_second: 43.676
- eval_steps_per_second: 10.929
- epoch: 0.78
- step: 2040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.