repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
chrisvinsen/xlsr-wav2vec2-2
|
chrisvinsen
|
wav2vec2
| 9 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,908 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlsr-wav2vec2-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5884
- Wer: 0.4301
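The card does not include an inference snippet; a minimal sketch, assuming the repository ships a matching `Wav2Vec2Processor` (vocabulary plus feature extractor) and that the input audio is 16 kHz mono, could look like this:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("chrisvinsen/xlsr-wav2vec2-2")
model = Wav2Vec2ForCTC.from_pretrained("chrisvinsen/xlsr-wav2vec2-2")

# "sample.wav" is a placeholder path to a 16 kHz mono recording
speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats/blanks
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```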
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.6058 | 1.38 | 400 | 3.1894 | 1.0 |
| 2.3145 | 2.76 | 800 | 0.7193 | 0.7976 |
| 0.6737 | 4.14 | 1200 | 0.5338 | 0.6056 |
| 0.4651 | 5.52 | 1600 | 0.5699 | 0.6007 |
| 0.3968 | 6.9 | 2000 | 0.4608 | 0.5221 |
| 0.3281 | 8.28 | 2400 | 0.5264 | 0.5209 |
| 0.2937 | 9.65 | 2800 | 0.5366 | 0.5096 |
| 0.2619 | 11.03 | 3200 | 0.4902 | 0.5021 |
| 0.2394 | 12.41 | 3600 | 0.4706 | 0.4908 |
| 0.2139 | 13.79 | 4000 | 0.5526 | 0.4871 |
| 0.2034 | 15.17 | 4400 | 0.5396 | 0.5108 |
| 0.1946 | 16.55 | 4800 | 0.4959 | 0.4866 |
| 0.1873 | 17.93 | 5200 | 0.4898 | 0.4877 |
| 0.1751 | 19.31 | 5600 | 0.5488 | 0.4932 |
| 0.1668 | 20.69 | 6000 | 0.5645 | 0.4986 |
| 0.1638 | 22.07 | 6400 | 0.5367 | 0.4946 |
| 0.1564 | 23.45 | 6800 | 0.5282 | 0.4898 |
| 0.1566 | 24.83 | 7200 | 0.5489 | 0.4841 |
| 0.1522 | 26.21 | 7600 | 0.5439 | 0.4821 |
| 0.1378 | 27.59 | 8000 | 0.5796 | 0.4866 |
| 0.1459 | 28.96 | 8400 | 0.5603 | 0.4875 |
| 0.1406 | 30.34 | 8800 | 0.6773 | 0.5005 |
| 0.1298 | 31.72 | 9200 | 0.5858 | 0.4827 |
| 0.1268 | 33.1 | 9600 | 0.6007 | 0.4790 |
| 0.1204 | 34.48 | 10000 | 0.5716 | 0.4734 |
| 0.113 | 35.86 | 10400 | 0.5866 | 0.4748 |
| 0.1088 | 37.24 | 10800 | 0.5790 | 0.4752 |
| 0.1074 | 38.62 | 11200 | 0.5966 | 0.4721 |
| 0.1018 | 40.0 | 11600 | 0.5720 | 0.4668 |
| 0.0968 | 41.38 | 12000 | 0.5826 | 0.4698 |
| 0.0874 | 42.76 | 12400 | 0.5937 | 0.4634 |
| 0.0843 | 44.14 | 12800 | 0.6056 | 0.4640 |
| 0.0822 | 45.52 | 13200 | 0.5531 | 0.4569 |
| 0.0806 | 46.9 | 13600 | 0.5669 | 0.4484 |
| 0.072 | 48.28 | 14000 | 0.5683 | 0.4484 |
| 0.0734 | 49.65 | 14400 | 0.5735 | 0.4437 |
| 0.0671 | 51.03 | 14800 | 0.5455 | 0.4394 |
| 0.0617 | 52.41 | 15200 | 0.5838 | 0.4365 |
| 0.0607 | 53.79 | 15600 | 0.6233 | 0.4397 |
| 0.0593 | 55.17 | 16000 | 0.5649 | 0.4340 |
| 0.0551 | 56.55 | 16400 | 0.5923 | 0.4392 |
| 0.0503 | 57.93 | 16800 | 0.5858 | 0.4325 |
| 0.0496 | 59.31 | 17200 | 0.5884 | 0.4301 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
80f82bb5453bc777c52d768d14f011b9
|
Geotrend/distilbert-base-en-fr-ar-cased
|
Geotrend
|
distilbert
| 6 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['multilingual']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,233 | false |
# distilbert-base-en-fr-ar-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-ar-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
2864e809efab9efe5b01937610a0903d
|
nypnop/distilbert-base-uncased-finetuned-bbc-news
|
nypnop
|
distilbert
| 21 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,340 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-bbc-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0107
- Accuracy: 0.9955
- F1: 0.9955
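No usage example is provided in the card; a minimal sketch with the `transformers` pipeline (label names are whatever the fine-tuning config stored, which the card does not list) might be:
```python
from transformers import pipeline

# Loads the fine-tuned checkpoint and its tokenizer from the Hub
classifier = pipeline(
    "text-classification",
    model="nypnop/distilbert-base-uncased-finetuned-bbc-news",
)

print(classifier("The central bank raised interest rates for the third time this year."))
```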
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3463 | 0.84 | 500 | 0.0392 | 0.9865 | 0.9865 |
| 0.0447 | 1.68 | 1000 | 0.0107 | 0.9955 | 0.9955 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
14ef8338bb9969a7980f1f73a2317880
|
botika/distilbert-base-uncased-finetuned-squad
|
botika
|
distilbert
| 18 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,279 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1500
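The card gives no inference example; a minimal sketch with the question-answering pipeline (the question and context strings below are illustrative) could be:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="botika/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What was the model fine-tuned on?",
    context="The checkpoint is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```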
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3149 | 1.0 | 2767 | 1.2079 |
| 1.053 | 2.0 | 5534 | 1.1408 |
| 0.8809 | 3.0 | 8301 | 1.1500 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
d1393c93cdccce990503e8c9a4c3d5bb
|
radhe2205/finetuning-sentiment-model-3000-samples
|
radhe2205
|
distilbert
| 13 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7339
- Accuracy: 0.6567
- F1: 0.6979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
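For readers who want to reproduce a comparable run, the list above roughly maps onto a `TrainingArguments` object like the following sketch; the original training script is not part of this card, so the output directory and any omitted options are assumptions:
```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; not the author's script.
training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```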
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ff33f44ff1221d205c3f0a93eb05aee2
|
subtlegradient/distilbert-base-uncased-finetuned-cola
|
subtlegradient
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,155 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5180
- eval_matthews_correlation: 0.4063
- eval_runtime: 0.8532
- eval_samples_per_second: 1222.419
- eval_steps_per_second: 77.353
- epoch: 1.0
- step: 535
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu116
- Datasets 2.5.1
- Tokenizers 0.12.1
|
26b5ea432a6d1b6175b1006da642754d
|
lmqg/mbart-large-cc25-squad-qg
|
lmqg
|
mbart
| 35 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 6,056 | false |
# Model Card of `lmqg/mbart-large-cc25-squad-qg`
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/mbart-large-cc25-squad-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.36 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 39.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 29.76 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 23.03 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 50.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metrics (Question Generation, Out-of-Domain)***
| Dataset | Type | BERTScore| Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|---------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | default | 11.05 | 0.0 | 1.05 | 44.94 | 3.4 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_dequad.default.json) |
| [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | default | 60.73 | 0.57 | 5.27 | 48.76 | 18.99 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) |
| [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | default | 16.47 | 0.02 | 1.55 | 45.35 | 5.13 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) |
| [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | default | 41.46 | 0.48 | 3.84 | 47.28 | 13.25 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) |
| [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | default | 19.89 | 0.06 | 1.74 | 45.51 | 6.11 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json) |
| [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | default | 31.67 | 0.38 | 3.06 | 46.59 | 10.34 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json) |
| [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | default | 26.19 | 0.18 | 2.65 | 46.09 | 8.34 | [link](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-squad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
d736016ef9f40eb644e9538d5f0c0366
|
gokuls/distilbert_sa_GLUE_Experiment_data_aug_mnli_96
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,642 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_mnli_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9477
- Accuracy: 0.5655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.9142 | 1.0 | 31440 | 0.9328 | 0.5686 |
| 0.8099 | 2.0 | 62880 | 0.9523 | 0.5752 |
| 0.7371 | 3.0 | 94320 | 1.0072 | 0.5737 |
| 0.6756 | 4.0 | 125760 | 1.0606 | 0.5750 |
| 0.6229 | 5.0 | 157200 | 1.1116 | 0.5739 |
| 0.5784 | 6.0 | 188640 | 1.1396 | 0.5795 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
a8c79827a776abd8621ce7222fb53d93
|
Alred/bart-base-finetuned-summarization-cnn-ver1.3
|
Alred
|
bart
| 13 | 6 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,221 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-summarization-cnn-ver1.3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3148
- Bertscore-mean-precision: 0.8890
- Bertscore-mean-recall: 0.8603
- Bertscore-mean-f1: 0.8742
- Bertscore-median-precision: 0.8874
- Bertscore-median-recall: 0.8597
- Bertscore-median-f1: 0.8726
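The card omits a usage snippet; a minimal sketch with the summarization pipeline (the article text and length limits below are illustrative) could be:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Alred/bart-base-finetuned-summarization-cnn-ver1.3",
)

article = (
    "The city council voted on Tuesday to expand the bike-lane network, "
    "citing a sharp rise in cycling commuters over the past two years "
    "and a series of public consultations held since the spring."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False))
```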
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 2.3735 | 1.0 | 5742 | 2.2581 | 0.8831 | 0.8586 | 0.8705 | 0.8834 | 0.8573 | 0.8704 |
| 1.744 | 2.0 | 11484 | 2.2479 | 0.8920 | 0.8620 | 0.8765 | 0.8908 | 0.8603 | 0.8752 |
| 1.3643 | 3.0 | 17226 | 2.3148 | 0.8890 | 0.8603 | 0.8742 | 0.8874 | 0.8597 | 0.8726 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
cccdae62a6205c761d3e377f0aed1360
|
susnato/xlm-roberta-base-finetuned-panx-en
|
susnato
|
xlm-roberta
| 9 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4923
- F1: 0.7205
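No inference example is given; since PAN-X is a named-entity task, a minimal sketch with the token-classification pipeline (entity label names come from the checkpoint's config and are not listed in the card) might look like:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="susnato/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)

print(ner("Alice Smith moved from London to work for Acme Corp."))
```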
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9902 | 1.0 | 148 | 0.6183 | 0.5830 |
| 0.4903 | 2.0 | 296 | 0.5232 | 0.6675 |
| 0.3272 | 3.0 | 444 | 0.4923 | 0.7205 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
d26556deebf9f299d7bda68ca790c879
|
RamAnanth1/distilgpt2-sd-prompts
|
RamAnanth1
|
gpt2
| 11 | 10 |
transformers
| 2 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,500 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-sd-prompts
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on [Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts).
It achieves the following results on the evaluation set:
- Loss: 0.9450
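The card has no generation example; a minimal sketch that samples Stable-Diffusion-style prompts from a short seed text (the seed text and sampling settings are illustrative) could be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="RamAnanth1/distilgpt2-sd-prompts")

# The model continues the seed text into a full prompt
outputs = generator(
    "a portrait of a cyberpunk city at night",
    max_new_tokens=60,
    do_sample=True,
    num_return_sequences=2,
)
for out in outputs:
    print(out["generated_text"])
```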
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5122 | 1.93 | 500 | 1.5211 |
| 1.2912 | 3.86 | 1000 | 1.1045 |
| 0.9313 | 5.79 | 1500 | 0.9704 |
| 0.7744 | 7.72 | 2000 | 0.9450 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
cbbcd9860376d0b4bf1d301676fd2d99
|
google/multiberts-seed_1-step_1900k
|
google
|
bert
| 8 | 16 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1900k']
| false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1900k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #1, captured at step 1900k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1900k')
model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1900k')
model = BertModel.from_pretrained("google/multiberts-seed_1-step_1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
6069669f3ef6f7a6648764b71848d75a
|
cyk1337/ddpm-butterflies-128
|
cyk1337
| null | 16 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,229 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
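Until the TODO above is filled in, a minimal sketch for sampling from this checkpoint with 🤗 Diffusers, assuming it is stored as a standard unconditional `DDPMPipeline`, could be:
```python
from diffusers import DDPMPipeline

# Assumes the repository hosts a standard unconditional DDPM pipeline
pipeline = DDPMPipeline.from_pretrained("cyk1337/ddpm-butterflies-128")

# One sampling pass over the diffusion schedule; returns PIL images
image = pipeline(batch_size=1).images[0]
image.save("butterfly.png")
```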
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/cyk1337/ddpm-butterflies-128/tensorboard?#scalars)
|
8dfee4d60ff954831d3837838e32a64b
|
huangjia/xlm-roberta-base-finetuned-panx-de-fr
|
huangjia
|
xlm-roberta
| 10 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,315 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1584
- F1: 0.8537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 358 | 0.1776 | 0.8263 |
| 0.2394 | 2.0 | 716 | 0.1599 | 0.8447 |
| 0.2394 | 3.0 | 1074 | 0.1584 | 0.8537 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.10.3
|
ae8030b61917ee94266ad96df97894c3
|
bubuxiong/whisper-small-hi
|
bubuxiong
|
whisper
| 11 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,452 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4281
- Wer: 31.9521
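The card provides no inference snippet; a minimal sketch with the ASR pipeline (the audio path is a placeholder, and the transcription language depends on the undocumented fine-tuning data) could be:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bubuxiong/whisper-small-hi",
)

# "sample.wav" is a placeholder path; the pipeline decodes the file with ffmpeg
print(asr("sample.wav"))
```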
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0879 | 2.44 | 1000 | 0.2908 | 33.7933 |
| 0.0216 | 4.89 | 2000 | 0.3440 | 33.0229 |
| 0.0014 | 7.33 | 3000 | 0.4063 | 32.2611 |
| 0.0005 | 9.78 | 4000 | 0.4281 | 31.9521 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
d0d59b5f1903c2d20eef4acadc4cd524
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_cola_128
|
gokuls
|
mobilebert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,993 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_cola_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6807
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.8228 | 1.0 | 67 | 0.6863 | 0.0 |
| 0.7969 | 2.0 | 134 | 0.6870 | 0.0 |
| 0.7965 | 3.0 | 201 | 0.6834 | 0.0 |
| 0.795 | 4.0 | 268 | 0.6835 | 0.0 |
| 0.7939 | 5.0 | 335 | 0.6807 | 0.0 |
| 0.7451 | 6.0 | 402 | 0.6986 | 0.0672 |
| 0.6395 | 7.0 | 469 | 0.7051 | 0.0875 |
| 0.6042 | 8.0 | 536 | 0.7293 | 0.1094 |
| 0.5756 | 9.0 | 603 | 0.7376 | 0.1173 |
| 0.5558 | 10.0 | 670 | 0.7879 | 0.1123 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
1a1f0de73aec6edd83ffc2c05348ab7c
|
ALM/whisper-da-small-augmented
|
ALM
|
whisper
| 20 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['da']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,568 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Danish - Robust
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 da dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7926
- Wer: 32.3251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0232 | 15.15 | 1000 | 0.7538 | 35.5813 |
| 0.0061 | 30.3 | 2000 | 0.7933 | 34.3766 |
| 0.0016 | 45.45 | 3000 | 0.7993 | 33.5823 |
| 0.0003 | 60.61 | 4000 | 0.7986 | 31.6097 |
| 0.0002 | 75.76 | 5000 | 0.7901 | 32.1357 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
bea20c5dec0dbdd2d3f87aab39aafccd
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_wnli_256
|
gokuls
|
mobilebert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,593 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_wnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3452
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3473 | 1.0 | 5 | 0.3452 | 0.5634 |
| 0.3469 | 2.0 | 10 | 0.3464 | 0.5634 |
| 0.3467 | 3.0 | 15 | 0.3465 | 0.5634 |
| 0.3465 | 4.0 | 20 | 0.3456 | 0.5634 |
| 0.3466 | 5.0 | 25 | 0.3453 | 0.5634 |
| 0.3466 | 6.0 | 30 | 0.3455 | 0.5634 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
cbe4669e6d2ef939d5a65a7fb2a55412
|
StatsGary/audio-diffusion-electro-rock
|
StatsGary
| null | 7 | 1 |
diffusers
| 0 | null | true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-audio-generation', 'diffusion-models-class']
| false | true | true | 500 | false |
# Model Card for Unit 4 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional audio generation of music in the rock genre.
## Usage
```python
from IPython.display import Audio, display
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("StatsGary/audio-diffusion-electro-rock")
output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
|
a61e8fa82795b692a6b0fc9de3a03fcf
|
nejox/roberta-base-squad2-coffee20230108
|
nejox
|
roberta
| 23 | 4 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,925 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-coffee20230108
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 90 | 1.6912 |
| 1.8817 | 2.0 | 180 | 1.7054 |
| 1.3233 | 3.0 | 270 | 1.6376 |
| 0.9894 | 4.0 | 360 | 2.1005 |
| 0.7526 | 5.0 | 450 | 2.7104 |
| 0.6553 | 6.0 | 540 | 2.2928 |
| 0.5512 | 7.0 | 630 | 2.6380 |
| 0.4148 | 8.0 | 720 | 2.8010 |
| 0.2964 | 9.0 | 810 | 3.1167 |
| 0.2538 | 10.0 | 900 | 3.5313 |
| 0.2538 | 11.0 | 990 | 3.6620 |
| 0.1918 | 12.0 | 1080 | 4.1138 |
| 0.1363 | 13.0 | 1170 | 4.0901 |
| 0.1606 | 14.0 | 1260 | 4.2286 |
| 0.1162 | 15.0 | 1350 | 4.2379 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.11.0+cu113
- Datasets 2.8.0
- Tokenizers 0.13.2
|
5b180985e250fadbe7ddff2489106f74
|
annahaz/xlm-roberta-base-misogyny-sexism-indomain-mix-trans
|
annahaz
|
xlm-roberta
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,884 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-misogyny-sexism-indomain-mix-trans
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8397
- Accuracy: 0.797
- F1: 0.7691
- Precision: 0.8918
- Recall: 0.676
- Mae: 0.203
- Tn: 459
- Fp: 41
- Fn: 162
- Tp: 338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-----:|:---:|:--:|:---:|:---:|
| 0.2914 | 1.0 | 2711 | 0.5846 | 0.794 | 0.7726 | 0.8621 | 0.7 | 0.206 | 444 | 56 | 150 | 350 |
| 0.2836 | 2.0 | 5422 | 0.6752 | 0.785 | 0.7491 | 0.8992 | 0.642 | 0.215 | 464 | 36 | 179 | 321 |
| 0.2516 | 3.0 | 8133 | 0.7715 | 0.769 | 0.7214 | 0.9088 | 0.598 | 0.231 | 470 | 30 | 201 | 299 |
| 0.2047 | 4.0 | 10844 | 0.8397 | 0.797 | 0.7691 | 0.8918 | 0.676 | 0.203 | 459 | 41 | 162 | 338 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
0c8b5bd890e5b5d8ed1e8d28d13fe573
|
RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab
|
RuiqianLi
|
wav2vec2
| 17 | 5 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['li_singlish']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,671 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-singlish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the li_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7199
- Wer: 0.3337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2984 | 4.76 | 400 | 2.9046 | 1.0 |
| 1.1895 | 9.52 | 800 | 0.7725 | 0.4535 |
| 0.1331 | 14.28 | 1200 | 0.7068 | 0.3847 |
| 0.0701 | 19.05 | 1600 | 0.7547 | 0.3617 |
| 0.0509 | 23.8 | 2000 | 0.7123 | 0.3444 |
| 0.0385 | 28.57 | 2400 | 0.7199 | 0.3337 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
e0f98a328feb803503a5185ad9f43688
|
avuhong/DB_Hokusai_Monet_style
|
avuhong
| null | 14 | 14 |
diffusers
| 3 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape', 'classical-art']
| false | true | true | 1,246 | false |
# DreamBooth model for the painting of mixed style between Claude-Monet and Hokusai
This is a Stable Diffusion model fine-tuned to generate mixed styled paintings between Claude-Monet and Hokusai taught to Stable Diffusion with DreamBooth.
It can be used by modifying the `instance_prompt`: **a painting in $M## style of**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on paintings of both Claude-Monet and Hokusai.
## Examples
Since this model is aimed more at landscape painting, the image size matters; I found that 512×1024 usually gave interesting results.
Check out this gallery for more generated images:
https://www.vuhongai.com/classicalart-ai
## Usage
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained('avuhong/DB_Hokusai_Monet_style')
prompt = "a painting in $M## style of a fishing village under a cherry blossom forest at sunset"
image = pipe(prompt,
num_inference_steps=200,
guidance_scale=5,
height=512, width=1024,
).images[0]
image
```
|
461d59f6f443fff7788f3292c9c7c64a
|
fabiogr/opus-mt-en-de-finetuned-en-to-de-wd01-fp16false
|
fabiogr
|
marian
| 11 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wmt16']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 976 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de-wd01-fp16false
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
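Since the card gives no results or usage beyond the summary above, a minimal translation sketch with the `transformers` pipeline (the input sentence is illustrative) could be:
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_de",
    model="fabiogr/opus-mt-en-de-finetuned-en-to-de-wd01-fp16false",
)

print(translator("The committee approved the proposal after a short debate."))
```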
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
300b4dda873387770d3e27311e67e068
|
rusoloco73/peronv2
|
rusoloco73
| null | 29 | 2 |
diffusers
| 0 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,830 | false |
### PeronV2 on Stable Diffusion via Dreambooth
#### model by rusoloco73
This is the Stable Diffusion model fine-tuned on the PeronV2 concept, taught to Stable Diffusion with DreamBooth.
It can be used by modifying the `instance_prompt`: **a photo of sks peron**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:











|
29b654cd8d5d80849b565322b6168087
|
tsantosh7/Bailii-Roberta
|
tsantosh7
|
roberta
| 12 | 6 |
transformers
| 2 |
fill-mask
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['fill-mask']
| false | true | true | 2,182 | false |
# Pre-trained Language Model for England and Wales Court of Appeal (Criminal Division) Decisions
## Introduction
Research into bias in criminal court decisions needs the support of natural language processing tools.
Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of court decision texts.
We used the text from the [Bailii website](https://www.bailii.org/ew/cases/EWCA/Crim/) as the training set. Based on the RoBERTa deep language model framework, we constructed the bailii-roberta pre-trained language model with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) and [transformers/mlm_wwm](https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm).
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can directly load the bailii-roberta model online.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("tsantosh7/bailii-roberta")
model = AutoModel.from_pretrained("tsantosh7/bailii-roberta")
```
### Download Models
- The version of the model we provide is `PyTorch`.
### From Huggingface
- Download directly through Huggingface's official website.
- [tsantosh7/bailii-roberta](https://huggingface.co/tsantosh7/Bailii-Roberta/)
## Disclaimer
- The experimental results presented in the report only show the performance under a specific dataset and hyperparameter combination, and cannot represent the essence of each model. The experimental results may change due to random seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- bailii-roberta was trained based on [roberta-base](https://arxiv.org/abs/1907.11692).
|
dd364e62393839bb3e6e93f07c0b0e97
|
IDEA-CCNL/Erlangshen-Longformer-110M
|
IDEA-CCNL
|
longformer
| 5 | 101 |
transformers
| 2 | null | true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,174 | false |
# Erlangshen-Longformer-110M
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
The Chinese Longformer-base (110M), which uses rotary position embedding (RoPE), is adept at handling lengthy text.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | Longformer | 110M | 中文 Chinese |
## 模型信息 Model Information
Following the design of Longformer-base, we performed continual pre-training on the WuDao corpus (180 GB) based on [chinese_roformer_L-12_H-768_A-12](https://github.com/ZhuiyiTechnology/roformer). In particular, we employed rotary position embedding (RoPE) to handle the uneven sequence lengths of the pre-training corpus.
## 使用 Usage
Since the Longformer-base structure is not available in the [transformers](https://github.com/huggingface/transformers) library, you can find it and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### 加载模型 Loading Models
```python
from fengshen import LongformerModel
from fengshen import LongformerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-110M")
config = LongformerConfig.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-110M")
model = LongformerModel.from_pretrained("IDEA-CCNL/Erlangshen-Longformer-110M")
```
## 引用 Citation
If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
167e7b78adfa357220e306b51d969098
|
ayu1003/fin_sentiment
|
ayu1003
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,109 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.5128 | 0.8157 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
5dbcbbdf44d30c336fc3a1ee91c8bc52
|
firqaaa/indo-dpr-question_encoder-single-squad-base
|
firqaaa
|
dpr
| 8 | 15 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
|
['id']
|
['squad_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['feature-extraction', 'transformers']
| false | true | true | 1,844 | false |
### indo-dpr-question_encoder-single-squad-base
<p style="font-size:16px">Indonesian Dense Passage Retrieval trained on translated SQuADv2.0 dataset in DPR format.</p>
### Evaluation
| Class | Precision | Recall | F1-Score | Support |
|-------|-----------|--------|----------|---------|
| hard_negative | 0.9963 | 0.9963 | 0.9963 | 183090 |
| positive | 0.8849 | 0.8849 | 0.8849 | 5910 |
| Metric | Value |
|--------|-------|
| Accuracy | 0.9928 |
| Macro Average | 0.9406 |
| Weighted Average | 0.9928 |
<p style="font-size:16px">Note: This report is for evaluation on the dev set, after 12000 batches.</p>
### Usage
```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('firqaaa/indo-dpr-question_encoder-single-squad-base')
model = DPRQuestionEncoder.from_pretrained('firqaaa/indo-dpr-question_encoder-single-squad-base')
input_ids = tokenizer("Ibukota Indonesia terletak dimana?", return_tensors='pt')["input_ids"]
embeddings = model(input_ids).pooler_output
```
We can use it using `haystack` as follows:
```python
from haystack.nodes import DensePassageRetriever
from haystack.document_stores import InMemoryDocumentStore
retriever = DensePassageRetriever(document_store=InMemoryDocumentStore(),
query_embedding_model="firqaaa/indo-dpr-question_encoder-single-squad-base",
passage_embedding_model="firqaaa/indo-dpr-question_encoder-single-squad-base",
max_seq_len_query=64,
max_seq_len_passage=256,
batch_size=16,
use_gpu=True,
embed_title=True,
use_fast_tokenizers=True)
```
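As a rough end-to-end sketch (assuming Haystack v1's `Document`/`DocumentStore` API and continuing from the retriever above; the Indonesian passages and question are placeholders), you can index a few documents and retrieve the most relevant one:
```python
from haystack import Document

# Index a couple of placeholder passages and embed them with the retriever.
document_store.write_documents([
    Document(content="Jakarta adalah ibu kota Indonesia."),
    Document(content="Bandung terletak di Jawa Barat."),
])
document_store.update_embeddings(retriever)

# Retrieve the most relevant passage for a question.
results = retriever.retrieve(query="Ibukota Indonesia terletak dimana?", top_k=1)
for doc in results:
    print(doc.content, doc.score)
```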
|
ab36fe3ae28d6e694bd6296c9bb55645
|
yazdipour/sparql-qald9-t5-small-2021-10-19_07-12_RAW
|
yazdipour
|
t5
| 11 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,532 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_07-12_RAW
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.8581 | 19.0 | 0.3301 | 0.0433 | 0.1830 | 7.5917 | [69.82603479304139, 45.68226763348714, 32.33357717629846, 24.56861133935908] | 0.1903 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
cc325e2747587c724f2c758484def9e2
|
DataikuNLP/paraphrase-albert-small-v2
|
DataikuNLP
|
albert
| 12 | 3 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,770 | false |
# DataikuNLP/paraphrase-albert-small-v2
**This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2/) from sentence-transformers at the specific commit `1eb1996223dd90a4c25be2fc52f6f336419a0d52`.**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
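Since the embeddings are intended for tasks like clustering or semantic search, a common follow-up is to compare them with cosine similarity. A minimal sketch, assuming a recent `sentence-transformers` release that provides `util.cos_sim`:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```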
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
3bba3b3f47ab6ec1e5949424ac75014b
|
Helsinki-NLP/opus-mt-gil-fi
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-gil-fi
* source languages: gil
* target languages: fi
* OPUS readme: [gil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fi | 23.1 | 0.447 |
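OPUS-MT checkpoints on the Hub can usually be loaded through the Marian classes in `transformers`. A minimal translation sketch (the Kiribati input sentence is only a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gil-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder Kiribati (gil) input; replace with your own text.
batch = tokenizer(["Ko na mauri!"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```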
|
b07afbf4fd88545b4ee4b8d5fd34d0c9
|
joniponi/communication-classifier
|
joniponi
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,146 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# communication-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1249
- eval_accuracy: 0.9644
- eval_f1: 0.9644
- eval_runtime: 2.6719
- eval_samples_per_second: 126.126
- eval_steps_per_second: 8.234
- epoch: 3.0
- step: 255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
73273622efb49411956558f41c3532dd
|
anas-awadalla/roberta-base-few-shot-k-64-finetuned-squad-seed-2
|
anas-awadalla
|
roberta
| 17 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-64-finetuned-squad-seed-2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ac885fcf65ad7233a07a3314b0955357
|
duja1/boys
|
duja1
| null | 21 | 52 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 593 | false |
### boys Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
b123oy (use that in your prompt)
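If you prefer to run locally instead of the Colab notebook, a minimal sketch is shown below; it assumes the repository contains a full Stable Diffusion pipeline (as produced by the Dreambooth Training Space) and that a CUDA GPU is available.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline; adjust dtype/device to your hardware.
pipe = StableDiffusionPipeline.from_pretrained("duja1/boys", torch_dtype=torch.float16).to("cuda")

# Remember to include the concept token in your prompt.
image = pipe("a portrait of b123oy, studio lighting").images[0]
image.save("b123oy.png")
```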
|
978579c5eab3bcb1e7d7b7cbeba7d735
|
sd-concepts-library/led-toy
|
sd-concepts-library
| null | 9 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,000 | false |
### led-toy on Stable Diffusion
This is the `<led-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
bbcb9ec654542a82197fe9c5487e6a6c
|
projecte-aina/roberta-base-ca-v2-cased-pos
|
projecte-aina
|
roberta
| 9 | 11 |
transformers
| 1 |
token-classification
| true | false | false |
apache-2.0
|
['ca']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['catalan', 'part of speech tagging', 'pos', 'CaText', 'Catalan Textual Corpus']
| true | true | true | 5,788 | false |
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Part-of-speech-tagging (POS)
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variables and metrics](#variables-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-v2-cased-pos** is a Part-of-speech-tagging (POS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended uses and limitations
The **roberta-base-ca-v2-cased-pos** model can be used for part-of-speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("token-classification", model="projecte-aina/roberta-base-ca-v2-cased-pos")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."
pos_results = nlp(example)
pprint(pos_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the Catalan POS dataset from the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variables and metrics
This model was fine-tuned maximizing the F1 score.
## Evaluation results
We evaluated the _roberta-base-ca-v2-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:
| Model | Ancora-ca-pos (F1) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-pos | **98.96** |
| roberta-base-ca-cased-pos | **98.96** |
| mBERT | 98.83 |
| XLM-RoBERTa | 98.89 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
61b810766af93538346608946f95ea4f
|
microsoft/swinv2-base-patch4-window16-256
|
microsoft
|
swinv2
| 5 | 4,047 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-classification']
| false | true | true | 3,773 | false |
# Swin Transformer v2 (base-sized model)
Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.
Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-base-patch4-window16-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-base-patch4-window16-256")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
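Continuing from the snippet above, one may want the five most likely classes rather than only the arg-max; this is generic PyTorch post-processing rather than anything specific to this checkpoint:
```python
import torch

# Turn logits into probabilities and list the five most likely ImageNet classes.
probs = outputs.logits.softmax(dim=-1)[0]
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```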
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
author = {Ze Liu and
Han Hu and
Yutong Lin and
Zhuliang Yao and
Zhenda Xie and
Yixuan Wei and
Jia Ning and
Yue Cao and
Zheng Zhang and
Li Dong and
Furu Wei and
Baining Guo},
title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
journal = {CoRR},
volume = {abs/2111.09883},
year = {2021},
url = {https://arxiv.org/abs/2111.09883},
eprinttype = {arXiv},
eprint = {2111.09883},
timestamp = {Thu, 02 Dec 2021 15:54:22 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
b6d9bbee8fb86520921459ff9b19d366
|
Leizhang/xlm-roberta-base-finetuned-panx-de
|
Leizhang
|
xlm-roberta
| 12 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
c7ab1dd09685cfc1d85c6da7ca4291d9
|
jannesg/takalane_xho_roberta
|
jannesg
|
roberta
| 8 | 7 |
transformers
| 0 |
fill-mask
| true | false | true |
mit
|
['xho']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['xho', 'fill-mask', 'pytorch', 'roberta', 'masked-lm']
| false | true | true | 1,061 | false |
# Takalani Sesame - Xhosa 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_xho_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_xho_roberta")
```
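As a quick check of the masked-language-modelling head, you can also run the model through the `fill-mask` pipeline. A minimal sketch; the isiXhosa example sentence is only a placeholder:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jannesg/takalane_xho_roberta")

# Placeholder isiXhosa sentence; substitute the tokenizer's mask token.
print(fill_mask(f"Molo {fill_mask.tokenizer.mask_token}"))
```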
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 100000
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
8d3e9d28da99160f57b67174f8c98373
|
Helsinki-NLP/opus-mt-de-ha
|
Helsinki-NLP
|
marian
| 10 | 9 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-de-ha
* source languages: de
* target languages: ha
* OPUS readme: [de-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ha/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.ha | 20.7 | 0.417 |
|
0fd366e851b7db38cb8d7501e6ea6127
|
T-qualizer/distilbert-base-uncased-finetuned-advers
|
T-qualizer
|
distilbert
| 20 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['adversarial_qa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,193 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-advers
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the adversarial_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6424 | 0.18 | 3000 | 3.6462 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
f21c2d6b8bf26d53d2288fab22fd5f6a
|
NaliniK/distilbert-base-uncased-finetuned-cola
|
NaliniK
|
distilbert
| 13 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8239
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5235 | 1.0 | 535 | 0.5402 | 0.4156 |
| 0.3484 | 2.0 | 1070 | 0.5272 | 0.5233 |
| 0.2381 | 3.0 | 1605 | 0.6665 | 0.5050 |
| 0.1746 | 4.0 | 2140 | 0.7512 | 0.5429 |
| 0.1308 | 5.0 | 2675 | 0.8239 | 0.5495 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
5379afdeb755660eb0ebd434258867c0
|
kkotkar1/finetuning-sentiment-model-3000-samples-kunal
|
kkotkar1
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 959 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-kunal
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
254f9a8a713b274c998dc49ff604416a
|
facebook/opt-iml-30b
|
facebook
|
opt
| 17 | 2,311 |
transformers
| 39 |
text-generation
| true | false | false |
other
| null | null | null | 1 | 1 | 0 | 0 | 2 | 2 | 0 |
['text-generation', 'opt']
| false | true | true | 3,135 | false |
# OPT-IML
## Model Description
[OPT-IML (OPT + Instruction Meta-Learning)](https://arxiv.org/abs/2212.12017) is a set of instruction-tuned versions of OPT, on a collection of ~2000 NLP tasks gathered from 8 NLP benchmarks, called OPT-IML Bench.
We provide two model versions:
* OPT-IML trained on 1500 tasks with several tasks held-out for purposes of downstream evaluation, and
* OPT-IML-Max trained on all ~2000 tasks
### How to use
For large OPT models, such as this one, it is not recommended to use the `text-generation` pipeline because
one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU.
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)
method as follows:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-iml-30b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-iml-30b", use_fast=False)
>>> prompt = "What is the color of a carrot?\nA:"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> generated_ids = model.generate(input_ids)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```
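For less deterministic outputs one can enable sampling; a minimal sketch continuing from the snippet above (the decoding parameters are illustrative, not values recommended by the authors):
```python
>>> # Nucleus sampling instead of greedy decoding; parameters are illustrative.
>>> generated_ids = model.generate(input_ids, do_sample=True, top_p=0.9, max_new_tokens=30)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
```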
### Limitations and bias
While OPT-IML models outperform baseline OPT on an extensive set of evaluations,
they remain susceptible to the various risks associated with large language models,
including issues of factual correctness, generation of toxic language, and reinforcement of stereotypes. While we release our
OPT-IML models to foster future work on instruction-tuning and to improve the availability
of large instruction-tuned causal LMs, the use of these models should be
accompanied by responsible best practices.
## Training data
OPT-IML models are trained on OPT-IML Bench, a large benchmark for Instruction MetaLearning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, including Super-NaturalInstructions, FLAN, PromptSource, etc.
## Training procedure
The texts are tokenized using the GPT2 byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 30B model was fine-tuned on 64 40GB A100 GPUs. During fine-tuning, models saw approximately 2 billion tokens, which is only 0.6% of the pre-training
budget of OPT.
### BibTeX entry and citation info
```bibtex
@misc{iyer2022opt,
title={OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization},
author={Iyer, Srinivasan and Lin, Xi Victoria and Pasunuru, Ramakanth and Mihaylov, Todor and Simig, D{\'a}niel and Yu, Ping and Shuster, Kurt and Wang, Tianlu and Liu, Qing and Koura, Punit Singh and others},
year={2022},
eprint={2212.12017},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
0a6c2ebd33b979e542081a0496516eec
|
cl-tohoku/bert-base-japanese-whole-word-masking
|
cl-tohoku
|
bert
| 8 | 2,720,474 |
transformers
| 28 |
fill-mask
| true | true | true |
cc-by-sa-4.0
|
['ja']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 2,004 | false |
# BERT base Japanese (IPA dictionary, whole word masking enabled)
This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language.
This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization.
Additionally, the model is trained with the whole word masking enabled for the masked language modeling (MLM) objective.
The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0).
## Model architecture
The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads.
## Training Data
The model is trained on Japanese Wikipedia as of September 1, 2019.
To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles.
The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences.
## Tokenization
The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm.
The vocabulary size is 32000.
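As a quick illustration of this tokenization in practice, the checkpoint can be queried through the `fill-mask` pipeline. A minimal sketch, assuming the MeCab bindings required by `BertJapaneseTokenizer` (e.g. `fugashi` and `ipadic`) are installed; the example sentence is only a placeholder:
```python
from transformers import pipeline

# Requires the MeCab bindings used by the Japanese tokenizer, e.g. `pip install fugashi ipadic`.
fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese-whole-word-masking")
print(fill_mask(f"東京は日本の{fill_mask.tokenizer.mask_token}です。"))
```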
## Training
The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps.
For the training of the MLM (masked language modeling) objective, we introduced the **Whole Word Masking** in which all of the subword tokens corresponding to a single word (tokenized by MeCab) are masked at once.
## Licenses
The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
## Acknowledgments
For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
|
fadff6630bd8dae8356c53873a2c1f4a
|
nvidia/stt_en_conformer_transducer_xlarge
|
nvidia
| null | 3 | 1,891 |
nemo
| 34 |
automatic-speech-recognition
| true | false | false |
cc-by-4.0
|
['en']
|
['librispeech_asr', 'fisher_corpus', 'Switchboard-1', 'WSJ-0', 'WSJ-1', 'National-Singapore-Corpus-Part-1', 'National-Singapore-Corpus-Part-6', 'vctk', 'VoxPopuli-(EN)', 'Europarl-ASR-(EN)', 'Multilingual-LibriSpeech-(2000-hours)', 'mozilla-foundation/common_voice_8_0', 'MLCommons/peoples_speech']
| null | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
| true | true | true | 5,932 | false |
# NVIDIA Conformer-Transducer X-Large (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes speech in lower case English alphabet along with spaces and apostrophes.
It is an "extra-large" version of the Conformer-Transducer model (around 600M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
(if it causes an error):
```
pip install nemo_toolkit[all]
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_xlarge")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_en_conformer_transducer_xlarge" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz (16,000 Hz) mono-channel audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-Transducer model is an autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding instead of CTC Loss. You may find more info on the detail of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hrs subset
- Mozilla Common Voice (v8.0)
- People's Speech - 12,000 hrs subset
Note: older versions of the model may have trained on smaller set of datasets.
## Performance
The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MLS Dev | MCV Test 8.0 | Train Dataset |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|-----|-------|------|----|------|
| 1.10.0 | SentencePiece Unigram | 1024 | 3.01 | 1.62 | 1.17 | 2.05 | 5.70 | 5.32 | 4.59 | 6.46 | NeMo ASRSET 3.0 |
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid environments, at the edge, and on embedded devices.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
ecb4691a0bf50a25905cf024408a49b1
|
EgilKarlsen/ApacheDistilRoberta
|
EgilKarlsen
|
roberta
| 9 | 6 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,247 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# apache-access
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3744 | 1.0 | 18523 | 0.3469 |
| 0.3071 | 2.0 | 37046 | 0.2804 |
| 0.2796 | 3.0 | 55569 | 0.2636 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
76f1a571d5c7c464b26f722016d2e4fc
|
Dinithi/BERT
|
Dinithi
|
distilbert
| 12 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,199 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8223
- Accuracy: 0.82
- Precision: 0.84
- Recall: 0.9130
- F1: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6778 | 1.0 | 50 | 0.6148 | 0.69 | 0.7794 | 0.7681 | 0.7737 |
| 0.5331 | 2.0 | 100 | 0.5578 | 0.8 | 0.8267 | 0.8986 | 0.8611 |
| 0.3768 | 3.0 | 150 | 0.5052 | 0.73 | 0.8889 | 0.6957 | 0.7805 |
| 0.2802 | 4.0 | 200 | 0.4998 | 0.86 | 0.8667 | 0.9420 | 0.9028 |
| 0.1869 | 5.0 | 250 | 0.5187 | 0.81 | 0.8906 | 0.8261 | 0.8571 |
| 0.1293 | 6.0 | 300 | 0.6516 | 0.85 | 0.8649 | 0.9275 | 0.8951 |
| 0.1165 | 7.0 | 350 | 0.6541 | 0.82 | 0.8806 | 0.8551 | 0.8676 |
| 0.0937 | 8.0 | 400 | 0.6855 | 0.84 | 0.8841 | 0.8841 | 0.8841 |
| 0.0791 | 9.0 | 450 | 0.7652 | 0.81 | 0.8472 | 0.8841 | 0.8652 |
| 0.0599 | 10.0 | 500 | 0.8223 | 0.82 | 0.84 | 0.9130 | 0.8750 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
4985f133e6b81405ac91e555cf4d8437
|
stevemobs/deberta-base-finetuned-aqa
|
stevemobs
|
deberta
| 13 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['adversarial_qa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,222 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-aqa
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the adversarial_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1054 | 1.0 | 2527 | 1.6947 |
| 1.5387 | 2.0 | 5054 | 1.6394 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
5912ec229ed38676a183af7ee88e0dba
|
juancopi81/vincentcat-cat
|
juancopi81
| null | 27 | 15 |
diffusers
| 4 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null |
['juancopi81/jcp-vincent-cat']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
| false | true | true | 2,179 | false |
# DreamBooth model for the vincentcat concept trained by juancopi81 on the juancopi81/jcp-vincent-cat dataset.
This is a Stable Diffusion model fine-tuned on the [vincentcat](https://huggingface.co/datasets/juancopi81/jcp-vincent-cat) concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of vincentcat cat**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cat` images for the animal theme.
## Examples
---
Prompt: A painting of vincentcat cat in the style of Van Gogh
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_VG_final.jpeg">
---
Prompt:
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_Cartoon_1.png">
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_Cartoon_2.png">
---
Prompt: painting of vincentcat cat as an anime warrior, trending on artstation pixiv makoto shinkai
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_7.jpg">
---
Prompt: A painting of vincentcat cat, acrylic palette knife
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_3.jpg">
---
Prompt:
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_VG_2.png">
---
Prompt: Painting of vincentcat cat flying around the moon in the style of Leonardo Da Vinci
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_7.jpg">
---
Prompt: A photo of the Acropolis, and a portrair of vincentcat cat walking near the tower
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_6.jpg">
---
Prompt: A photo of the Eiffel Tower, a vincentcat cat is walking near the tower
<img src="https://huggingface.co/juancopi81/vincentcat-cat/resolve/main/Vincent_5.jpg">
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('juancopi81/vincentcat-cat')
image = pipeline().images[0]
image
```
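A short follow-up sketch, continuing from the snippet above: pass one of the example prompts explicitly and save the result to disk.
```python
# Use an explicit prompt containing the concept token and save the generated image.
image = pipeline("A painting of vincentcat cat in the style of Van Gogh").images[0]
image.save("vincentcat_van_gogh.png")
```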
|
321a941335b19fc789b89d0a67e89326
|
wietsedv/xlm-roberta-base-ft-udpos28-ko
|
wietsedv
|
xlm-roberta
| 8 | 49 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['ko']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['part-of-speech', 'token-classification']
| true | true | true | 566 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Korean
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ko")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ko")
```
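To actually tag a sentence, the model can also be wrapped in a `token-classification` pipeline. A minimal sketch; the Korean example sentence is only a placeholder and the aggregation strategy is an illustrative choice:
```python
from transformers import pipeline

pos = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-ko",
    aggregation_strategy="simple",
)

# Placeholder Korean sentence ("I am going to school").
print(pos("나는 학교에 간다"))
```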
|
647625a70139d3fd68c249b37f17fa33
|
gsarti/it5-base
|
gsarti
|
t5
| 12 | 649 |
transformers
| 11 |
text2text-generation
| true | true | true |
apache-2.0
|
['it']
|
['gsarti/clean_mc4_it']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['seq2seq', 'lm-head']
| false | true | true | 5,873 | false |
# Italian T5 Base 🇮🇹
The [IT5](https://huggingface.co/models?search=it5) model family represents the first effort in pretraining large-scale sequence-to-sequence transformer models for the Italian language, following the approach adopted by the original [T5 model](https://github.com/google-research/text-to-text-transfer-transformer).
This model is released as part of the project ["IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation"](https://arxiv.org/abs/2203.03759), by [Gabriele Sarti](https://gsarti.com/) and [Malvina Nissim](https://malvinanissim.github.io/) with the support of [Huggingface](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) and with TPU usage sponsored by Google's [TPU Research Cloud](https://sites.research.google/trc/). All the training was conducted on a single TPU3v8-VM machine on Google Cloud. Refer to the Tensorboard tab of the repository for an overview of the training process.
*The inference widget is deactivated because the model needs task-specific seq2seq fine-tuning on a downstream task to be useful in practice. The models in the [`it5`](https://huggingface.co/it5) organization provide some examples of this model fine-tuned on various downstream tasks.*
## Model variants
This repository contains the checkpoints for the `base` version of the model. The model was trained for one epoch (1.05M steps) on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB) using 🤗 Datasets and the `google/t5-v1_1-base` improved configuration. Another version of this model, trained on the [OSCAR corpus](https://oscar-corpus.com/), is available under the name [`gsarti/it5-base-oscar`](https://huggingface.co/gsarti/it5-base-oscar). The training procedure is made available [on Github](https://github.com/gsarti/t5-flax-gcp).
The following table summarizes the parameters for all available models
| |`it5-small` |`it5-base` (this one) |`it5-large` |`it5-base-oscar` |
|-----------------------|-----------------------|----------------------|-----------------------|----------------------------------|
|`dataset` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`oscar/unshuffled_deduplicated_it`|
|`architecture` |`google/t5-v1_1-small` |`google/t5-v1_1-base` |`google/t5-v1_1-large` |`t5-base` |
|`learning rate` | 5e-3 | 5e-3 | 5e-3 | 1e-2 |
|`steps` | 1'050'000 | 1'050'000 | 2'100'000 | 258'000 |
|`training time` | 36 hours | 101 hours | 370 hours | 98 hours |
|`ff projection` |`gated-gelu` |`gated-gelu` |`gated-gelu` |`relu` |
|`tie embeds` |`false` |`false` |`false` |`true` |
|`optimizer` | adafactor | adafactor | adafactor | adafactor |
|`max seq. length` | 512 | 512 | 512 | 512 |
|`per-device batch size`| 16 | 16 | 8 | 16 |
|`tot. batch size` | 128 | 128 | 64 | 128 |
|`weight decay` | 1e-3 | 1e-3 | 1e-2 | 1e-3 |
|`validation split size`| 15K examples | 15K examples | 15K examples | 15K examples |
The high training time of `it5-base-oscar` was due to [a bug](https://github.com/huggingface/transformers/pull/13012) in the training script.
For a list of individual model parameters, refer to the `config.json` file in the respective repositories.
## Using the models
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base")
```
*Note: You will need to fine-tune the model on your downstream seq2seq task to use it. See an example [here](https://huggingface.co/gsarti/it5-base-nli).*
Flax and Tensorflow versions of the model are also available:
```python
from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration
model_flax = FlaxT5ForConditionalGeneration.from_pretrained("gsarti/it5-base")
model_tf = TFT5ForConditionalGeneration.from_pretrained("gsarti/it5-base")
```
## Limitations
Due to the nature of the web-scraped corpus on which IT5 models were trained, it is likely that their usage could reproduce and amplify pre-existing biases in the data, resulting in potentially harmful content such as racial or gender stereotypes and conspiracist views. For this reason, the study of such biases is explicitly encouraged, and model usage should ideally be restricted to research-oriented and non-user-facing endeavors.
## Model curators
For problems or updates on this model, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
## Citation Information
```bibtex
@article{sarti-nissim-2022-it5,
title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
74c5d3fa4d3009b18bc30cc2eba205f2
|
PrimeQA/tapas-based-tableqa-wikisql-lookup
|
PrimeQA
|
tapas
| 8 | 35 |
transformers
| 0 |
table-question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,185 | false |
# Model description
This is a [tapas-base](https://huggingface.co/google/tapas-base) model trained on the lookup queries of the [wikisql](https://huggingface.co/datasets/wikisql) dataset. It takes tables and questions as input and extracts answers from the table.
# Overview
*Language model*: tapas-base \
*Language*: English\
*Task*: Table Question Answering \
*Data*: WikiSQL
# Intended use and limitations
One can use this model to predict answers for natural language queries given a table. Biases associated with pre-training of tapas-base and wikisql dataset may be present.
## Usage
One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqa_tapas/notebooks/tableqa/tableqa_inference.ipynb).
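As an alternative to the PrimeQA framework, the checkpoint can also be loaded with the generic 🤗 Transformers table-question-answering pipeline. The toy table and question below are made up for illustration; note that all cell values must be passed as strings.
```python
from transformers import pipeline

# Hypothetical example table: any dict of column name -> list of string cells works.
table_qa = pipeline(
    "table-question-answering",
    model="PrimeQA/tapas-based-tableqa-wikisql-lookup",
)
table = {
    "City": ["London", "Paris", "Rome"],
    "Population (millions)": ["8.9", "2.1", "2.8"],
}
print(table_qa(table=table, query="What is the population of Paris?"))
```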
## Citation
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
|
f473d38eedc6d88eea4ab8437ac3eab0
|
jonatasgrosman/exp_w2v2t_ar_vp-nl_s377
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ar']
| false | true | true | 469 | false |
# exp_w2v2t_ar_vp-nl_s377
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
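A minimal transcription sketch with the HuggingSound tool (the audio file paths below are placeholders; any audio files resampled to 16kHz will do):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_vp-nl_s377")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```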
|
c795b56dadcc4fa7a0ffed80472f1190
|
jonaskoenig/topic_classification_02
|
jonaskoenig
|
bert
| 8 | 1 |
transformers
| 0 |
text-classification
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,486 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jonaskoenig/topic_classification_02
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0189
- Train Binary Crossentropy: 0.3299
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
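For reference, the optimizer above can be re-created in Keras as follows. This is only a sketch that plugs in the values listed in this card; everything not listed (including `decay`, which is 0.0 and therefore the default) is left untouched.
```python
import tensorflow as tf

# Re-create the Adam configuration reported in the hyperparameters above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```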
### Training results
| Train Loss | Train Binary Crossentropy | Epoch |
|:----------:|:-------------------------:|:-----:|
| 0.0250 | 0.4229 | 0 |
| 0.0214 | 0.3684 | 1 |
| 0.0204 | 0.3530 | 2 |
| 0.0198 | 0.3433 | 3 |
| 0.0193 | 0.3359 | 4 |
| 0.0189 | 0.3299 | 5 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
1ad26addade2e56b62c3edd0dd35991b
|
madatnlp/mt5-kormath
|
madatnlp
|
mt5
| 9 | 1 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 3,641 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/mt5-kormath
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7119
- Validation Loss: 1.1299
- Epoch: 61
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_bfloat16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 17.9929 | 5.9287 | 0 |
| 5.4802 | 3.9942 | 1 |
| 4.1718 | 3.2517 | 2 |
| 3.5750 | 2.9586 | 3 |
| 3.1535 | 2.4970 | 4 |
| 2.8665 | 2.4626 | 5 |
| 2.6682 | 2.3795 | 6 |
| 2.5323 | 2.2238 | 7 |
| 2.4057 | 2.0684 | 8 |
| 2.3107 | 2.2033 | 9 |
| 2.2501 | 1.8339 | 10 |
| 2.1089 | 1.9064 | 11 |
| 2.0741 | 2.0256 | 12 |
| 1.9868 | 1.8107 | 13 |
| 1.9719 | 1.7157 | 14 |
| 1.8762 | 1.6966 | 15 |
| 1.8814 | 1.6580 | 16 |
| 1.8052 | 1.6043 | 17 |
| 1.7567 | 1.6572 | 18 |
| 1.7209 | 1.5485 | 19 |
| 1.7347 | 1.6464 | 20 |
| 1.6760 | 1.5892 | 21 |
| 1.6286 | 1.5765 | 22 |
| 1.6124 | 1.7408 | 23 |
| 1.5683 | 1.4875 | 24 |
| 1.5814 | 1.4448 | 25 |
| 1.5306 | 1.4902 | 26 |
| 1.5121 | 1.5133 | 27 |
| 1.4869 | 1.4217 | 28 |
| 1.4539 | 1.5602 | 29 |
| 1.4650 | 1.4699 | 30 |
| 1.4508 | 1.4319 | 31 |
| 1.3910 | 1.5975 | 32 |
| 1.3758 | 1.4031 | 33 |
| 1.3550 | 1.4295 | 34 |
| 1.3405 | 1.3804 | 35 |
| 1.3144 | 1.4202 | 36 |
| 1.3136 | 1.5135 | 37 |
| 1.2617 | 1.4790 | 38 |
| 1.2260 | 1.4108 | 39 |
| 1.2348 | 1.3108 | 40 |
| 1.2019 | 1.1461 | 41 |
| 1.1775 | 1.2509 | 42 |
| 1.1690 | 1.2179 | 43 |
| 1.1318 | 1.2483 | 44 |
| 1.1013 | 1.0815 | 45 |
| 1.0735 | 1.2135 | 46 |
| 1.0439 | 1.1260 | 47 |
| 1.0182 | 1.1993 | 48 |
| 0.9971 | 1.0797 | 49 |
| 0.9583 | 1.2587 | 50 |
| 0.9505 | 1.0793 | 51 |
| 0.9366 | 1.0501 | 52 |
| 0.9170 | 1.1476 | 53 |
| 0.8741 | 1.0560 | 54 |
| 0.8558 | 1.0024 | 55 |
| 0.8394 | 0.9604 | 56 |
| 0.8203 | 1.2700 | 57 |
| 0.7938 | 1.1081 | 58 |
| 0.7800 | 1.0198 | 59 |
| 0.7378 | 1.1748 | 60 |
| 0.7119 | 1.1299 | 61 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.0
- Tokenizers 0.12.1
|
5d0271fa3047e8801521075f96418d92
|
wietsedv/xlm-roberta-base-ft-udpos28-zh
|
wietsedv
|
xlm-roberta
| 8 | 46 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['zh']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['part-of-speech', 'token-classification']
| true | true | true | 567 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Chinese
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-zh")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-zh")
```
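Alternatively, a quick tagging sketch with the token-classification pipeline (the example sentence is arbitrary; without aggregation the output is per subword piece, each carrying a UPOS label):
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-zh")
for piece in tagger("我喜欢学习中文"):
    print(piece["word"], piece["entity"])
```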
|
d71a8e6e65cf1aa8fcca380b0edd16bf
|
Visual-Attention-Network/VAN-Base-original
|
Visual-Attention-Network
| null | 3 | 0 | null | 0 |
image-classification
| false | false | false |
apache-2.0
| null |
['imagenet']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 2,360 | false |
# VAN-Base
VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network).
## Description
While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) with a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc.
## Evaluation Results
| Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download |
| :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: |
| VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) |
| VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) |
| VAN-Base  | 26.6       | 5.0    | 82.8        |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) |
| VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) |
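## Usage
This repository ships the original checkpoints, so the sketch below goes through 🤗 Transformers instead; the converted checkpoint name `Visual-Attention-Network/van-base` and the example image URL are assumptions and are not part of this repository.
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, VanForImageClassification

# Assumed example image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Assumed converted Hub checkpoint (not the original weights stored in this repository).
processor = AutoImageProcessor.from_pretrained("Visual-Attention-Network/van-base")
model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```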
### BibTeX entry and citation info
```bibtex
@article{guo2022visual,
title={Visual Attention Network},
author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min},
journal={arXiv preprint arXiv:2202.09741},
year={2022}
}
```
|
7400827013aef5af94802935f468cb83
|
muhtasham/small-mlm-glue-rte-target-glue-qqp
|
muhtasham
|
bert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,931 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-rte-target-glue-qqp
This model is a fine-tuned version of [muhtasham/small-mlm-glue-rte](https://huggingface.co/muhtasham/small-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3294
- Accuracy: 0.8496
- F1: 0.8112
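For a quick sanity check, a hedged inference sketch is shown below; the label mapping is not documented in this card, so interpreting `LABEL_1` as "duplicate question pair" is an assumption.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="muhtasham/small-mlm-glue-rte-target-glue-qqp")
pair = {
    "text": "How do I learn Python quickly?",
    "text_pair": "What is the fastest way to learn Python?",
}
print(clf(pair))  # e.g. [{'label': 'LABEL_1', 'score': ...}] -- label meaning is an assumption
```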
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4764 | 0.04 | 500 | 0.4288 | 0.7863 | 0.7498 |
| 0.4172 | 0.09 | 1000 | 0.3936 | 0.8089 | 0.7701 |
| 0.4017 | 0.13 | 1500 | 0.3707 | 0.8236 | 0.7785 |
| 0.3865 | 0.18 | 2000 | 0.3751 | 0.8197 | 0.7857 |
| 0.3788 | 0.22 | 2500 | 0.3682 | 0.8292 | 0.7938 |
| 0.364 | 0.26 | 3000 | 0.3517 | 0.8351 | 0.7969 |
| 0.3616 | 0.31 | 3500 | 0.3324 | 0.8496 | 0.8043 |
| 0.3533 | 0.35 | 4000 | 0.3348 | 0.8457 | 0.8071 |
| 0.3599 | 0.4 | 4500 | 0.3362 | 0.8451 | 0.8094 |
| 0.3465 | 0.44 | 5000 | 0.3294 | 0.8496 | 0.8112 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
5c44cefb859d2173ffe5da7bb83b1516
|
allenai/System1_FigLang2022
|
allenai
|
t5
| 7 | 12 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,487 | false |
# Model description
This is the T5-3B model for System 1 as described in our paper Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE, FigLang workshop @ EMNLP 2022 (Arxiv link: https://arxiv.org/abs/2210.16407)
System 1: Using original data
Given the <Premise, Hypothesis, Label, Explanation> in the original data, we first trained a sequence-to-sequence model for the figurative language NLI task
using the following input-output format:
```
Input <Premise> <Hypothesis>
Output <Label> <Explanation>
```
# How to use this model?
We provide a quick example of how you can try out System 1 in our paper with just a few lines of code:
```
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/System1_FigLang2022")
>>> tokenizer = AutoTokenizer.from_pretrained("t5-3b")
>>> input_string = "Premise: My neighbor actually purchased a dream car of mine and I see it parked in his driveway everyday just taunting me. Hypothesis: My neighbor's new car is exactly my dream car, and I feel so happy every time I see it parked in his driveway. Is there a contradiction or entailment between the premise and hypothesis?"
>>> input_ids = tokenizer.encode(input_string, return_tensors="pt")
>>> output = model.generate(input_ids, max_length=200)
>>> tokenizer.batch_decode(output, skip_special_tokens=True)
["Answer : Contradiction. Explanation : Most people would not be happy to see someone else's new car that they cannot afford because it is way out of their budget"]
```
# More details about DREAM-FLUTE ...
For more details about DREAM-FLUTE, please refer to our:
* 📄Paper: https://arxiv.org/abs/2210.16407
* 💻GitHub Repo: https://github.com/allenai/dream/
This model is part of our DREAM-series of works. This is a line of research where we make use of scene elaboration for building a "mental model" of the situation given in text. Check out our GitHub Repo for more!
# More details about this model ...
## Training and evaluation data
We use the FLUTE dataset for the FigLang2022SharedTask (https://huggingface.co/datasets/ColumbiaNLP/FLUTE) for training this model. ∼7500 samples are provided as the training set. We used a 80-20 split to create our own training (6027 samples) and validation (1507 samples) partitions on which we build our models. For details on how we make use of the training data provided in the FigLang2022 shared task, please refer to https://github.com/allenai/dream/blob/main/FigLang2022SharedTask/Process_Data_Train_Dev_split.ipynb.
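A sketch of re-creating a comparable 80-20 partition is shown below; the split seed and the use of `train_test_split` are assumptions, so refer to the linked notebook for the exact procedure.
```python
from datasets import load_dataset

flute = load_dataset("ColumbiaNLP/FLUTE", split="train")
splits = flute.train_test_split(test_size=0.2, seed=42)  # seed is an assumption
train_data, validation_data = splits["train"], splits["test"]
print(len(train_data), len(validation_data))
```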
## Model details
This model is a fine-tuned version of [t5-3b](https://huggingface.co/t5-3b).
It achieves the following results on the evaluation set:
- Loss: 0.7602
- Rouge1: 58.1212
- Rouge2: 38.1109
- Rougel: 52.1198
- Rougelsum: 52.092
- Gen Len: 40.4851
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.0017 | 0.33 | 1000 | 0.8958 | 40.072 | 27.6729 | 38.429 | 38.4023 | 19.0 |
| 0.9054 | 0.66 | 2000 | 0.8336 | 41.4505 | 29.2616 | 39.5164 | 39.4976 | 19.0 |
| 0.8777 | 1.0 | 3000 | 0.7863 | 41.4221 | 29.6675 | 39.6719 | 39.6627 | 19.0 |
| 0.5608 | 1.33 | 4000 | 0.8007 | 41.1495 | 29.9008 | 39.5706 | 39.5554 | 19.0 |
| 0.5594 | 1.66 | 5000 | 0.7785 | 41.3834 | 30.2818 | 39.8259 | 39.8324 | 19.0 |
| 0.5498 | 1.99 | 6000 | 0.7602 | 41.6364 | 30.6513 | 40.1522 | 40.1332 | 19.0 |
| 0.3398 | 2.32 | 7000 | 0.8580 | 41.4948 | 30.7467 | 40.0274 | 40.0116 | 18.9954 |
| 0.3518 | 2.65 | 8000 | 0.8430 | 41.7283 | 31.178 | 40.3487 | 40.3328 | 18.9861 |
| 0.3465 | 2.99 | 9000 | 0.8405 | 41.956 | 31.527 | 40.5671 | 40.5517 | 18.9907 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
6f5f126855d6eff35668d600a729a206
|
valurank/distilroberta-hatespeech
|
valurank
|
roberta
| 11 | 527 |
transformers
| 0 |
text-classification
| true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-hatespeech
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3619
- Acc: 0.8423
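A quick inference sketch is shown below; the label names returned by the pipeline (`LABEL_0`/`LABEL_1`) are not documented in this card, so their interpretation is an assumption left to the reader.
```python
from transformers import pipeline

detector = pipeline("text-classification", model="valurank/distilroberta-hatespeech")
print(detector("I completely disagree with you, but I respect your right to say it."))
```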
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3096 | 1.0 | 4021 | 0.3375 | 0.8540 |
| 0.3711 | 2.0 | 8042 | 0.3305 | 0.8574 |
| 0.322 | 3.0 | 12063 | 0.3398 | 0.8534 |
| 0.3197 | 4.0 | 16084 | 0.3444 | 0.8504 |
| 0.3332 | 5.0 | 20105 | 0.3619 | 0.8423 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
4b0f4f403a3f34a551537a6f06f61a1e
|
XLab/rst-gaokao-writing-11b
|
XLab
|
t5
| 6 | 1 |
transformers
| 2 |
text2text-generation
| true | false | false |
afl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 11,247 | false |
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation etc)
* achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average student score and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application |
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks,factual checker |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling|
| **rst-gaokao-writing-11b** | **Trained with example essays from past Gaokao-English exams and grammar error correction signals** | **Essay writing, story generation, grammar error correction and other text generation tasks** |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from a model pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
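For example, a single signal type from the table above can be loaded as follows (a sketch; it assumes the DataLab client is installed via `pip install datalabs`):
```python
from datalabs import load_dataset  # DataLab client package (assumed install name: datalabs)

dataset = load_dataset("rst", "rotten_tomatoes_sentiment")
print(dataset)  # inspect the returned splits and fields
```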
## Bibtex for Citation Info
```bibtex
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
|
2ca3c0f13ad002a0f2ff08fa63b03e2e
|
emmajoanne/models
|
emmajoanne
| null | 15 | 0 | null | 4 | null | false | false | false |
afl-3.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 21,494 | false |
**Art & Eros (Jan 06, 2023)**
*https://civitai.com/models/3950/art-and-eros-aeros-a-tribute-to-beauty*
***Tags:*** artistic, armorgirl, knollingcase, nude, photograpyfantasy, sci fi, fantasy, punk, photorealistic, synthwave, cyber punk, nsfw, post apocalyptic, porn, hassan, portraits, elden ring style, girl, woman, realistic, apocalypse, cyborg, science fiction, dreamlikeart, dreamlike, mod
***Trigger Words:*** elden ring style, modelshoot style, dreamlikeart, postapocalypse, analog style, knollingcase, swpunk, synthwave, cyborgdiffusion
***Example 1:*** wide shot, a (muscular:1.1) (((naked))) girl, young [[[Gal Gadot]]], (small head), by Alphonse Mucha, (Mandy Jurgens), Granblue Fantasy, Greg Rutkowski, detailed face, detailed belly, PERFECT (((gorgeous FACE))), highly detailed, INTRICATE
***Example 2:*** a curvy (((naked))) girl as a (skimpy) futuristic battle armor, [MELISSA BENOIST], [Emma Watson], cinematic lighting, toned abs, perfect large breasts, thick ass
***Example 3:*** wide angle closeup close up nude portrait of a Dutch Appealing cute woman with hollywood hair wearing 30s medieval style Poncho , thong bikini , nude big tits, Defiance look, gesture motion on side in St Petersburg street, exposed hairy pussy, working as Public Relations Officer , outside Shoe Store with a Art Hoe mood, in front of a Disease man , Fill Light , Olive,Mulberry , sidelit, analog 85mm sharp focus, d750 hdr photo by Ying Tang , Engaging , focus on the eyes, rim halo light, 8k canon RAW, art photography, cold blue lighting, Microburst golden hour, hard light, movie still from Braveheart , knollingcase by Simon Bisley
---------------------------------------------------------------
**Dreamlike Photoreal (Jan 04, 2023)**
*https://civitai.com/models/3811/dreamlike-photoreal-20*
***Tags:*** photorealistic
***Example 1:*** lookin high quality studio photo of slim [(hs20yo19:1.13):0.6] [(hstei:0.1)::0.4] with (blonde bun hair :1.3) [(eyeshadows, smokyeyes, heavy clubbing makeup:1.35):0.3] , person smiling (21yo:0.1) (sitting with spread legs in a locker room:1.3), (visible perky nipples:1.3) (hs20yo9:1.17) (cleavage, big breasts :1.25)in ( (short cotton tshirt:1.4) and denim shorts:1.1), studio lighting, smiling fitness model (defined abs :1) Nikon, 8k, 1080p, 40mm, photoshop
***Example 2:*** photo, higly detailed, 8k, pretty woman making selfie, table in cafe outroom, wide angle, morning, colored, happy crowd around, paparaci, wind, dynamic scene, cinema like
***Example 3:*** (extremely detailed CG unity 8k wallpaper), young swedish woman, soft lighting, detailed face, concept art, digital painting, looking into camera. photorealistic, photorealism, greg rutkowski, trending on artstation, upper waist photo by Annie Leibovitz, film, studio lighting, detailed skin, ultra realistic, bokeh, sharp features, unreal engine cinematic smooth, intricate detail
---------------------------------------------------------------
**Grapefruit (version 3)**
*https://civitai.com/models/2583/grapefruit-hentai-model*
***Tags:*** anime, nsfw, hentai
***Example 1:*** masterpiece, 1girl, solo, animal ears, long hair, beach, red eyes, black hair, nude, large breasts, tongue, from above, choker, paw pose, cum,
***Example 2:*** (masterpiece), best quality, detailed, looking at viewer, ((nude) robotic girl sitting:1.3), mechanical, (cyberpunk city in background), beret, orange eyes, silver long hair, sigma 135mm lens, cowboy shot, medium breasts, night, from above,
***Example 3:*** masterpiece, best quality, detailed, 1girl, blonde hair, braid, sweets, candies, chocolates, cozy, warm, bangs, (messy room:1.2), light pink eyes, books, medium breasts, witch, [[spread legs]], thighhighs, lying, topless, pussy,
---------------------------------------------------------------
***GuoFeng (version 2)***
*https://civitai.com/models/8470/guofeng2*
***tags:*** style, anime, character, girl, woman, cartoon, realistic, 3d, chinese,game character, chinese dress
***Example 1:*** best quality, masterpiece, highres, young girl, china dress,Beautiful face, earrings, hair ornament, upper body, orange eyes, long black hair, solo, light smile,
***Example 2:*** (Masterpiece), (Extremely detailed CG Unity 8k wallpaper), Best Quality, (Original Character Painting), ((cowboy shot)),(Solo), 1 Girl, (Medium Tits), (cleavage),((Brunetize)), Sweeping Bangs, (Extremely Delicate Beautiful), (Beautiful and Detailed Eye Description), (Beautiful and Detailed Facial Depiction), Standing, ((Embroidery)), ((Dao Robe)), Delicate Clothes Slipping Off Shoulders, Hair Accessories, Gemstone Necklaces, Delicate Faces, Look at the audience,
***Example 3:*** (best quality),((masterpiece)),(highres), original, (extremely detailed 8K wallpaper), overexposure,1girl,(medium breasts),(an extremely delicate and beautiful),(Beautiful and detailed eye description),(Beautiful and detailed facial depiction),(upper body),earrings,necklace,snow,snowflakes, bangs,Ice crystal,winter
**Openjourney (version 1)**
*https://civitai.com/models/86/openjourney-aka-midjourney-v4*
***Tags:*** style, midjourney
***Trigger Words:*** mdjrny-v4 style
***Example 1:*** OpenJourney 3 d goddess close - up profile portrait with ram skull. beautiful intricately detailed japanese crow kitsune mask and clasical japanese kimono. betta fish, jellyfish phoenix, bio luminescent, plasma, ice, water, wind, creature, artwork by tooth wu and wlop and beeple and greg rutkowski , mdjrny-v4 style
***Example 2:*** [[Barbara Palvin]], Alicia Vikander, Cyberpunk-rock, Flight Jacket, skimpy outfit, cool colorful dieselpunk, flower punk, atompunk, Ink Dropped in water, splatter drippings, frosted tips hair, lots of chains, spikes on a jacket, pulp Manga, cinematic lighting, in the style of Gediminas Pranckevicius, Moebius, (((PERFECT FACE))), ((PERFECT big BREAST)), (thick ass), highly detailed, (INTRICATE), (((detailed face))), ((detailed breast)), (detailed nipple), mdjrny-v4 style
***Example 3:*** 1984 big brother, cinematic, artstation, 8k, extremely detailed, dark color palette, detailed, hyperrealism, postprocessing, 8k, octane render, de-noise, blender render
---------------------------------------------------------------
**PFG (version 2)**
*https://civitai.com/models/1227/pfg*
***Tags:*** hental, porn, women
***Example 1:*** Nude girl!!! holding a cat, studio lighting!! trending on artstation 3d. 8k quality super realistic illustration by Wayne Barlowe and Gustave Dore lineart!!!!! of the character! full body shot!!!! hdr painting concept Art Zbrush Threu Bokowina popovnuk macro lens flare lights cute detailed photorealistic cinematic photography 35mm camera wide
***Example 2:*** threesome, nude, sweaty, big tits, facial, blond
***Example 3:*** (insanely detailed, bloom:1.5), ((solo)), (highest quality, Alessandro Casagrande, Greg Rutkowski, Sally Mann, concept art, 4k), (colourful), (high sharpness), ((detailed pupils)), red eyes, ((painting:1.1)), (digital painting:1.1), detailed face and eyes,Masterpiece, best quality, highly detailed photo:1, 8k, detailed face,photorealistic, (black Hair,ponytail hair cut, ecstatic:1.1),(18yo woman:1),By jeremy mann, by sandra chevrier, by maciej kuciara, ((Large Breast)), sharp, ((perfect body)), realistic, real shadow, 3d, ((black jacket)), black leather pants, (black sexy obsessive bra), ((full body)), ((cyberpunk night city background)), (by Michelangelo)
---------------------------------------------------------------
**Protogen (version x5.8)**
*https://civitai.com/models/3666/protogen-x34-photorealism-official-release*
***Trigger Words:*** modelshoot style, analog style, mdjrny-v4 style, nousr robot
***Example 1:*** modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, sexy, medieval era, photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski
***Example 2:*** actress Emilia Clarke naked, Daenerys Targaryen, HDR, super realistic, hyper realistic, hyper detailed, highly detailed, 4k, 8k, DSLR, photo, photo realistic, wide angle camera, slim girl, skinny girl, cute girl, a 15 year old girl, ((((teenage girl)))), ((small breast)), flat chest, ((tiny breasts)), ((small tits)), ((little boobs)), ((ass|pussy)), girl in center, asian girl, (((one girl))), (((solo girl))), pink hair, (((girl wearing black leather suit))), ((legs spread)), (((girl sitting on motorcycle | red kawasaki ninja))), ((((motorcycle with dragon (drawing|print|design))))), white garage environment, ((fingerless leather bicycle gloves)), ((golden snake bracelets on her arms)), ((((Glamor Shot)))), ((((Golden Hour)))), ((Color Grading)), ((Bokeh))
***Example 3:*** 64k uhd, sharp gamma, dan mumford colors, digital extremely detailed painting Artstation coral tentacles - By alexander jansson smeared watermelon pattern floating in the air thick liquid creative - 4k uhd, hyper detailed, ((steampunk)), lovecraft colors, epic composition, octane render, Metal Hellsinger style
***Example 4:*** full shot body photo of the most beautiful artwork in the world featuring bikini model, ((((small breasts))), ((((naked)))), ((((small boobs)))), smiling, freckles, sexy, High Detail, Sharp focus, dramatic, photorealistic, ultra sharp, ultra hd, hyper realistic, ultra realistic, no underwear, no bikini, no pants, no shorts, completely naked, wide open legs, showing pussy, (((holding her tits with both hands))), beautiful hands, beautiful fingers, fine fingers
---------------------------------------------------------------
**RealEldenApocalypse_AnalogSexKnoll_4CandyPureSimp+FEET (version 1)**
*https://civitai.com/models/1654/realeldenapocalypseanalogsexknoll4candypuresimpfeet*
***Tags:*** artistic, nude, science fiction, portraits, elden ring style, fantasy, photorealistic, nsfw, post apocalyptic, hassan, girl, woman, realistic, photography, knollingcase, sci fi, apocalypse
***Trigger Words:*** elden ring style, postapocalypse, knollingcase, analog style, bf
***Example 1:*** professional medium shot photo of skinny provoking nymphette with (curly pixie haircut) (ruby red hair) snub nose with detailed facial features and model eyes hanging out at meadow, soft shadows
***Example 2:*** elden ring style, bf, Professional Photo, ((Front Shot)), ((Full Body)), (Clothed), ((wearing skimpy fantasy Maid To Tease White and Black Lace Apron with Ruffle details attached elastic garders adjustable criss cross back straps and ribbon waist tie), (Young Female:1.2), ((DARK ELF)), (dark grey skin), (standing), [grim dark:cyberpunk:0.75], (in A magical kingdom where everything is perfect and everyone is happy), Legs slightly bent, Curvy Fit body type, Medium breasts, (Puffy Nipples), (pokies), Neutral Expression, Shaved Pubic Hair, Small labia, tight pussy, (Perfect Large Ass), Perfect face, detailed eyes, succubus, (horns on head), ((magical glowing tattoos)), (((blood and dirt on clothes and skin))), distressed, Supple Skin Pores, (Dark scarlet colored hair), wet, depth of field, cinematic lighting, photographed on a Canon EOS-1D X Mark III, 50mm Sigma, ISO 100, (highly detailed:1.2), photorealism, HDR 4k, cinematic film still from the Lord of The Rings, Masterpiece
***Example 3:*** a young woman sitting and spreading her legs, full_body_shot, closeup, nsfw, sweaty, pussy, nipples, cinematic, detailed face, realistic face, photo realistic, elden ring style, knollingcase, analog style, bf
***Example 4:*** professional photo of a nude woman lying on her back on a bed with legs spread, full body, medium breast, smiling, highly detailed, 8k resolution
---------------------------------------------------------------
**WoopWoop-Photo (version 1.2)**
*https://civitai.com/models/4041/woopwoop-photo*
***Tags:*** men, realistic, photography, women, photorealistic, nsfw, hentai, hardcore, porn, anatomical, gay, penis, anatomy, realism, semi-realistic, hyperrealism, vagina, homoerotic, homosexual, lesbian, lgbtqia+, lgbtq, lgbt, queer, genderqueer
***Example 1:*** (((photographic, photo, photogenic, rule of thirds, dramatic lighting))), ((sexy)), (detailed face, detailed nose) (((mature woman))) ((thickthick)) (((wearing tank top, spaghetti straps))), ((freckles)), long curly messy brown hair, ((collar or choker)), ((smirk)), ((tattoo))
***Example 2:*** (((photographic, photo, photogenic, rule of thirds, candle lighting))), ((beautiful)), (detailed face, detailed nose) (((mature woman))) ((brown skin)) ((thick)) (((wearing summer dress))), , medium curly messy brown,brunette hair, ((collar or choker)), ((smile)), ((tattoo))
***Example 3:*** (((photographic, photo, photogenic, rule of thirds, moody lighting))), ((face only)) ((beautiful)), (detailed face, detailed nose) (((mature woman))) on beach ((black skin)) ((thick)) (((wearing summer dress))), , short wavy brushed red,ginger hair, ((collar or choker)), ((smile)), ((tattoo))
---------------------------------------------------------------
***Project Photo Beta 2.0 LITE (version 2)***
*https://civitai.com/models/5160/project-photo-beta-20-lite*
***Tags:*** photography, photograph, photorealistic
***Trigger Words:*** (lightroom) red:34% blue:53% green:43% filmgrain_minimal, texture:+25%, clarity: +40%, Contrast:+4%, shadows: +11% , sharpen:70%
***Example 1:*** Portrait of teen boy with blue hair and with cute face, North Pole Snow Vibe, perfect composition, hyperrealistic, super detailed, 8k, high quality, trending art, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski (lightroom) red:68% blue:41% green:37% filmgrain_minimal, texture:+25%, clarity: +40%, Contrast:+15%, shadows: +11% , sharpen:100%
***Example 2:*** professional portrait photography, 85mm lens, gothic woman with red hair, centered, scenic background, perfect composition, golden ratio, hyperrealistic, photorealism, super detailed, 32k, high quality, trending on artstation, sharp focus, studio lighting, intricate details, hyperdetailed photography by greg rutkowski, dino tomic, (lightroom) red:68% blue:41% green:37% filmgrain_minimal, texture:+25%, clarity: +40%, Contrast:+15%, shadows: +11% , sharpen:100%
***Example 3:*** portrait 1girl, arms_behind_back, breasts, dress, hair_over_one_eye, jewelry, lips, medium_breasts, navel, necklace, pink_hair, realistic, short_hair, solo, (SEMI-SILHOUETTE light:1.1), (raytracing:1.1), (cryengine:1.1), (skin detail:1.1),(photrealistic:1.1)
---------------------------------------------------------------
***Project Unreal Engine 5 (version 2)***
*https://civitai.com/models/4752/project-unreal-engine-5*
***Tags:*** portraits, 3d, ultra realistic, real person
***Example 1:*** (1 girl) < intricate stunning highly detailed girl by artgerm and edouard bisson, pale eyes, long blonde hair, portrait, soft studio lighting, ultra realistic gold filigree detailed bodice, photorealistic, octane render, unreal engine, hyper detailed, volumetric lighting, hdr, octane render, 4k, 8K (skin defect: very few) (freckled face: very few) (Birthmark: 0,3) (greasy hair: 0,2) (clothes wrinkling: 0,5) (body scrub: 0,4) (perfect eyes: 1,0) (eyes size: 1,0) (lipsticked mouth: 1,5) (boobs size big) (age 25) (long hair minimum) (make up medium) (face skinny) (realistic fingers) (little nose) (NO TEXT) (attentive facial expression) (left and right hands five fingers)
***Example 2:*** portrait pale pink haired goddess, wearing byzantine gown | fantasy, hyper-detailed, accurate anatomy, symmetrical facial features, sharp focus, volumetric lighting, 16k | karol bak, yoshitaka amano, tom bagshaw, aurora, zbrush cel-shaded, cgsociety | ethereal beautiful astral vaporwave storybook illustration, dark fantasy
***Example 3:*** masterpiece portrait of Rei Ayanami \(evangelion\), evangelion \(Hideaki\), caustics, textile shading, high resolution illustration, red eyes, feminine, no pupils, blue hair, short hair, japanese school uniform, loafers, detailed school, japanese school hallway, japanese modern school in Tokyo, soft light, black stockings, torn stockings, indoors, wooden floor, hallway, at night, neon lights
---------------------------------------------------------------
***Openjourney (version 1)***
*https://civitai.com/models/86/openjourney-aka-midjourney-v4*
***tags:*** style, midjourney
***Example 1:*** [[Barbara Palvin]], Alicia Vikander, Cyberpunk-rock, Flight Jacket, skimpy outfit, cool colorful dieselpunk, flower punk, atompunk, Ink Dropped in water, splatter drippings, frosted tips hair, lots of chains, spikes on a jacket, pulp Manga, cinematic lighting, in the style of Gediminas Pranckevicius, Moebius, (((PERFECT FACE))), ((PERFECT big BREAST)), (thick ass), highly detailed, (INTRICATE), (((detailed face))), ((detailed breast)), (detailed nipple), mdjrny-v4 style
***Example 2:*** OpenJourney 3 d goddess close - up profile portrait with ram skull. beautiful intricately detailed japanese crow kitsune mask and clasical japanese kimono. betta fish, jellyfish phoenix, bio luminescent, plasma, ice, water, wind, creature, artwork by tooth wu and wlop and beeple and greg rutkowski , mdjrny-v4 style
***Example 3:*** mdjrny-v4 style of an oil painting of a flower (dragon skull:1.1) as vase on a table with a white cloth on it and a white tablecloth, (flying skull moths:1.1), impressionist painting, vivid, painting by (Leonid Afremov:1.2), Patrice Murciano
---------------------------------------------------------------
***Realistic Vision (version 1.3)***
*https://civitai.com/models/4201/realistic-vision-v13*
***tags:*** character, realistic, photorealistic, nsfw, anatomical, semi-realistic, cgi
***Trigger Words:*** analog style, modelshoot style, nsfw, nudity
***Example 1:*** girl, (pale skin:0.1), techwear, city, (detailed skin:1.4), realistic, film grain, natural light
***Example 2:*** RAW photo, a wide shot photo of 21 y.o woman in swimsuit clothes, long haircut, pale skin, slim body, ((full body)), background is grassy meadow, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
***Example 3:*** RAW photo, a close up portrait photo of Natasha Romanoff in string bikini clothes, redhair,long hair, pale skin, background is new york, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
---------------------------------------------------------------
***Kenshi (version 01)***
*https://civitai.com/models/3850/kenshi*
***Tags:*** anime, semi-realistic, bochen, nixeu, guweiz, wlop
***Example 1:*** (midriff), middle_finger, (makima:1.2) close up, (makima_eye:1.2), (glowing_eye:1.2), scary, detailed_eyes, ((face_focus, zoomed_face, zoomed_in, bokeh, underwater, water, wet_surface,)) ((face_portrait:1.2)), (duality_style:0.3), (line_style:1), (minimal_gradient:0.7), (nixeu_basic2:0.7), (nixeu_extra:0.7),(nixeu_soft:0.7),(nixeu_white:0.7), (dark_fantasy:1), (flame_surge_style:1), ((sitting on a throne)), bloody, evil, dark,moody, spooky background, villian, colorful, beautiful, braided, (chains), somber expression, looking down, dark energy, colorful, vibrant colors, portal to another world, red nail polish, side view, ultra realistic, intricate details, elegant, hyper realistic, tonemapping, hyperfocus, sharp focus, hyperdetailed,intricated detail, shiny, realism, [colorful], [volumetric lighting],photorealistic realistic, luxurious, close_up, 8k, detailed, unreal engine 5, ray tracing, 8k, cinematic, depth of field, octane render,realistic lighting, cinematic lighting, small gold particles, best_quality, big smile, oversized jacket, good_anatomy, highly detailed, fit, ultra realistic, highres, superb, 8k wallpaper, extremely detailed, intricate, limited palette, ,smile, (freckles:0.5), small details, ultra detailed, close fists, c_cup, confused, fragance rose, red, black, white, gold, Decapitation of the damned, 3d,3ds, unreal engine 5, volumetric lighting, realistic, realistic lighting, cinematic, 4k, cinematic lighting, 8k, depth of field 3d, 3ds, masterpiece, perfect, award-winning,hyper-detailed, photorealistic, ultra realistic, realistic light, unity, hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp focus, hyper detailed, scary, zoom out
***Example 2:*** masterpiece, best quality, ultra-detailed, illustration, random, island, tropical, clear skies, blue water, sandy beaches, palm trees, exotic flowers, lush vegetation, diverse wildlife, seabirds, boats, ships, waterfalls, canyons, cliffs, caves, ancient ruins, detailed background, a mix of different elements, surreal, dreamlike, abstract, a blend of different landscapes, cultures, architecture, nature, elements of fantasy, science fiction, mystery, depth, dimension, light and shadows,
***Example 3:*** ((female:1.2)), 💥☠️🔮✨, beatiful young woman, human, v-shaped chin, (perfectly symmetrical face), ((villanous facial expression)), ((cyberpunk:1.0, retowave:0.8 colorful:1.2 outfit)), blurred environment background, neon energy halo in her back, (perfectly shaped eyes:0.8), dark black hair, tied hair, pale skin, portrait, digital art, concept art, post processed, dynamic lighting, (painted by bochen and wlop, stylized by nixeu and greg rutkowski), trend on pixiv, perfect composition, cinematic, moody, rule of thirds, majestic, detailed, sharp details, sharp focus, perfect anatomy, shiny, masterpiece, award-winning photography, fine-tuning face, masterpiece
***Example 4:*** sam yang, 1girl, (jellyfish hair:1.5), (peach hair:1.1), (flattop:1.4), hair clip, covered nipples, puffy nipples, raglan top, jeans, detailed_eyes, spoken_heart, arms behind back, large breasts, <lora:samdoesartsSamYang_normal:0.95>
***Example 5:*** (male:1.2), adult face, symmetrical face, sharp eyes, orange eyes, long yellow orange hair, man with unique power, dream power, (wearing a blue cloak), (glowing_eye: 1.1), alone, energy around him (anime_style:1.1), (semi-style:1.0), (pixel-style:0.2), (detailed) (Face_focus:1.2), Close up shot, upper body shot, posing, looking forward,
---------------------------------------------------------------
***ChilloutMix (version fp32)***
*https://civitai.com/models/6424/chilloutmix*
***Example 1:*** parameters best quality, ultra high res, (photorealistic:1.35),(Korean:1.1) ,ultra-detailed,incredibly detailed,(an extremely delicate and beautiful),detailed cg 8k wallpaper,(nsfw:1.4641),POV, (half naked hanfu:1.8), (realistic humid skin:1.2),(solo:1.4), (1girl:1.1),(hanfugirl:1.6),(open clothes:1.4), (off shoulder:1.1), (looking at viewer:1.331), (large breasts:1.71),(clear fingers:1.5), (shiny skin:1.41), armlet, bangle, anklet, black hair, blunt bangs, parted bangs, high ponytail, hair rings, half updo, braided bun, (widow's peak:1.21), hair ornament, earrings,(Standing in the water:1.331), (parted lips:1.1), (eyelashes:1.1), (happy:1.6), (depth of field:1.1), lens flare, (chromatic aberration:1.1), (caustics:1.1), in summer, (water:1.331), branch, (beautiful detailed sky:1.331), (flower on liquid:1.331),white clothes,Mouth slightly open, beautiful detailed eyes,(scattered luminous petals:1.331), (style-keta:0.78), (qrx:0.51),gbf
***Example 2:*** (head to toe:1.4), a fantasy blonde princess in lingerie, doggystyle, legs, thighs, white skin, slender, 18 years old, looking at viewer, 1girl, princess, hair ornament, jewelry, necklace, bracelet, cleavage, gold bra, gold panties, gold thighhighs, lot of jewelry, inside a castle background, erotic pose, candles, navel, midriff, red curtains, beautiful, round face,
***Example 3:*** (masterpiece:1.0), (best quality:1.4), (ultra highres:1.2), (photorealistic:1.4), (8k, RAW photo:1.2), (soft focus:1.4), 1 young girl, (18yo:1.3), (sharp focus:1.4), (Japanese:0.7), (russian:1.1), detailed beautiful face, black hair, (detailed maid crothes:1.4), (lace choker:1.2), beautiful white shiny humid skin
***Example 4:*** (masterpiece:1.0), (best quality:1.4), (ultra highres:1.2), (delicate illustration:1.4), (renaissance art:1.4), (8k, RAW photo:1.2), (soft focus:1.4), 1 young girl, (18yo:1.3), (sharp focus:1.4), (Japanese:1.0), (korean:0.7), detailed beautiful face, black hair, (detailed maid crothes:1.4), (lace choker:1.2), beautiful white shiny humid skin
***Example 5:*** 4k, high-res, masterpiece, best quality, ((Hasselblad photography)), (Korean K-pop idol), finely detailed skin, ((pale white skin)), sharp focus, (cinematic lighting), collarbone, (overcast tone), overcast whitebalance, morning, soft lighting, narrow waist, dynamic angle, [:(detailed face:1.2):0.2], (PureErosFace_V1), armpit crease, lewd pose, natural breasts, snowy white skin, winter clothings, groin, thigh gap, slender, ((highleg bikini)), scarf, beret, thongs, ((sagging breasts))
***Example 6:*** Perfect full body photo of a 16yo cute girl,(Elf) fairy,cute hairstyle,(Sexy wet (Epic fantasy gorgeous dress) translucent beautyfull intricacy clothing decorative pattern details multicolor gown),cute delicate face,symmetrical leg,large breasts,sex happy,hairy wet pussy cum dildo,pale skin pores,hoop earrings
***Example 7:*** european girl, best quality, ultra high res, (photorealistic:1.4), autumn, street, stilettos, long grey coat, stockings, panties, perfect body, small breasts, nipples, (blond short hair:1), ((puffy eyes)), happy, full body
---------------------------------------------------------------
***Uber Realistic Porn Merge (URPM) (version 1.2)***
*https://civitai.com/models/2661/uber-realistic-porn-merge-urpm*
***Tags:*** portraits, character, girl, woman, realistic, photography, person, women, fantasy, photorealistic, merge, nsfw, sexy, blend, sex, hardcore, porn, nude, pussy, lewd
***Example 1:*** wide angle pussy and ass, (woman) porn, tight (asshole), natural boobs, big tits
***Example 2:*** a hot frightened helpless, screaming young woman riding dick of a creepy monster, (((penis penetrating asshole))), (focus on asshole), (detailed dandruff penis), fucked hard, ((detailed facial features)), very detailed face , wide-angle, (full body), digital art, high contrast dynamic lighting, horror fantasy, intricate detail, sharp focus, masterpiece, anatomical details, full body shot, 8k , ultra wide angle
***Example 3:*** 20 year old k-idol, 1 girl, 1 man, boyfriend, (sharp focus:1.4), (smile:1.1), (realistic humid skin:1.4), (beautiful face:1.1), detailed eyes, detailed face, (small breasts:1), (curvy body:0.8), (long black ponytail hair:1.2), bangs, black eyes, depth of field, nude, naked, best quality, ultra high res, (photorealistic:1.4), (aegyo sal:1), ((puffy eyes)), full body, ((legs spread on cock)), ((super wet skin)), (moaning), horny, pussy, ((Sexual intercourse)), ((sex)), ((fucked by man)), ((POV from below)), ((Sexual penetration)), ((vast cum on woman's legs)), ((vast cum on woman's pussy)), ((5 fingers)), hetero, ((1girl above 1man)), ((1man below 1girl)), (((cowgirl position))), (straddling), luxury hotel, ((suite room)), bed, side lighting, high contrast
***Example 4:*** 20 year old k-idol, 1 girl, 1 man, boyfriend, (sharp focus:1.4), (smile:1.1), (realistic humid skin:1.4), (beautiful face:1.1), detailed eyes, detailed face, (small breasts:1), (curvy body:0.8), (long black ponytail hair:1.2), bangs, black eyes, depth of field, nude, naked, best quality, ultra high res, (photorealistic:1.4), (aegyo sal:1), ((puffy eyes)), full body, ((legs spread on cock)), ((super wet skin)), (moaning), horny, pussy, ((Sexual intercourse)), ((sex)), ((fucked by man)), ((POV from below)), ((Sexual penetration)), ((vast cum on woman's legs)), ((vast cum on woman's pussy)), ((5 fingers)), hetero, ((1girl above 1man)), ((1man below 1girl)), (((cowgirl position))), (straddling), luxury hotel, ((suite room)), bed, side lighting, high contrast, sexy lingeries
***Example 5:*** (iphone shot), (uncovered Nipples:1.4), (perfect face), (pretty face), ((indonesian hijab)), (white_skin), (style-glass:1.1)), indonesian girl with hijab showing wet pussy to camera, looking to camera, no bra, no panties, nipples through material, shadowed eyes, Intricate, High Detail, Sharp focus, porn, jakarta, monas, thamrin, bundaran_HI, transjakarta, stasiun, gojek
***Example 6:*** ((best quality)), ((ultra res)), ((photorealistic:1.4)), (intricate details), 19 years old, blonde hair, perfect face, make up:1.5, light on face, face detail,
***Example 7:*** RAW photo, ((chromatic aberration)), ((caustic)), ((detailed face)),nude woman posing for a picture in front of a window with her hand up, smiling, hairy pussy, trending on ArtStation Pixiv, high detail, sharp focus, smooth,aesthetic ,8k uhd, dslr, soft lighting, high quality, film grain
***Example 8:*** A blonde punk woman with stitches on her face stands in a dark urban setting, holding a liquor bottle. She is dressed in tattered punk clothes and has a cheerful expression. The street lights highlight her unique appearance in a medium shot. The image conveys a sense of individuality, rebellion, and carefree joy. body blend by the light.
---------------------------------------------------------------
---------------------------------------------------------------
|
62d8307d065854156e72886910d57c28
|
timm/levit_conv_128s.fb_dist_in1k
|
timm
| null | 4 | 16 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 4,841 | false |
# Model card for levit_conv_128s.fb_dist_in1k
A LeViT image classification model using convolutional mode (with nn.Conv2d and nn.BatchNorm2d), as indicated by the `levit_conv` prefix. Pretrained on ImageNet-1k using distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 7.8
- GMACs: 0.3
- Activations (M): 1.9
- Image size: 224 x 224
- **Papers:**
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136
- **Original:** https://github.com/facebookresearch/LeViT
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('levit_conv_128s.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'levit_conv_128s.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'levit_conv_128s.fb_dist_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for levit_conv_256:
# torch.Size([2, 256, 14, 14])
# torch.Size([2, 384, 7, 7])
# torch.Size([2, 512, 4, 4])
print(o.shape)
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 |
|levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 |
|levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 |
|levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 |
|levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 |
|levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 |
## Citation
```bibtex
@InProceedings{Graham_2021_ICCV,
author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs},
title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {12259-12269}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
013c244fce2cd835e068959be170e67c
|
ubikpt/t5-small-finetuned-cnn
|
ubikpt
|
t5
| 23 | 4 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,982 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8436
- Rouge1: 33.2082
- Rouge2: 16.798
- Rougel: 28.9573
- Rougelsum: 31.1044
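This auto-generated card does not include a usage snippet; below is a minimal sketch of loading the checkpoint with the `transformers` summarization pipeline (the article text and generation lengths are illustrative assumptions, not from the training setup):
```python
from transformers import pipeline
# load the fine-tuned checkpoint named in this card
summarizer = pipeline("summarization", model="ubikpt/t5-small-finetuned-cnn")
article = "..."  # replace with a real news article
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```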
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.3793 | 1.0 | 359 | 1.8885 | 33.0321 | 16.7798 | 28.9367 | 30.9509 |
| 2.1432 | 2.0 | 718 | 1.8481 | 33.1559 | 16.8557 | 29.015 | 31.1122 |
| 2.0571 | 3.0 | 1077 | 1.8391 | 32.99 | 16.716 | 28.8118 | 30.9178 |
| 2.0001 | 4.0 | 1436 | 1.8357 | 33.0543 | 16.6731 | 28.8375 | 30.9604 |
| 1.9609 | 5.0 | 1795 | 1.8437 | 33.1019 | 16.7576 | 28.8669 | 31.001 |
| 1.925 | 6.0 | 2154 | 1.8402 | 33.1388 | 16.7539 | 28.8887 | 31.0262 |
| 1.9036 | 7.0 | 2513 | 1.8423 | 33.1825 | 16.759 | 28.9154 | 31.0656 |
| 1.8821 | 8.0 | 2872 | 1.8436 | 33.2082 | 16.798 | 28.9573 | 31.1044 |
### Framework versions
- Transformers 4.14.0
- Pytorch 1.5.0
- Datasets 2.3.2
- Tokenizers 0.10.3
|
a2d57c59b48e107cc88e93510267a93d
|
Shang37/distilgpt_edgel1
|
Shang37
|
gpt2
| 23 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 900 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt_edgel1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
b39e73203b1d52c74a406a7233a7535a
|
ParanoidAndroid/bert-finetuned-squad
|
ParanoidAndroid
|
bert
| 12 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 953 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
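Given the question-answering pipeline tag, here is a minimal usage sketch with `transformers` (the question/context pair is purely illustrative):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="ParanoidAndroid/bert-finetuned-squad")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and most populous city of France.",
)
print(result["answer"], result["score"])
```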
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
7cd7e97b26ea14aebf9f8d41e0014150
|
ali2066/correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
|
ali2066
|
distilbert
| 13 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,814 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Precision: 0.0094
- Recall: 0.0147
- F1: 0.0115
- Accuracy: 0.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6319 | 0.08 | 0.0312 | 0.0449 | 0.6753 |
| No log | 2.0 | 20 | 0.6265 | 0.0364 | 0.0312 | 0.0336 | 0.6764 |
| No log | 3.0 | 30 | 0.6216 | 0.0351 | 0.0312 | 0.0331 | 0.6762 |
| No log | 4.0 | 40 | 0.6193 | 0.0274 | 0.0312 | 0.0292 | 0.6759 |
| No log | 5.0 | 50 | 0.6183 | 0.0222 | 0.0312 | 0.0260 | 0.6773 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ec50640638517ad80bbd38fc5334bcc1
|
Duskfallcrew/finalfantasiespt1
|
Duskfallcrew
| null | 22 | 9 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 929 | false |
### Final Fantasy XIV Part One Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
fntsy1 (use that token in your prompt)
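Besides the Colab notebook linked above, the concept can presumably be loaded directly with `diffusers`; a minimal sketch (device, dtype and the prompt are illustrative; only the repo id and trigger token come from this card):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/finalfantasiespt1", torch_dtype=torch.float16
).to("cuda")
# include the trigger token from this card in the prompt
image = pipe("fntsy1 style portrait of an adventurer, detailed, soft lighting").images[0]
image.save("finalfantasies_sample.png")
```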
|
b92eaf910f4571a7d82aa535ce548ae1
|
xmzhu/whisper-small-zh
|
xmzhu
|
whisper
| 23 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['zh']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,568 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Chinese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 zh-CN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3946
- Wer: 72.3626
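A minimal inference sketch using the `transformers` automatic-speech-recognition pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="xmzhu/whisper-small-zh")
# transcribe a local audio file (placeholder path); long recordings can be chunked
print(asr("sample.wav", chunk_length_s=30)["text"])
```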
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5179 | 2.02 | 1000 | 0.3333 | 72.9831 |
| 0.1273 | 4.04 | 2000 | 0.3562 | 73.9621 |
| 0.0163 | 6.06 | 3000 | 0.3790 | 73.9708 |
| 0.004 | 8.07 | 4000 | 0.3946 | 72.3626 |
| 0.025 | 11.0 | 5000 | 0.4019 | 72.6772 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
455b5823a747cb3fff15c9f38890f909
|
Geotrend/distilbert-base-en-sw-cased
|
Geotrend
|
distilbert
| 6 | 6 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['multilingual']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,224 | false |
# distilbert-base-en-sw-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-sw-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
1a3260b948bc4cb94a35f17c6c18d0b5
|
phd411r1/HooshvareLab_bert-fa-base-uncased_finetune-on-hoshfa
|
phd411r1
|
bert
| 10 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,298 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fa-base-uncased-finetune_on_hoshfa
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3643 | 1.0 | 1604 | 2.1323 |
| 1.5142 | 2.0 | 3208 | 2.1392 |
| 0.8834 | 3.0 | 4812 | 2.5274 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
d30509f93c92c8c9069f80335b891932
|
cwinkler/distilbert-base-uncased-finetuned-greenpatent
|
cwinkler
|
distilbert
| 10 | 23 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['cwinkler/green_patents']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,949 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Classification of patent title - "green" or "no green"
This model classifies patents into "green patents" or "no green patents" by their titles.
### Examples of "green patents" titles:
- "A method for recycling waste" - score: 0.714
- "A method of reducing pollution" - score: 0.786
- "An apparatus to improve environmental aspects" - score: 0.570
- "A method to improve waste management" - score: 0.813
- "A device to use renewable energy sources" - score: 0.98
- "A technology for efficient electrical power generation"- score: 0.975
- "A method for the production of fuel of non-fossil origin" - score: 0.975
- "Biofuels from waste" - score: 0.88
- "A combustion technology with mitigation potential" - score: 0.947
- "A device to capture greenhouse gases" - score: 0.871
- "A method to reduce the greenhouse effect" - score: 0.887
- "A device to improve the climate" - score: 0.650
- "A device to stop climate change" - score: 0.55
### Examples of "no green patents" titles:
- "A device to destroy the nature" - score: 0.19
- "A method to produce smoke" - score: 0.386
### Examples of the model's limitations
- "A method to avoid trash" - score: 0.165
- "A method to reduce trash" - score: 0.333
- "A method to burn the Amazonas" - score: 0.501
- "A method to burn wood" - score: 0.408
- "Green plastics" - score: 0.126
- "Greta Thunberg" - score: 0.313 (How dare you, model?); BUT: "A method of using Greta Thunberg to stop climate change" - score: 0.715
Examples were inspired by https://www.epo.org/news-events/in-focus/classification/classification.html
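The scores above were presumably produced with a standard `transformers` text-classification pipeline; a minimal sketch of querying the model the same way (the label names depend on the model config and are not documented in this card):
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="cwinkler/distilbert-base-uncased-finetuned-greenpatent",
)
print(classifier("A device to use renewable energy sources"))
# -> [{'label': ..., 'score': ...}]; the score of the "green" label is what the examples above quote
```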
# distilbert-base-uncased-finetuned-greenpatent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [green patent dataset](https://huggingface.co/datasets/cwinkler/green_patents). The green patent dataset was split into 70 % training data and 30 % test data (using ".train_test_split(test_size=0.3)").
The model achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8776
- F1: 0.8770
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4342 | 1.0 | 101 | 0.3256 | 0.8721 | 0.8712 |
| 0.3229 | 2.0 | 202 | 0.3148 | 0.8776 | 0.8770 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.8.0
- Tokenizers 0.13.2
|
263c25c6a7e3291b7687731ed0e0a4c5
|
Eyvaz/wav2vec2-base-russian-demo-kaggle
|
Eyvaz
|
wav2vec2
| 22 | 7 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,047 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-russian-demo-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0102 | 1.03 | 500 | inf | 0.9997 |
| 0.0068 | 2.06 | 1000 | inf | 0.9997 |
| 0.0 | 3.09 | 1500 | inf | 0.9997 |
| 0.0313 | 4.12 | 2000 | inf | 0.9997 |
| 0.0 | 5.15 | 2500 | inf | 0.9997 |
| 0.0052 | 6.19 | 3000 | inf | 0.9997 |
| 0.0287 | 7.22 | 3500 | inf | 0.9997 |
| 0.0 | 8.25 | 4000 | inf | 0.9997 |
| 0.01 | 9.28 | 4500 | inf | 0.9997 |
| 0.0 | 10.31 | 5000 | inf | 0.9997 |
| 0.3919 | 11.34 | 5500 | inf | 0.9997 |
| 0.0 | 12.37 | 6000 | inf | 0.9997 |
| 0.0 | 13.4 | 6500 | inf | 0.9997 |
| 0.0 | 14.43 | 7000 | inf | 0.9997 |
| 0.6422 | 15.46 | 7500 | inf | 0.9997 |
| 0.0 | 16.49 | 8000 | inf | 0.9997 |
| 0.0 | 17.53 | 8500 | inf | 0.9997 |
| 0.0 | 18.56 | 9000 | inf | 0.9997 |
| 0.0 | 19.59 | 9500 | inf | 0.9997 |
| 0.0 | 20.62 | 10000 | inf | 0.9997 |
| 0.0427 | 21.65 | 10500 | inf | 0.9997 |
| 0.0 | 22.68 | 11000 | inf | 0.9997 |
| 0.0 | 23.71 | 11500 | inf | 0.9997 |
| 0.0 | 24.74 | 12000 | inf | 0.9997 |
| 0.0091 | 25.77 | 12500 | inf | 0.9997 |
| 0.1243 | 26.8 | 13000 | inf | 0.9997 |
| 0.0 | 27.83 | 13500 | inf | 0.9997 |
| 0.0 | 28.87 | 14000 | inf | 0.9997 |
| 0.0 | 29.9 | 14500 | inf | 0.9997 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
d0da61afe4c2d6f1862c54fee3de364d
|
Fredvv/bert-finetuned-pos
|
Fredvv
|
bert
| 12 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['generated_from_trainer']
| true | true | true | 1,512 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-pos
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0580
- Precision: 0.9348
- Recall: 0.9502
- F1: 0.9424
- Accuracy: 0.9868
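No usage example is included, so here is a minimal sketch with the `transformers` token-classification pipeline (the sentence is illustrative; the tag set is whatever the conll2003 fine-tune produced):
```python
from transformers import pipeline
tagger = pipeline(
    "token-classification",
    model="Fredvv/bert-finetuned-pos",
    aggregation_strategy="simple",  # merge sub-word pieces into whole tagged spans
)
print(tagger("Hugging Face is based in New York City."))
```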
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0875 | 1.0 | 1756 | 0.0680 | 0.9158 | 0.9352 | 0.9254 | 0.9826 |
| 0.0321 | 2.0 | 3512 | 0.0611 | 0.9289 | 0.9448 | 0.9368 | 0.9856 |
| 0.0222 | 3.0 | 5268 | 0.0580 | 0.9348 | 0.9502 | 0.9424 | 0.9868 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
42eed2e440aedcbd7d197f85da204b8c
|
coreml/coreml-Healys-Anime-Blend
|
coreml
| null | 3 | 0 | null | 2 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coreml', 'stable-diffusion', 'text-to-image']
| false | true | true | 1,200 | false |
# Core ML Converted Model:
- This model was converted to Core ML for use on Apple Silicon devices. Instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-files-to-Core-ML).<br>
- Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
# Note: This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# Healy's Anime Blend V1.7:
Source(s): [CivitAI](https://civitai.com/models/1400/healys-anime-blend)
This is a blend of some anime models mixed with "realistic" stuff to get a look I've been trying to accomplish for a while. I'm pretty happy with what it outputs, but judge that for yourself. I can't for the life of me remember what I put into this model.
I take no credit whatsoever, I just smashed rocks together like a caveman and the outcome somehow worked.
It can create NSFW stuff too, I think, but I've noticed the outcomes remain pretty tolerable with "cleavage" in the negative prompts.
|
3647c5aec9fce226f101aa905dfa9159
|
Helsinki-NLP/opus-mt-fiu-fiu
|
Helsinki-NLP
|
marian
| 11 | 12 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['se', 'fi', 'hu', 'et', 'fiu']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 3,612 | false |
### fiu-fiu
* source group: Finno-Ugrian languages
* target group: Finno-Ugrian languages
* OPUS readme: [fiu-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.eval.txt)
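A minimal sketch of running the model with the `transformers` Marian classes; the Estonian source sentence and the `>>fin<<` (Finnish) target token are purely illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-fiu-fiu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# the sentence-initial >>id<< token selects the target language (here Finnish)
src_texts = [">>fin<< Ma räägin natuke eesti keelt."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```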
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est-est.est.est | 2.0 | 0.252 |
| Tatoeba-test.est-fin.est.fin | 51.0 | 0.704 |
| Tatoeba-test.est-fkv.est.fkv | 1.1 | 0.211 |
| Tatoeba-test.est-vep.est.vep | 3.1 | 0.272 |
| Tatoeba-test.fin-est.fin.est | 55.2 | 0.722 |
| Tatoeba-test.fin-fkv.fin.fkv | 1.6 | 0.207 |
| Tatoeba-test.fin-hun.fin.hun | 42.4 | 0.663 |
| Tatoeba-test.fin-izh.fin.izh | 12.9 | 0.509 |
| Tatoeba-test.fin-krl.fin.krl | 4.6 | 0.292 |
| Tatoeba-test.fkv-est.fkv.est | 2.4 | 0.148 |
| Tatoeba-test.fkv-fin.fkv.fin | 15.1 | 0.427 |
| Tatoeba-test.fkv-liv.fkv.liv | 1.2 | 0.261 |
| Tatoeba-test.fkv-vep.fkv.vep | 1.2 | 0.233 |
| Tatoeba-test.hun-fin.hun.fin | 47.8 | 0.681 |
| Tatoeba-test.izh-fin.izh.fin | 24.0 | 0.615 |
| Tatoeba-test.izh-krl.izh.krl | 1.8 | 0.114 |
| Tatoeba-test.krl-fin.krl.fin | 13.6 | 0.407 |
| Tatoeba-test.krl-izh.krl.izh | 2.7 | 0.096 |
| Tatoeba-test.liv-fkv.liv.fkv | 1.2 | 0.164 |
| Tatoeba-test.liv-vep.liv.vep | 3.4 | 0.181 |
| Tatoeba-test.multi.multi | 36.7 | 0.581 |
| Tatoeba-test.vep-est.vep.est | 3.4 | 0.251 |
| Tatoeba-test.vep-fkv.vep.fkv | 1.2 | 0.215 |
| Tatoeba-test.vep-liv.vep.liv | 3.4 | 0.179 |
### System Info:
- hf_name: fiu-fiu
- source_languages: fiu
- target_languages: fiu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'fiu']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt
- src_alpha3: fiu
- tgt_alpha3: fiu
- short_pair: fiu-fiu
- chrF2_score: 0.581
- bleu: 36.7
- brevity_penalty: 0.981
- ref_len: 19444.0
- src_name: Finno-Ugrian languages
- tgt_name: Finno-Ugrian languages
- train_date: 2020-07-26
- src_alpha2: fiu
- tgt_alpha2: fiu
- prefer_old: False
- long_pair: fiu-fiu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
696d580822b77b40847fefee64a2ffed
|
jonatasgrosman/exp_w2v2t_nl_xls-r_s831
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['nl']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'nl']
| false | true | true | 453 | false |
# exp_w2v2t_nl_xls-r_s831
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
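A minimal usage sketch with the HuggingSound tool mentioned above (the audio paths are placeholders; input should be sampled at 16kHz as noted):
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_nl_xls-r_s831")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```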
|
8c2308d9eba7325fb2f606936e505de8
|
versae/roberta-base-bne-finetuned-recores2
|
versae
|
roberta
| 13 | 0 |
transformers
| 0 |
multiple-choice
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,754 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-recores2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9761
- Accuracy: 0.3113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6094 | 1.0 | 1047 | 1.6094 | 0.2259 |
| 1.6094 | 2.0 | 2094 | 1.6094 | 0.2121 |
| 1.6094 | 3.0 | 3141 | 1.6094 | 0.2314 |
| 1.6094 | 4.0 | 4188 | 1.6094 | 0.1956 |
| 1.6094 | 5.0 | 5235 | 1.6094 | 0.2121 |
| 1.6121 | 6.0 | 6282 | 1.6094 | 0.1818 |
| 1.6094 | 7.0 | 7329 | 1.6094 | 0.2259 |
| 1.6092 | 8.0 | 8376 | 1.6094 | 0.1736 |
| 1.6094 | 9.0 | 9423 | 1.6094 | 0.1956 |
| 1.6094 | 10.0 | 10470 | 1.6094 | 0.1736 |
| 1.6094 | 11.0 | 11517 | 1.6094 | 0.1983 |
| 1.6094 | 12.0 | 12564 | 1.6094 | 0.2176 |
| 1.6094 | 13.0 | 13611 | 1.6094 | 0.1928 |
| 1.6096 | 14.0 | 14658 | 1.6094 | 0.1846 |
| 1.6145 | 15.0 | 15705 | 1.6094 | 0.2066 |
| 1.6094 | 16.0 | 16752 | 1.6022 | 0.2121 |
| 1.8471 | 17.0 | 17799 | 1.6101 | 0.1763 |
| 2.8148 | 18.0 | 18846 | 2.7585 | 0.2452 |
| 2.5445 | 19.0 | 19893 | 2.4576 | 0.2920 |
| 1.9972 | 20.0 | 20940 | 3.6002 | 0.2865 |
| 1.9844 | 21.0 | 21987 | 5.3809 | 0.3168 |
| 2.849 | 22.0 | 23034 | 7.2230 | 0.3140 |
| 1.4208 | 23.0 | 24081 | 8.0602 | 0.2975 |
| 0.4045 | 24.0 | 25128 | 8.2947 | 0.3058 |
| 0.3052 | 25.0 | 26175 | 8.9761 | 0.3113 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
cb29241b9f2184ac665c2311ab3f58bc
|
zack-paperspace/roberta-base-finetuned-cola
|
zack-paperspace
|
roberta
| 7 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,686 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5732
- Matthews Correlation: 0.6495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5211 | 1.0 | 534 | 0.4031 | 0.5599 |
| 0.3739 | 2.0 | 1068 | 0.4688 | 0.5713 |
| 0.0697 | 3.0 | 1602 | 0.4988 | 0.6070 |
| 0.0712 | 4.0 | 2136 | 0.5596 | 0.6221 |
| 0.0955 | 5.0 | 2670 | 0.5732 | 0.6495 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.8.0
- Tokenizers 0.12.1
|
215563f46ca0efa3ae779490a18ef1a8
|
sudo-s/modeversion2_m7_e8
|
sudo-s
|
vit
| 14 | 11 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'generated_from_trainer']
| true | true | true | 9,786 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modeversion2_m7_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem7 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1060
- Accuracy: 0.9761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0231 | 0.06 | 100 | 3.8568 | 0.1883 |
| 3.3863 | 0.12 | 200 | 3.2510 | 0.2596 |
| 2.6187 | 0.18 | 300 | 2.6243 | 0.3882 |
| 2.3097 | 0.23 | 400 | 2.2189 | 0.4527 |
| 1.9016 | 0.29 | 500 | 1.9495 | 0.5244 |
| 1.7478 | 0.35 | 600 | 1.6609 | 0.6091 |
| 1.2345 | 0.41 | 700 | 1.4335 | 0.6426 |
| 1.4129 | 0.47 | 800 | 1.3001 | 0.6752 |
| 1.1722 | 0.53 | 900 | 1.2030 | 0.6785 |
| 1.0808 | 0.59 | 1000 | 1.0051 | 0.7273 |
| 0.8814 | 0.64 | 1100 | 1.0715 | 0.7063 |
| 0.9831 | 0.7 | 1200 | 0.9283 | 0.7334 |
| 0.8118 | 0.76 | 1300 | 0.8525 | 0.7631 |
| 0.7203 | 0.82 | 1400 | 0.7849 | 0.7756 |
| 0.8881 | 0.88 | 1500 | 0.8786 | 0.7487 |
| 0.6407 | 0.94 | 1600 | 0.6896 | 0.8000 |
| 0.7574 | 1.0 | 1700 | 0.7314 | 0.7754 |
| 0.6063 | 1.06 | 1800 | 0.6312 | 0.8068 |
| 0.4797 | 1.11 | 1900 | 0.5792 | 0.8296 |
| 0.4973 | 1.17 | 2000 | 0.5846 | 0.8221 |
| 0.4432 | 1.23 | 2100 | 0.7057 | 0.7905 |
| 0.5518 | 1.29 | 2200 | 0.5621 | 0.8304 |
| 0.3256 | 1.35 | 2300 | 0.5890 | 0.8143 |
| 0.4284 | 1.41 | 2400 | 0.5204 | 0.8485 |
| 0.3702 | 1.47 | 2500 | 0.5699 | 0.8256 |
| 0.2858 | 1.52 | 2600 | 0.5815 | 0.8287 |
| 0.3706 | 1.58 | 2700 | 0.4615 | 0.8571 |
| 0.3484 | 1.64 | 2800 | 0.4812 | 0.8518 |
| 0.2865 | 1.7 | 2900 | 0.4285 | 0.8638 |
| 0.4474 | 1.76 | 3000 | 0.5217 | 0.8377 |
| 0.2101 | 1.82 | 3100 | 0.4478 | 0.8589 |
| 0.3545 | 1.88 | 3200 | 0.4444 | 0.8612 |
| 0.2728 | 1.93 | 3300 | 0.4213 | 0.8645 |
| 0.3525 | 1.99 | 3400 | 0.3551 | 0.8848 |
| 0.0936 | 2.05 | 3500 | 0.4074 | 0.8748 |
| 0.2118 | 2.11 | 3600 | 0.4089 | 0.8812 |
| 0.2744 | 2.17 | 3700 | 0.3534 | 0.8894 |
| 0.211 | 2.23 | 3800 | 0.4422 | 0.8599 |
| 0.1684 | 2.29 | 3900 | 0.3705 | 0.8858 |
| 0.1885 | 2.34 | 4000 | 0.3651 | 0.8862 |
| 0.249 | 2.4 | 4100 | 0.4234 | 0.8687 |
| 0.1485 | 2.46 | 4200 | 0.3784 | 0.8798 |
| 0.1188 | 2.52 | 4300 | 0.3589 | 0.8873 |
| 0.1274 | 2.58 | 4400 | 0.3570 | 0.8917 |
| 0.2206 | 2.64 | 4500 | 0.3377 | 0.8920 |
| 0.1287 | 2.7 | 4600 | 0.3170 | 0.9023 |
| 0.1805 | 2.75 | 4700 | 0.3469 | 0.8934 |
| 0.1505 | 2.81 | 4800 | 0.4258 | 0.8757 |
| 0.1592 | 2.87 | 4900 | 0.3415 | 0.8948 |
| 0.1297 | 2.93 | 5000 | 0.3168 | 0.9028 |
| 0.1284 | 2.99 | 5100 | 0.3060 | 0.9089 |
| 0.0833 | 3.05 | 5200 | 0.2610 | 0.9207 |
| 0.0334 | 3.11 | 5300 | 0.2766 | 0.9197 |
| 0.0847 | 3.17 | 5400 | 0.3366 | 0.9016 |
| 0.1112 | 3.22 | 5500 | 0.3098 | 0.9079 |
| 0.0477 | 3.28 | 5600 | 0.3385 | 0.9041 |
| 0.0419 | 3.34 | 5700 | 0.2944 | 0.9139 |
| 0.0827 | 3.4 | 5800 | 0.2715 | 0.9239 |
| 0.0659 | 3.46 | 5900 | 0.2695 | 0.9230 |
| 0.0244 | 3.52 | 6000 | 0.3050 | 0.9147 |
| 0.0883 | 3.58 | 6100 | 0.2862 | 0.9203 |
| 0.0527 | 3.63 | 6200 | 0.2383 | 0.9319 |
| 0.0828 | 3.69 | 6300 | 0.2984 | 0.9182 |
| 0.0678 | 3.75 | 6400 | 0.2135 | 0.9436 |
| 0.0492 | 3.81 | 6500 | 0.2605 | 0.9296 |
| 0.0374 | 3.87 | 6600 | 0.2192 | 0.9380 |
| 0.1846 | 3.93 | 6700 | 0.2804 | 0.9187 |
| 0.0557 | 3.99 | 6800 | 0.2599 | 0.9253 |
| 0.0127 | 4.04 | 6900 | 0.2412 | 0.9336 |
| 0.0203 | 4.1 | 7000 | 0.2214 | 0.9415 |
| 0.0272 | 4.16 | 7100 | 0.2322 | 0.9356 |
| 0.066 | 4.22 | 7200 | 0.2643 | 0.9325 |
| 0.0628 | 4.28 | 7300 | 0.2170 | 0.9406 |
| 0.0108 | 4.34 | 7400 | 0.2388 | 0.9405 |
| 0.026 | 4.4 | 7500 | 0.2533 | 0.9372 |
| 0.0401 | 4.45 | 7600 | 0.2407 | 0.9358 |
| 0.0493 | 4.51 | 7700 | 0.2213 | 0.9415 |
| 0.0951 | 4.57 | 7800 | 0.3016 | 0.9237 |
| 0.0017 | 4.63 | 7900 | 0.2183 | 0.9448 |
| 0.0561 | 4.69 | 8000 | 0.1962 | 0.9492 |
| 0.0063 | 4.75 | 8100 | 0.1868 | 0.9522 |
| 0.0054 | 4.81 | 8200 | 0.2068 | 0.9459 |
| 0.0519 | 4.87 | 8300 | 0.2141 | 0.9429 |
| 0.027 | 4.92 | 8400 | 0.2138 | 0.9438 |
| 0.0034 | 4.98 | 8500 | 0.1774 | 0.9529 |
| 0.0096 | 5.04 | 8600 | 0.1778 | 0.9512 |
| 0.0011 | 5.1 | 8700 | 0.1854 | 0.9512 |
| 0.0195 | 5.16 | 8800 | 0.1914 | 0.9483 |
| 0.0245 | 5.22 | 8900 | 0.2156 | 0.9471 |
| 0.0055 | 5.28 | 9000 | 0.1640 | 0.9574 |
| 0.0166 | 5.33 | 9100 | 0.1770 | 0.9568 |
| 0.0217 | 5.39 | 9200 | 0.2011 | 0.9479 |
| 0.0017 | 5.45 | 9300 | 0.2210 | 0.9462 |
| 0.0161 | 5.51 | 9400 | 0.1510 | 0.9621 |
| 0.0193 | 5.57 | 9500 | 0.1643 | 0.9586 |
| 0.0121 | 5.63 | 9600 | 0.1716 | 0.9535 |
| 0.0146 | 5.69 | 9700 | 0.1720 | 0.9554 |
| 0.0071 | 5.74 | 9800 | 0.1831 | 0.9541 |
| 0.0018 | 5.8 | 9900 | 0.2076 | 0.9485 |
| 0.0007 | 5.86 | 10000 | 0.1636 | 0.9599 |
| 0.0005 | 5.92 | 10100 | 0.1625 | 0.9602 |
| 0.0277 | 5.98 | 10200 | 0.1874 | 0.9546 |
| 0.0005 | 6.04 | 10300 | 0.1790 | 0.9579 |
| 0.0012 | 6.1 | 10400 | 0.1840 | 0.9544 |
| 0.0431 | 6.15 | 10500 | 0.1571 | 0.9628 |
| 0.0332 | 6.21 | 10600 | 0.1599 | 0.9591 |
| 0.0014 | 6.27 | 10700 | 0.1493 | 0.9632 |
| 0.0014 | 6.33 | 10800 | 0.1366 | 0.9661 |
| 0.0006 | 6.39 | 10900 | 0.1582 | 0.9609 |
| 0.0005 | 6.45 | 11000 | 0.1704 | 0.9589 |
| 0.0004 | 6.51 | 11100 | 0.1376 | 0.9671 |
| 0.0755 | 6.57 | 11200 | 0.1375 | 0.9654 |
| 0.0002 | 6.62 | 11300 | 0.1361 | 0.9661 |
| 0.0006 | 6.68 | 11400 | 0.1323 | 0.9675 |
| 0.0009 | 6.74 | 11500 | 0.1239 | 0.9692 |
| 0.0004 | 6.8 | 11600 | 0.1514 | 0.9631 |
| 0.0002 | 6.86 | 11700 | 0.1386 | 0.9664 |
| 0.0004 | 6.92 | 11800 | 0.1368 | 0.9659 |
| 0.0004 | 6.98 | 11900 | 0.1276 | 0.9684 |
| 0.0002 | 7.03 | 12000 | 0.1171 | 0.9712 |
| 0.0002 | 7.09 | 12100 | 0.1142 | 0.9711 |
| 0.0001 | 7.15 | 12200 | 0.1183 | 0.9727 |
| 0.0002 | 7.21 | 12300 | 0.1167 | 0.9732 |
| 0.0002 | 7.27 | 12400 | 0.1143 | 0.9737 |
| 0.0001 | 7.33 | 12500 | 0.1129 | 0.9737 |
| 0.0002 | 7.39 | 12600 | 0.1116 | 0.9742 |
| 0.0002 | 7.44 | 12700 | 0.1126 | 0.9745 |
| 0.0002 | 7.5 | 12800 | 0.1111 | 0.9748 |
| 0.0002 | 7.56 | 12900 | 0.1102 | 0.9747 |
| 0.0001 | 7.62 | 13000 | 0.1094 | 0.9747 |
| 0.0001 | 7.68 | 13100 | 0.1086 | 0.9742 |
| 0.0001 | 7.74 | 13200 | 0.1079 | 0.9748 |
| 0.0002 | 7.8 | 13300 | 0.1062 | 0.9754 |
| 0.0002 | 7.85 | 13400 | 0.1068 | 0.9757 |
| 0.0001 | 7.91 | 13500 | 0.1061 | 0.9762 |
| 0.0001 | 7.97 | 13600 | 0.1060 | 0.9761 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
27af1b3ab04e64383b71894495f8f040
|
bert-large-cased-whole-word-masking
| null |
bert
| 9 | 1,884 |
transformers
| 2 |
fill-mask
| true | true | true |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 9,603 | false |
# BERT large model (cased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same.
The training is identical -- each masked WordPiece token is predicted independently.
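As a small illustration (not part of the original card): when WordPiece splits a word into several pieces, whole word masking masks all of those pieces together. The snippet below only shows the tokenization side of that:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
print(tokenizer.tokenize("The prognosticator was wrong."))
# e.g. ['The', 'pro', '##gno', '##stic', '##ator', 'was', 'wrong', '.']
# under whole word masking, every piece of "prognosticator" is replaced by [MASK]
# at the same time, instead of each piece being masked independently
```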
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24-layer
- 1024 hidden dimension
- 16 attention heads
- 336M parameters.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("Hello I'm a [MASK] model.")
[
{
"sequence":"[CLS] Hello I'm a fashion model. [SEP]",
"score":0.1474294513463974,
"token":4633,
"token_str":"fashion"
},
{
"sequence":"[CLS] Hello I'm a magazine model. [SEP]",
"score":0.05430116504430771,
"token":2435,
"token_str":"magazine"
},
{
"sequence":"[CLS] Hello I'm a male model. [SEP]",
"score":0.039395421743392944,
"token":2581,
"token_str":"male"
},
{
"sequence":"[CLS] Hello I'm a former model. [SEP]",
"score":0.036936815828084946,
"token":1393,
"token_str":"former"
},
{
"sequence":"[CLS] Hello I'm a professional model. [SEP]",
"score":0.03663451969623566,
"token":1848,
"token_str":"professional"
}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = BertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking')
model = TFBertModel.from_pretrained("bert-large-cased-whole-word-masking")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking')
>>> unmasker("The man worked as a [MASK].")
[
{
"sequence":"[CLS] The man worked as a carpenter. [SEP]",
"score":0.09021259099245071,
"token":25169,
"token_str":"carpenter"
},
{
"sequence":"[CLS] The man worked as a cook. [SEP]",
"score":0.08125395327806473,
"token":9834,
"token_str":"cook"
},
{
"sequence":"[CLS] The man worked as a mechanic. [SEP]",
"score":0.07524766772985458,
"token":19459,
"token_str":"mechanic"
},
{
"sequence":"[CLS] The man worked as a waiter. [SEP]",
"score":0.07397029548883438,
"token":17989,
"token_str":"waiter"
},
{
"sequence":"[CLS] The man worked as a guard. [SEP]",
"score":0.05848982185125351,
"token":3542,
"token_str":"guard"
}
]
>>> unmasker("The woman worked as a [MASK].")
[
{
"sequence":"[CLS] The woman worked as a maid. [SEP]",
"score":0.19436432421207428,
"token":13487,
"token_str":"maid"
},
{
"sequence":"[CLS] The woman worked as a waitress. [SEP]",
"score":0.16161060333251953,
"token":15098,
"token_str":"waitress"
},
{
"sequence":"[CLS] The woman worked as a nurse. [SEP]",
"score":0.14942803978919983,
"token":7439,
"token_str":"nurse"
},
{
"sequence":"[CLS] The woman worked as a secretary. [SEP]",
"score":0.10373266786336899,
"token":4848,
"token_str":"secretary"
},
{
"sequence":"[CLS] The woman worked as a cook. [SEP]",
"score":0.06384387612342834,
"token":9834,
"token_str":"cook"
}
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000 (as a cased model, the text is not lowercased). The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
d499f85e06eb063a221a2af58d603ee3
|
Xiegg/scherenschnitt_papercut
|
Xiegg
| null | 3 | 0 | null | 0 | null | false | false | false |
cc
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 845 | false |
This model, trained on SD-1.5, provides different styles of layered paper art.
Triggerword: scherenschnitt papercut
Prompt example:
layering paper art, 75mm photography of a scherenschnitt papercut, the christmas crib scene in the stable with ox mule and adoration of kings, artist's work, detailed, (white) paper, (navyblue) paper, (color) paper, christmas, backlight effect, harmonic shapes, winter landscape, cute, romantic xmas, in focus, 8k, a bit underexposed, 3d effect, unreal engine, blender render, ((symmetrie)), abstraction, HD, family christmas in switzerland, in layering paper art, paper cut, paper folding
Negative prompt: text, writing, logo, signature, tree
Settings
Steps: 50,
Sampler: DPM fast,
CFG scale: 14,
Seed: 2147632306,
Size: 704x512,
Model hash: 78e2aaa9,
Variation seed: 362561481,
Variation seed strength: 0.4
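The card does not say how the checkpoint is packaged; assuming it is a single SD-1.5 checkpoint file (hypothetical filename below), a minimal `diffusers` sketch using the settings listed above would look like this:
```python
from diffusers import StableDiffusionPipeline
# hypothetical filename: use the actual checkpoint file from this repo
pipe = StableDiffusionPipeline.from_single_file("scherenschnitt_papercut.ckpt")
prompt = "layering paper art, 75mm photography of a scherenschnitt papercut, winter landscape"
negative = "text, writing, logo, signature, tree"
image = pipe(prompt, negative_prompt=negative, num_inference_steps=50, guidance_scale=14).images[0]
image.save("papercut.png")
```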
|
5aa5749119edeaabcb01efa51e95d043
|
prajjwal1/bert-medium
|
prajjwal1
| null | 5 | 18,721 |
transformers
| 1 | null | true | false | false |
['mit']
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BERT', 'MNLI', 'NLI', 'transformer', 'pre-training']
| false | true | true | 2,455 | false |
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-small](https://huggingface.co/prajjwal1/bert-small). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task.
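A minimal sketch of loading this checkpoint for downstream fine-tuning with `transformers` (the sequence-classification head and label count are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-medium")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-medium",
    num_labels=3,  # e.g. three labels for an NLI task such as MNLI
)
# fine-tune `model` on the downstream dataset as usual (e.g. with the Trainer API)
```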
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
feaf9d7de0f3fbc2b14af48ec42f7f2b
|
huggan/pix2pix-maps
|
huggan
| null | 4 | 0 | null | 1 | null | true | false | false |
mit
| null |
['huggan/maps']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['huggan', 'gan']
| false | true | true | 2,077 | false |
# Pix2Pix trained on the maps dataset
## Model description
This model is a [Pix2Pix](https://arxiv.org/abs/1611.07004) model trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset. The goal for the model is to turn a satellite map into a geographic map à la Google Maps, and the other way around.
The model was trained using the [example script](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/pix2pix) provided by HuggingFace as part of the [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan).
## Intended uses & limitations
#### How to use
```python
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet
from PIL import Image
from torchvision import transforms as T
from torchvision.utils import save_image

# Preprocessing assumed to mirror the training script: 256x256, scaled to [-1, 1]
transform = T.Compose([T.Resize((256, 256)), T.ToTensor(), T.Normalize((0.5,) * 3, (0.5,) * 3)])

image = Image.open("...").convert("RGB")
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")

pixel_values = transform(image).unsqueeze(0)
output = generator(pixel_values)
save_image(output, 'output.png', normalize=True)
```
#### Limitations and bias
## Training data
The data used was huggan/maps.
## Training procedure
The following command was used:
```bash
accelerate launch train.py --dataset huggan/maps --push_to_hub --model_name pix2pix-maps --checkpoint_interval 1
```
## Eval results
## Generated Images
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/IsolaZZE16,
author = {Phillip Isola and
Jun{-}Yan Zhu and
Tinghui Zhou and
Alexei A. Efros},
title = {Image-to-Image Translation with Conditional Adversarial Networks},
journal = {CoRR},
volume = {abs/1611.07004},
year = {2016},
url = {http://arxiv.org/abs/1611.07004},
eprinttype = {arXiv},
eprint = {1611.07004},
timestamp = {Mon, 13 Aug 2018 16:49:05 +0200},
biburl = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
887ca12133f231379234860deda5b55c
|
Qiliang/t5-small-finetuned-xsum
|
Qiliang
|
t5
| 16 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,408 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4495
- Rouge1: 28.6501
- Rouge2: 7.9821
- Rougel: 22.5657
- Rougelsum: 22.579
- Gen Len: 18.819
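A minimal usage sketch (not part of the original card); since this is a standard seq2seq summarizer, the `summarization` pipeline should work directly:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Qiliang/t5-small-finetuned-xsum")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and the tallest structure in Paris."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False))
```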
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6832 | 1.0 | 25506 | 2.4495 | 28.6501 | 7.9821 | 22.5657 | 22.579 | 18.819 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
38d2f45423368894d48cb7bcdaf3dca6
|
yoshitomo-matsubara/bert-base-uncased-qqp
|
yoshitomo-matsubara
|
bert
| 9 | 115 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['qqp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'qqp', 'glue', 'torchdistill']
| false | true | true | 704 | false |
`bert-base-uncased` fine-tuned on QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
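A hedged inference sketch (not part of the original card); QQP is a question-pair duplicate-detection task, and the label names below are read from the model config rather than assumed:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yoshitomo-matsubara/bert-base-uncased-qqp")
model = AutoModelForSequenceClassification.from_pretrained(
    "yoshitomo-matsubara/bert-base-uncased-qqp"
)

inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the best way to learn Python fast?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```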
|
1a296060a6dff3285d1244f622d2d505
|
jonatasgrosman/exp_w2v2t_en_vp-nl_s169
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 475 | false |
# exp_w2v2t_en_vp-nl_s169
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
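For example, a minimal sketch following HuggingSound's documented usage (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_vp-nl_s169")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16kHz input expected

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```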
|
64702f32d6886720ca78beb1209d8cf0
|
nandysoham/Dell-theme-finetuned-overfinetuned
|
nandysoham
|
distilbert
| 10 | 5 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 3,426 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/Dell-theme-finetuned-overfinetuned
This model is a fine-tuned version of [nandysoham/distilbert-base-uncased-finetuned-squad](https://huggingface.co/nandysoham/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4305
- Train End Logits Accuracy: 0.7857
- Train Start Logits Accuracy: 0.8006
- Validation Loss: 2.3316
- Validation End Logits Accuracy: 0.1647
- Validation Start Logits Accuracy: 0.2118
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5691 | 0.5179 | 0.5119 | 1.2093 | 0.4588 | 0.4588 | 0 |
| 0.9333 | 0.6101 | 0.5833 | 1.2828 | 0.3176 | 0.3647 | 1 |
| 0.7924 | 0.6042 | 0.5982 | 1.4627 | 0.2824 | 0.2824 | 2 |
| 0.6858 | 0.6905 | 0.6786 | 1.5630 | 0.3059 | 0.2941 | 3 |
| 0.6562 | 0.6518 | 0.6815 | 1.7647 | 0.2235 | 0.2118 | 4 |
| 0.5996 | 0.7054 | 0.6994 | 2.0109 | 0.2118 | 0.2471 | 5 |
| 0.5277 | 0.7440 | 0.7589 | 2.1286 | 0.1765 | 0.2000 | 6 |
| 0.4810 | 0.7679 | 0.7798 | 2.2263 | 0.1529 | 0.2000 | 7 |
| 0.4488 | 0.8036 | 0.7887 | 2.2999 | 0.1529 | 0.1882 | 8 |
| 0.4305 | 0.7857 | 0.8006 | 2.3316 | 0.1647 | 0.2118 | 9 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
8cca2fbbb4f7f71f1b8ac1f79e3174f8
|
salascorp/distilroberta-base-mrpc-glue-oscar-salas9
|
salascorp
|
roberta
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'generated_from_trainer']
| true | true | true | 1,034 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-oscar-salas9
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3999
- Accuracy: 0.8705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
454d756c2c733cd741fc3bd307a25e8d
|
WillHeld/t5-base-pointer-adv-mtop
|
WillHeld
|
mt5
| 17 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['mtop']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,188 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-pointer-adv-mtop
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mtop dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1281
- Exact Match: 0.7105
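A hedged generation sketch (not from the original card); the exact input formatting expected by this MTOP-trained checkpoint is not documented here, so a plain utterance is assumed:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WillHeld/t5-base-pointer-adv-mtop")
model = AutoModelForSeq2SeqLM.from_pretrained("WillHeld/t5-base-pointer-adv-mtop")

# Assumption: the model takes the raw utterance and emits an MTOP-style logical form.
inputs = tokenizer("set an alarm for 7 am tomorrow", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```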
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 1.7704 | 1.09 | 200 | 0.3664 | 0.1315 |
| 1.9751 | 2.17 | 400 | 0.2091 | 0.3400 |
| 1.0019 | 3.26 | 600 | 0.1453 | 0.4586 |
| 1.313 | 4.35 | 800 | 0.1313 | 0.5065 |
| 0.6593 | 5.43 | 1000 | 0.1281 | 0.5266 |
| 0.3216 | 6.52 | 1200 | 0.1317 | 0.5253 |
| 0.4614 | 7.61 | 1400 | 0.1508 | 0.5262 |
| 0.3577 | 8.69 | 1600 | 0.1422 | 0.5360 |
| 0.3748 | 9.78 | 1800 | 0.1419 | 0.5459 |
| 0.2422 | 10.87 | 2000 | 0.1603 | 0.5356 |
| 0.4443 | 11.96 | 2200 | 0.1526 | 0.5472 |
| 0.2671 | 13.04 | 2400 | 0.1606 | 0.5481 |
| 0.227 | 14.13 | 2600 | 0.1774 | 0.5441 |
| 0.2053 | 15.22 | 2800 | 0.1752 | 0.5441 |
| 0.1517 | 16.3 | 3000 | 0.1770 | 0.5481 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
27400776e1d0484980fd35ec33d059d8
|
yashbhutoria/fin_sentiment
|
yashbhutoria
|
distilbert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,200 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5401
- Accuracy: 0.7840
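A minimal usage sketch (not part of the original card); the label names come from the model config and may be generic:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yashbhutoria/fin_sentiment")
# Output labels may appear as LABEL_0/LABEL_1/... if id2label was not customized.
print(classifier("Quarterly revenue beat expectations and the stock rallied."))
```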
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.5401 | 0.7840 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
9073b22d6c455f1030a49b50cc813c1c
|
tehqikness/manda2
|
tehqikness
| null | 16 | 5 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 418 | false |
### manda2 Dreambooth model trained by tehqikness with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
f285aae0fd5e08d9a9da0d4a4e067fb2
|
alicenkbaytop/donut-base-sroie
|
alicenkbaytop
|
vision-encoder-decoder
| 15 | 0 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 940 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cpu
- Datasets 2.9.0
- Tokenizers 0.13.2
|
85f4704e9595f74cff0603e6f0435dad
|
eicu/fastbooth-jsjessy-950
|
eicu
| null | 30 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 1,704 | false |
### fastbooth-jsjessy-950 Dreambooth model trained by eicu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:












|
305430c223f1027b4d863c314e5b55d7
|
Helsinki-NLP/opus-mt-itc-itc
|
Helsinki-NLP
|
marian
| 11 | 93 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 10,975 | false |
### itc-itc
* source group: Italic languages
* target group: Italic languages
* OPUS readme: [itc-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md)
* model: transformer
* source language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn
* target language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), as shown in the sketch below this list
* download original weights: [opus-2020-07-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip)
* test set translations: [opus-2020-07-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt)
* test set scores: [opus-2020-07-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.eval.txt)
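A minimal translation sketch using the standard Marian interface in Hugging Face Transformers (not part of the original card); `>>ita<<` selects Italian as the target language:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-itc-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# French -> Italian: the sentence-initial >>ita<< token selects the target language.
src_texts = [">>ita<< Bonjour, comment allez-vous ?"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```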
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.arg-fra.arg.fra | 40.8 | 0.501 |
| Tatoeba-test.arg-spa.arg.spa | 59.9 | 0.739 |
| Tatoeba-test.ast-fra.ast.fra | 45.4 | 0.628 |
| Tatoeba-test.ast-por.ast.por | 100.0 | 1.000 |
| Tatoeba-test.ast-spa.ast.spa | 46.8 | 0.636 |
| Tatoeba-test.cat-fra.cat.fra | 51.6 | 0.689 |
| Tatoeba-test.cat-ita.cat.ita | 49.2 | 0.699 |
| Tatoeba-test.cat-por.cat.por | 48.0 | 0.688 |
| Tatoeba-test.cat-ron.cat.ron | 35.4 | 0.719 |
| Tatoeba-test.cat-spa.cat.spa | 69.0 | 0.826 |
| Tatoeba-test.cos-fra.cos.fra | 22.3 | 0.383 |
| Tatoeba-test.cos-pms.cos.pms | 3.4 | 0.199 |
| Tatoeba-test.egl-fra.egl.fra | 9.5 | 0.283 |
| Tatoeba-test.egl-ita.egl.ita | 3.0 | 0.206 |
| Tatoeba-test.egl-spa.egl.spa | 3.7 | 0.194 |
| Tatoeba-test.fra-arg.fra.arg | 3.8 | 0.090 |
| Tatoeba-test.fra-ast.fra.ast | 25.9 | 0.457 |
| Tatoeba-test.fra-cat.fra.cat | 42.2 | 0.637 |
| Tatoeba-test.fra-cos.fra.cos | 3.3 | 0.185 |
| Tatoeba-test.fra-egl.fra.egl | 2.2 | 0.120 |
| Tatoeba-test.fra-frm.fra.frm | 1.0 | 0.191 |
| Tatoeba-test.fra-gcf.fra.gcf | 0.2 | 0.099 |
| Tatoeba-test.fra-glg.fra.glg | 40.5 | 0.625 |
| Tatoeba-test.fra-hat.fra.hat | 22.6 | 0.472 |
| Tatoeba-test.fra-ita.fra.ita | 46.7 | 0.679 |
| Tatoeba-test.fra-lad.fra.lad | 15.9 | 0.345 |
| Tatoeba-test.fra-lat.fra.lat | 2.9 | 0.247 |
| Tatoeba-test.fra-lij.fra.lij | 1.0 | 0.201 |
| Tatoeba-test.fra-lld.fra.lld | 1.1 | 0.257 |
| Tatoeba-test.fra-lmo.fra.lmo | 1.2 | 0.241 |
| Tatoeba-test.fra-msa.fra.msa | 0.4 | 0.111 |
| Tatoeba-test.fra-oci.fra.oci | 7.3 | 0.322 |
| Tatoeba-test.fra-pap.fra.pap | 69.8 | 0.912 |
| Tatoeba-test.fra-pcd.fra.pcd | 0.6 | 0.144 |
| Tatoeba-test.fra-pms.fra.pms | 1.0 | 0.181 |
| Tatoeba-test.fra-por.fra.por | 39.7 | 0.619 |
| Tatoeba-test.fra-roh.fra.roh | 5.7 | 0.286 |
| Tatoeba-test.fra-ron.fra.ron | 36.4 | 0.591 |
| Tatoeba-test.fra-scn.fra.scn | 2.1 | 0.101 |
| Tatoeba-test.fra-spa.fra.spa | 47.5 | 0.670 |
| Tatoeba-test.fra-srd.fra.srd | 2.8 | 0.306 |
| Tatoeba-test.fra-vec.fra.vec | 3.0 | 0.345 |
| Tatoeba-test.fra-wln.fra.wln | 3.5 | 0.212 |
| Tatoeba-test.frm-fra.frm.fra | 11.4 | 0.472 |
| Tatoeba-test.gcf-fra.gcf.fra | 7.1 | 0.267 |
| Tatoeba-test.gcf-lad.gcf.lad | 0.0 | 0.170 |
| Tatoeba-test.gcf-por.gcf.por | 0.0 | 0.230 |
| Tatoeba-test.gcf-spa.gcf.spa | 13.4 | 0.314 |
| Tatoeba-test.glg-fra.glg.fra | 54.7 | 0.702 |
| Tatoeba-test.glg-ita.glg.ita | 40.1 | 0.661 |
| Tatoeba-test.glg-por.glg.por | 57.6 | 0.748 |
| Tatoeba-test.glg-spa.glg.spa | 70.0 | 0.817 |
| Tatoeba-test.hat-fra.hat.fra | 14.2 | 0.419 |
| Tatoeba-test.hat-spa.hat.spa | 17.9 | 0.449 |
| Tatoeba-test.ita-cat.ita.cat | 51.0 | 0.693 |
| Tatoeba-test.ita-egl.ita.egl | 1.1 | 0.114 |
| Tatoeba-test.ita-fra.ita.fra | 58.2 | 0.727 |
| Tatoeba-test.ita-glg.ita.glg | 41.7 | 0.652 |
| Tatoeba-test.ita-lad.ita.lad | 17.5 | 0.419 |
| Tatoeba-test.ita-lat.ita.lat | 7.1 | 0.294 |
| Tatoeba-test.ita-lij.ita.lij | 1.0 | 0.208 |
| Tatoeba-test.ita-msa.ita.msa | 0.9 | 0.115 |
| Tatoeba-test.ita-oci.ita.oci | 12.3 | 0.378 |
| Tatoeba-test.ita-pms.ita.pms | 1.6 | 0.182 |
| Tatoeba-test.ita-por.ita.por | 44.8 | 0.665 |
| Tatoeba-test.ita-ron.ita.ron | 43.3 | 0.653 |
| Tatoeba-test.ita-spa.ita.spa | 56.6 | 0.733 |
| Tatoeba-test.ita-vec.ita.vec | 2.0 | 0.187 |
| Tatoeba-test.lad-fra.lad.fra | 30.4 | 0.458 |
| Tatoeba-test.lad-gcf.lad.gcf | 0.0 | 0.163 |
| Tatoeba-test.lad-ita.lad.ita | 12.3 | 0.426 |
| Tatoeba-test.lad-lat.lad.lat | 1.6 | 0.178 |
| Tatoeba-test.lad-por.lad.por | 8.8 | 0.394 |
| Tatoeba-test.lad-ron.lad.ron | 78.3 | 0.717 |
| Tatoeba-test.lad-spa.lad.spa | 28.3 | 0.531 |
| Tatoeba-test.lat-fra.lat.fra | 9.4 | 0.300 |
| Tatoeba-test.lat-ita.lat.ita | 20.0 | 0.421 |
| Tatoeba-test.lat-lad.lat.lad | 3.8 | 0.173 |
| Tatoeba-test.lat-por.lat.por | 13.0 | 0.354 |
| Tatoeba-test.lat-ron.lat.ron | 14.0 | 0.358 |
| Tatoeba-test.lat-spa.lat.spa | 21.8 | 0.436 |
| Tatoeba-test.lij-fra.lij.fra | 13.8 | 0.346 |
| Tatoeba-test.lij-ita.lij.ita | 14.7 | 0.442 |
| Tatoeba-test.lld-fra.lld.fra | 18.8 | 0.428 |
| Tatoeba-test.lld-spa.lld.spa | 11.1 | 0.377 |
| Tatoeba-test.lmo-fra.lmo.fra | 11.0 | 0.329 |
| Tatoeba-test.msa-fra.msa.fra | 0.8 | 0.129 |
| Tatoeba-test.msa-ita.msa.ita | 1.1 | 0.138 |
| Tatoeba-test.msa-msa.msa.msa | 19.1 | 0.453 |
| Tatoeba-test.msa-pap.msa.pap | 0.0 | 0.037 |
| Tatoeba-test.msa-por.msa.por | 2.4 | 0.155 |
| Tatoeba-test.msa-ron.msa.ron | 1.2 | 0.129 |
| Tatoeba-test.msa-spa.msa.spa | 1.0 | 0.139 |
| Tatoeba-test.multi.multi | 40.8 | 0.599 |
| Tatoeba-test.mwl-por.mwl.por | 35.4 | 0.561 |
| Tatoeba-test.oci-fra.oci.fra | 24.5 | 0.467 |
| Tatoeba-test.oci-ita.oci.ita | 23.3 | 0.493 |
| Tatoeba-test.oci-spa.oci.spa | 26.1 | 0.505 |
| Tatoeba-test.pap-fra.pap.fra | 31.0 | 0.629 |
| Tatoeba-test.pap-msa.pap.msa | 0.0 | 0.051 |
| Tatoeba-test.pcd-fra.pcd.fra | 13.8 | 0.381 |
| Tatoeba-test.pcd-spa.pcd.spa | 2.6 | 0.227 |
| Tatoeba-test.pms-cos.pms.cos | 3.4 | 0.217 |
| Tatoeba-test.pms-fra.pms.fra | 13.4 | 0.347 |
| Tatoeba-test.pms-ita.pms.ita | 13.0 | 0.373 |
| Tatoeba-test.pms-spa.pms.spa | 13.1 | 0.374 |
| Tatoeba-test.por-ast.por.ast | 100.0 | 1.000 |
| Tatoeba-test.por-cat.por.cat | 45.1 | 0.673 |
| Tatoeba-test.por-fra.por.fra | 52.5 | 0.698 |
| Tatoeba-test.por-gcf.por.gcf | 16.0 | 0.128 |
| Tatoeba-test.por-glg.por.glg | 57.5 | 0.750 |
| Tatoeba-test.por-ita.por.ita | 50.1 | 0.710 |
| Tatoeba-test.por-lad.por.lad | 15.7 | 0.341 |
| Tatoeba-test.por-lat.por.lat | 11.1 | 0.362 |
| Tatoeba-test.por-msa.por.msa | 2.4 | 0.136 |
| Tatoeba-test.por-mwl.por.mwl | 30.5 | 0.559 |
| Tatoeba-test.por-roh.por.roh | 0.0 | 0.132 |
| Tatoeba-test.por-ron.por.ron | 40.0 | 0.632 |
| Tatoeba-test.por-spa.por.spa | 58.6 | 0.756 |
| Tatoeba-test.roh-fra.roh.fra | 23.1 | 0.564 |
| Tatoeba-test.roh-por.roh.por | 21.4 | 0.347 |
| Tatoeba-test.roh-spa.roh.spa | 19.8 | 0.489 |
| Tatoeba-test.ron-cat.ron.cat | 59.5 | 0.854 |
| Tatoeba-test.ron-fra.ron.fra | 47.4 | 0.647 |
| Tatoeba-test.ron-ita.ron.ita | 45.7 | 0.683 |
| Tatoeba-test.ron-lad.ron.lad | 44.2 | 0.712 |
| Tatoeba-test.ron-lat.ron.lat | 14.8 | 0.449 |
| Tatoeba-test.ron-msa.ron.msa | 1.2 | 0.098 |
| Tatoeba-test.ron-por.ron.por | 42.7 | 0.650 |
| Tatoeba-test.ron-spa.ron.spa | 50.4 | 0.686 |
| Tatoeba-test.scn-fra.scn.fra | 2.4 | 0.180 |
| Tatoeba-test.scn-spa.scn.spa | 5.1 | 0.212 |
| Tatoeba-test.spa-arg.spa.arg | 10.8 | 0.267 |
| Tatoeba-test.spa-ast.spa.ast | 24.6 | 0.514 |
| Tatoeba-test.spa-cat.spa.cat | 61.6 | 0.783 |
| Tatoeba-test.spa-egl.spa.egl | 2.2 | 0.106 |
| Tatoeba-test.spa-fra.spa.fra | 51.1 | 0.683 |
| Tatoeba-test.spa-gcf.spa.gcf | 7.8 | 0.067 |
| Tatoeba-test.spa-glg.spa.glg | 62.8 | 0.776 |
| Tatoeba-test.spa-hat.spa.hat | 16.6 | 0.398 |
| Tatoeba-test.spa-ita.spa.ita | 51.8 | 0.718 |
| Tatoeba-test.spa-lad.spa.lad | 14.6 | 0.393 |
| Tatoeba-test.spa-lat.spa.lat | 21.5 | 0.486 |
| Tatoeba-test.spa-lld.spa.lld | 2.0 | 0.222 |
| Tatoeba-test.spa-msa.spa.msa | 0.8 | 0.113 |
| Tatoeba-test.spa-oci.spa.oci | 10.3 | 0.377 |
| Tatoeba-test.spa-pcd.spa.pcd | 0.9 | 0.115 |
| Tatoeba-test.spa-pms.spa.pms | 1.5 | 0.194 |
| Tatoeba-test.spa-por.spa.por | 49.4 | 0.698 |
| Tatoeba-test.spa-roh.spa.roh | 4.6 | 0.261 |
| Tatoeba-test.spa-ron.spa.ron | 39.1 | 0.618 |
| Tatoeba-test.spa-scn.spa.scn | 2.0 | 0.113 |
| Tatoeba-test.spa-wln.spa.wln | 8.7 | 0.295 |
| Tatoeba-test.srd-fra.srd.fra | 6.7 | 0.369 |
| Tatoeba-test.vec-fra.vec.fra | 59.9 | 0.608 |
| Tatoeba-test.vec-ita.vec.ita | 14.2 | 0.405 |
| Tatoeba-test.wln-fra.wln.fra | 8.9 | 0.344 |
| Tatoeba-test.wln-spa.wln.spa | 9.6 | 0.298 |
### System Info:
- hf_name: itc-itc
- source_languages: itc
- target_languages: itc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt
- src_alpha3: itc
- tgt_alpha3: itc
- short_pair: itc-itc
- chrF2_score: 0.599
- bleu: 40.8
- brevity_penalty: 0.968
- ref_len: 77448.0
- src_name: Italic languages
- tgt_name: Italic languages
- train_date: 2020-07-07
- src_alpha2: itc
- tgt_alpha2: itc
- prefer_old: False
- long_pair: itc-itc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
eed275acf73b969fe1e247a1756d63c8
|
cjvt/crosloengual-bert-si-nli
|
cjvt
|
bert
| 8 | 13 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
|
['sl', 'hr', 'en', 'multilingual']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 636 | false |
# crosloengual-bert-si-nli
CroSloEngual BERT model finetuned on the SI-NLI dataset for Slovene natural language inference.
The model was fine-tuned in a classic sequence-pair classification setting on the official training/validation/test split for 10 epochs, using validation-set accuracy for model selection.
It was optimized with the AdamW optimizer (learning rate 2e-5) and cross-entropy loss.
Training used batch size `82` (selected based on the available GPU memory) and maximum sequence length `107` (the 99th percentile of lengths in the training set).
Achieves the following metrics:
- best validation accuracy: `0.660`
- test accuracy = `0.673`
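A minimal inference sketch (not part of the original card); the Slovene example pair is illustrative, and label names are read from the model config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cjvt/crosloengual-bert-si-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Pes teče po travniku."   # "A dog is running across the meadow."
hypothesis = "Žival se premika."    # "An animal is moving."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. entailment / neutral / contradiction
```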
|
aa94272b325da0f39b7effd7a6b5c7ec
|
sd-concepts-library/aavegotchi
|
sd-concepts-library
| null | 10 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,144 | false |
### aavegotchi on Stable Diffusion
This is the `<aave-gotchi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
8b16686dc483320827c9b51460711108
|
rymaju/t5-small-finetuned-en-to-regex
|
rymaju
|
t5
| 20 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,983 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-regex
This model is a fine-tuned version of [rymaju/t5-small-finetuned-en-to-regex](https://huggingface.co/rymaju/t5-small-finetuned-en-to-regex) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0032
- Bleu: 12.1984
- Gen Len: 16.7502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0092 | 1.0 | 6188 | 0.0043 | 12.1984 | 16.7522 |
| 0.0069 | 2.0 | 12376 | 0.0040 | 12.2039 | 16.7502 |
| 0.0056 | 3.0 | 18564 | 0.0034 | 12.2091 | 16.7483 |
| 0.0048 | 4.0 | 24752 | 0.0035 | 12.2103 | 16.7502 |
| 0.0049 | 5.0 | 30940 | 0.0035 | 12.1984 | 16.7502 |
| 0.0046 | 6.0 | 37128 | 0.0033 | 12.1984 | 16.7502 |
| 0.0046 | 7.0 | 43316 | 0.0035 | 12.1984 | 16.7502 |
| 0.0046 | 8.0 | 49504 | 0.0032 | 12.1984 | 16.7502 |
| 0.0042 | 9.0 | 55692 | 0.0032 | 12.1984 | 16.7502 |
| 0.0043 | 10.0 | 61880 | 0.0032 | 12.1984 | 16.7502 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
2705bda694f5024d4ffec415aa56dd6e
|