Dataset schema (each record below follows this column order, with the `readme` field expanded in full and followed by the record `hash`):

| Column | Type | Range / classes |
|---|---|---|
| repo_id | string | lengths 4–110 |
| author | string | lengths 2–27, nullable (⌀) |
| model_type | string | lengths 2–29, nullable (⌀) |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string | lengths 2–37, nullable (⌀) |
| likes | int64 | 0–4.34k |
| pipeline | string | lengths 5–30, nullable (⌀) |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–30 |
| languages | string | lengths 4–1.63k, nullable (⌀) |
| datasets | string | lengths 2–2.58k, nullable (⌀) |
| co2 | string | 29 classes |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0–598k |
| hash | string | length 32 |
ambekarsameer/distilbert-base-uncased-finetuned-cola | ambekarsameer | distilbert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8051
- Matthews Correlation: 0.5338
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
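For readers who want to reproduce a comparable run, the settings above map onto the `transformers` Trainer API roughly as in the sketch below. This is a hedged illustration, not the original training script; `output_dir` is a placeholder.

```python
# Hedged sketch (not the original training script): the listed hyperparameters map
# onto the transformers Trainer API roughly as follows; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
)
```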
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5233 | 1.0 | 535 | 0.5324 | 0.4151 |
| 0.3489 | 2.0 | 1070 | 0.5132 | 0.4836 |
| 0.2392 | 3.0 | 1605 | 0.5852 | 0.5177 |
| 0.1822 | 4.0 | 2140 | 0.7485 | 0.5256 |
| 0.1382 | 5.0 | 2675 | 0.8051 | 0.5338 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 0c1383af56f3df2dd558205ce292d0ab |
JeremiahZ/bert-base-uncased-mnli | JeremiahZ | bert | 17 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-mnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4056
- Accuracy: 0.8501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4526 | 1.0 | 12272 | 0.4244 | 0.8388 |
| 0.3344 | 2.0 | 24544 | 0.4252 | 0.8469 |
| 0.2307 | 3.0 | 36816 | 0.4974 | 0.8445 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| babbaf8792bc325e16dc12981a2072a8 |
robingeibel/bigbird-base-finetuned-big_patent | robingeibel | big_bird | 13 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['big_patent'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,263 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-base-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/bigbird-base-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-base-finetuned-big_patent) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.1432 | 1.0 | 154482 | 1.0686 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| caa9780cb69eebb4860e0e7f1d187244 |
ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT | ajtamayoh | bert | 12 | 11 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,709 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER_EHR_Spanish_model_Mulitlingual_BERT
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the DisTEMIST shared task 2022 dataset. It is available at: https://temu.bsc.es/distemist/category/data/
It achieves the following results on the evaluation set:
- Loss: 0.2603
- Precision: 0.5637
- Recall: 0.5801
- F1: 0.5718
- Accuracy: 0.9534
## Model description
For a complete description of our system, please go to: https://ceur-ws.org/Vol-3180/paper-26.pdf
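For quick experimentation, the checkpoint can also be queried with the standard `transformers` token-classification pipeline. The snippet below is an illustrative sketch, not code from the original card; the example sentence is a placeholder and the returned entity labels depend on the model's label configuration.

```python
# Illustrative sketch (not from the original card): running the checkpoint with the
# standard transformers token-classification pipeline; the example sentence is a
# placeholder and the entity labels depend on the model's label configuration.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ajtamayoh/NER_EHR_Spanish_model_Mulitlingual_BERT",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)
print(ner("El paciente presenta dolor abdominal y fiebre persistente."))
```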
## Training and evaluation data
The dataset was provided by the DisTEMIST shared task; it is available at: https://temu.bsc.es/distemist/category/data/
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 71 | 0.2060 | 0.5017 | 0.5540 | 0.5266 | 0.9496 |
| No log | 2.0 | 142 | 0.2163 | 0.5363 | 0.5433 | 0.5398 | 0.9495 |
| No log | 3.0 | 213 | 0.2245 | 0.5521 | 0.5356 | 0.5438 | 0.9514 |
| No log | 4.0 | 284 | 0.2453 | 0.5668 | 0.5985 | 0.5822 | 0.9522 |
| No log | 5.0 | 355 | 0.2433 | 0.5657 | 0.5579 | 0.5617 | 0.9530 |
| No log | 6.0 | 426 | 0.2553 | 0.5762 | 0.5762 | 0.5762 | 0.9536 |
| No log | 7.0 | 497 | 0.2603 | 0.5637 | 0.5801 | 0.5718 | 0.9534 |
### How to cite this work:
Tamayo, A., Burgos, D. A., & Gelbukh, A. (2022). mbert and simple post-processing: A baseline for disease mention detection in spanish. In Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings.
```bibtex
@inproceedings{tamayo2022mbert,
  title={mbert and simple post-processing: A baseline for disease mention detection in spanish},
  author={Tamayo, Antonio and Burgos, Diego A and Gelbukh, Alexander},
  booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum. CEUR Workshop Proceedings},
  year={2022}
}
```
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
| 306462da243d620d8b75542dd47a3dab |
jonatasgrosman/exp_w2v2t_it_vp-fr_s579 | jonatasgrosman | wav2vec2 | 10 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['it'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'it'] | false | true | true | 469 | false |
# exp_w2v2t_it_vp-fr_s579
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
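A minimal transcription sketch with the HuggingSound library linked above might look as follows; this is illustrative code, not taken from the card, and the audio paths are placeholders (the files must be sampled at 16 kHz).

```python
# Hedged sketch using the HuggingSound library linked above (not code from this card);
# the audio paths are placeholders, and the files must be sampled at 16 kHz.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_it_vp-fr_s579")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```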
| c268006252852790387a6ca8b6677a77 |
sd-dreambooth-library/soydavidtapia | sd-dreambooth-library | null | 29 | 4 | diffusers | 3 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,034 | false |
### soydavidtapia on Stable Diffusion via Dreambooth
#### model by soydavidtapia
This is the Stable Diffusion model fine-tuned on the soydavidtapia concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of david tapia**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
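As an illustrative alternative to the notebooks, the concept can also be loaded directly with `diffusers`. The sketch below is hedged: the prompt wording beyond the instance prompt, the dtype, and the device choice are assumptions rather than part of the original card.

```python
# Hedged sketch (not from the original card): loading the concept directly with
# diffusers; the prompt wording, dtype and device are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/soydavidtapia", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of david tapia, portrait, highly detailed").images[0]
image.save("soydavidtapia.png")
```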
Here are the images used for training this concept:











| b98341bc8e068dd3e5ce2bdf09211f26 |
alexandrainst/scandi-nli-base | alexandrainst | bert | 8 | 4 | transformers | 1 | zero-shot-classification | true | false | false | mit | ['da', False, 'nb', 'sv'] | ['strombergnlp/danfever', 'KBLab/overlim', 'MoritzLaurer/multilingual-NLI-26lang-2mil7'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 8,858 | false |
# ScandiNLI - Natural Language Inference model for Scandinavian Languages
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) for Natural Language Inference in Danish, Norwegian Bokmål and Swedish.
We have released three models for Scandinavian NLI, of different sizes:
- [alexandrainst/scandi-nli-large](https://huggingface.co/alexandrainst/scandi-nli-large)
- alexandrainst/scandi-nli-base (this)
- [alexandrainst/scandi-nli-small](https://huggingface.co/alexandrainst/scandi-nli-small)
A demo of the large model can be found in [this Hugging Face Space](https://huggingface.co/spaces/alexandrainst/zero-shot-classification) - check it out!
The performance and model size of each of them can be found in the Performance section below.
## Quick start
You can use this model in your scripts as follows:
```python
>>> from transformers import pipeline
>>> classifier = pipeline(
... "zero-shot-classification",
... model="alexandrainst/scandi-nli-base",
... )
>>> classifier(
... "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'",
... candidate_labels=['sundhed', 'politik', 'sport', 'religion'],
... hypothesis_template="Dette eksempel handler om {}",
... )
{'sequence': "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'",
'labels': ['sport', 'religion', 'sundhed', 'politik'],
'scores': [0.724335789680481,
0.1176532730460167,
0.08848614990711212,
0.06952482461929321]}
```
## Performance
We evaluate the models in Danish, Swedish and Norwegian Bokmål separately.
In all cases, we report the Matthews correlation coefficient (MCC), the macro-average F1-score, and accuracy.
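For reference, these metrics can be computed with scikit-learn. The snippet below is an illustrative sketch with placeholder gold and predicted labels, not the actual evaluation code.

```python
# Illustrative sketch (not the actual evaluation code): computing the reported
# metrics with scikit-learn on placeholder gold and predicted NLI labels.
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = ["entailment", "neutral", "contradiction", "entailment"]
y_pred = ["entailment", "neutral", "neutral", "entailment"]

print("MCC:", matthews_corrcoef(y_true, y_pred))
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
print("Accuracy:", accuracy_score(y_true, y_pred))
```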
### Scandinavian Evaluation
The Scandinavian scores are the average of the Danish, Swedish and Norwegian scores, which can be found in the sections below.
| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **73.70%** | **74.44%** | **83.91%** | 354M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 69.01% | 71.99% | 80.66% | 279M |
| `alexandrainst/scandi-nli-base` (this) | 67.42% | 71.54% | 80.09% | 178M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 64.17% | 70.80% | 77.29% | 560M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 63.94% | 70.41% | 77.23% | 279M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 61.71% | 68.36% | 76.08% | 178M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 56.02% | 65.30% | 73.56% | **22M** |
### Danish Evaluation
We use a test split of the [DanFEVER dataset](https://aclanthology.org/2021.nodalida-main.pdf#page=439) to evaluate the Danish performance of the models.
The test split is generated using [this gist](https://gist.github.com/saattrupdan/1cb8379232fdec6e943dc84595a85e7c).
| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **73.80%** | **58.41%** | **86.98%** | 354M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 68.37% | 57.10% | 83.25% | 279M |
| `alexandrainst/scandi-nli-base` (this) | 62.44% | 55.00% | 80.42% | 178M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 56.92% | 53.25% | 76.39% | 178M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 52.79% | 52.00% | 72.35% | 279M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 49.18% | 50.31% | 69.73% | 560M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 47.28% | 48.88% | 73.46% | **22M** |
### Swedish Evaluation
We use the test split of the machine translated version of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset to evaluate the Swedish performance of the models.
We acknowledge that evaluating on machine-translated rather than gold-standard data is not ideal, but unfortunately we are not aware of any gold-standard NLI datasets in Swedish.
| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **76.69%** | **84.47%** | **84.38%** | 354M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 75.35% | 83.42% | 83.55% | 560M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 73.84% | 82.46% | 82.58% | 279M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 73.32% | 82.15% | 82.08% | 279M |
| `alexandrainst/scandi-nli-base` (this) | 72.29% | 81.37% | 81.51% | 178M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 64.69% | 76.40% | 76.47% | 178M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 62.35% | 74.79% | 74.93% | **22M** |
### Norwegian Evaluation
We use the test split of the machine translated version of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset to evaluate the Norwegian performance of the models.
We acknowledge that evaluating on machine-translated rather than gold-standard data is not ideal, but unfortunately we are not aware of any gold-standard NLI datasets in Norwegian.
| **Model** | **MCC** | **Macro-F1** | **Accuracy** | **Number of Parameters** |
| :-------- | :------------ | :--------- | :----------- | :----------- |
| [`alexandrainst/scandi-nli-large`](https://huggingface.co/alexandrainst/scandi-nli-large) | **70.61%** | **80.43%** | **80.36%** | 354M |
| [`joeddav/xlm-roberta-large-xnli`](https://huggingface.co/joeddav/xlm-roberta-large-xnli) | 67.99% | 78.68% | 78.60% | 560M |
| `alexandrainst/scandi-nli-base` (this) | 67.53% | 78.24% | 78.33% | 178M |
| [`MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) | 65.33% | 76.73% | 76.65% | 279M |
| [`MoritzLaurer/mDeBERTa-v3-base-mnli-xnli`](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 65.18% | 76.76% | 76.77% | 279M |
| [`NbAiLab/nb-bert-base-mnli`](https://huggingface.co/NbAiLab/nb-bert-base-mnli) | 63.51% | 75.42% | 75.39% | 178M |
| [`alexandrainst/scandi-nli-small`](https://huggingface.co/alexandrainst/scandi-nli-small) | 58.42% | 72.22% | 72.30% | **22M** |
## Training procedure
It has been fine-tuned on a dataset composed of [DanFEVER](https://aclanthology.org/2021.nodalida-main.pdf#page=439) as well as machine translated versions of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) and [CommitmentBank](https://doi.org/10.18148/sub/2019.v23i2.601) into all three languages, and machine translated versions of [FEVER](https://aclanthology.org/N18-1074/) and [Adversarial NLI](https://aclanthology.org/2020.acl-main.441/) into Swedish.
The training split of DanFEVER is generated using [this gist](https://gist.github.com/saattrupdan/1cb8379232fdec6e943dc84595a85e7c).
The three languages are sampled equally during training, and they're validated on validation splits of [DanFEVER](https://aclanthology.org/2021.nodalida-main.pdf#page=439) and machine translated versions of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) for Swedish and Norwegian Bokmål, sampled equally.
Check out the [Github repository](https://github.com/alexandrainst/ScandiNLI) for the code used to train the ScandiNLI models, and the full training logs can be found in [this Weights and Biases report](https://wandb.ai/saattrupdan/huggingface/reports/ScandiNLI--VmlldzozMDQyOTk1?accessToken=r9crgxqvvigy2hatdjeobzwipz7f3id5vqg8ooksljhfw6wl0hv1b05asypsfj9v).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4242
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- max_steps: 50,000
| e3edb990f5fe590154ac0b2389719bf0 |
xyma/PROP-marco-step400k | xyma | bert | 7 | 10 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['msmarco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PROP', 'Pretrain4IR'] | false | true | true | 1,869 | false |
# PROP-marco-step400k
**PROP**, **P**re-training with **R**epresentative w**O**rds **P**rediction, is a new pre-training method tailored for ad-hoc retrieval. PROP is inspired by the classical statistical language model for IR, specifically the query likelihood model, which assumes that the query is generated as the piece of text representative of the “ideal” document. Based on this idea, we construct the representative words prediction (ROP) task for pre-training. The full paper can be found [here](https://arxiv.org/pdf/2010.10137.pdf).
This model is pre-trained with more steps than [PROP-marco](https://huggingface.co/xyma/PROP-marco) on MS MARCO document corpus, and used at the MS MARCO Document Ranking Leaderboard where we reached 1st place.
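Since the released checkpoint is a plain BERT encoder, it can be loaded with `transformers` for downstream reranking experiments. The sketch below is a hedged illustration, not code from the original card: it attaches a freshly initialized classification head, so the printed score is meaningless until the model is fine-tuned on a ranking dataset such as MS MARCO; the query and document strings are placeholders.

```python
# Hypothetical sketch (not from the original card): loading the PROP checkpoint as a
# BERT cross-encoder with a freshly initialized ranking head. The printed score is
# meaningless until the head is fine-tuned on a ranking dataset such as MS MARCO.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xyma/PROP-marco-step400k")
model = AutoModelForSequenceClassification.from_pretrained(
    "xyma/PROP-marco-step400k", num_labels=1
)

inputs = tokenizer(
    "example search query",             # query (placeholder)
    "example candidate document text",  # document (placeholder)
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()
print(float(score))
```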
# Citation
If you find our work useful, please consider citing our paper:
```bibtex
@inproceedings{DBLP:conf/wsdm/MaGZFJC21,
author = {Xinyu Ma and
Jiafeng Guo and
Ruqing Zhang and
Yixing Fan and
Xiang Ji and
Xueqi Cheng},
editor = {Liane Lewin{-}Eytan and
David Carmel and
Elad Yom{-}Tov and
Eugene Agichtein and
Evgeniy Gabrilovich},
title = {{PROP:} Pre-training with Representative Words Prediction for Ad-hoc
Retrieval},
booktitle = {{WSDM} '21, The Fourteenth {ACM} International Conference on Web Search
and Data Mining, Virtual Event, Israel, March 8-12, 2021},
pages = {283--291},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3437963.3441777},
doi = {10.1145/3437963.3441777},
timestamp = {Wed, 07 Apr 2021 16:17:44 +0200},
biburl = {https://dblp.org/rec/conf/wsdm/MaGZFJC21.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| d994c905b1e0395d948a9dff3bffd727 |
MiguelCosta/distilbert-finetuned-cisco | MiguelCosta | distilbert | 8 | 2 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,541 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MiguelCosta/distilbert-finetuned-cisco
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.4181
- Validation Loss: 4.2370
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4181 | 4.2370 | 0 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
| 6594ed683e73d4898f218f8af39e1c34 |
kasrahabib/distilbert-base-cased-trained-on-open-and-closed-source | kasrahabib | distilbert | 10 | 4 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,316 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/distilbert-base-cased-trained-on-open-and-closed-source
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0045
- Validation Loss: 0.2459
- Train Precision: 0.9168
- Train Recall: 0.9676
- Train F1: 0.9415
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5860, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
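For readability, the serialized optimizer configuration above corresponds roughly to the following `tf.keras` construction; this is a hedged sketch, not code from this repo.

```python
# Hedged sketch (not code from this repo): rebuilding the serialized optimizer
# configuration listed above with tf.keras objects.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=5860,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```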
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:-----:|
| 0.2726 | 0.1881 | 0.8684 | 0.9695 | 0.9161 | 0 |
| 0.1050 | 0.1451 | 0.9102 | 0.9676 | 0.9380 | 1 |
| 0.0485 | 0.1617 | 0.9385 | 0.9313 | 0.9349 | 2 |
| 0.0301 | 0.1832 | 0.9011 | 0.9733 | 0.9358 | 3 |
| 0.0214 | 0.1782 | 0.9319 | 0.9408 | 0.9364 | 4 |
| 0.0140 | 0.2199 | 0.9292 | 0.9523 | 0.9406 | 5 |
| 0.0104 | 0.2089 | 0.9308 | 0.9504 | 0.9405 | 6 |
| 0.0060 | 0.2600 | 0.9055 | 0.9695 | 0.9364 | 7 |
| 0.0059 | 0.2426 | 0.9102 | 0.9676 | 0.9380 | 8 |
| 0.0045 | 0.2459 | 0.9168 | 0.9676 | 0.9415 | 9 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
| be87a98026ddcfd44d62a81fd0c1f8e1 |
delib99127/nlp4web | delib99127 | bert | 8 | 80 | transformers | 0 | question-answering | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 958 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 9e492ad397a7e4b6d3d43f80a7539e40 |
kurianbenoy/kde_en_ml_translation_model | kurianbenoy | marian | 13 | 19 | fastai | 2 | translation | true | false | false | mit | ['en', 'ml'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['fastai', 'translation'] | false | true | true | 789 | false |
# Fine Tune En-ML translation
* source group: English
* target group: Malayalam
This is a machine translation model, created for fun, that translates English text to Malayalam; it was fine-tuned on the KDE dataset.
[Tweet](https://twitter.com/kurianbenoy2/status/1503082136009465857?s=20&t=7Hn-KUqHZRY6VJ16-i1qdA)
# Model card
## Model description
This is a fine-tuned model built on top of the MarianMT models created by the Helsinki-NLP group. The [training code is described here](https://kurianbenoy.com/ml-blog/fastai/huggingface/translation/fine%20tuning/malayalam/2022/03/12/_03_13_huggingace_translation_models.html).
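If the MarianMT weights and tokenizer in this repo load directly with `transformers` (an assumption; the card itself documents a fastai-based workflow), inference might look like the following sketch:

```python
# Hedged sketch, assuming the MarianMT weights and tokenizer in this repo load
# directly with transformers (the card documents a fastai-based workflow).
from transformers import pipeline

translator = pipeline("translation", model="kurianbenoy/kde_en_ml_translation_model")
print(translator("Open the file manager")[0]["translation_text"])
```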
## Intended uses & limitations
Intended just for fun and for the sake of learning.
Limitations: it occasionally returns very poor predictions.
| 9fb995a9da289308fdcb6b82275933c8 |
flamesbob/Steampunk_angel | flamesbob | null | 3 | 0 | null | 0 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 914 | false |
Art by `Steampunk_angel`: this style gives prompts a steampunk look and feel, with gears and sometimes mechanical wings.
License: This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
Please read the full license here.
| 668463337d360b239a183423ccd4bc1b |
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 | pszemraj | longt5 | 11 | 3 | transformers | 0 | summarization | true | false | false | ['apache-2.0', 'bsd-3-clause'] | null | ['kmfoda/booksum'] | null | 7 | 0 | 6 | 1 | 0 | 0 | 0 | ['summarization', 'summary', 'booksum', 'long-document', 'long-form'] | true | true | true | 1,207 | false |
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
> Evaluating some metric results before merging with the "main" wip version
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12) on the `kmfoda/booksum` dataset.
The "base" checkpoint that I update when a training session is productive is [here](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.1
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| c1229f99d9d39248bf655e7ff8795f74 |
wenjalan/starbot-transformers | wenjalan | gpt2 | 8 | 5 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,601 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starbot-transformers
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3942 | 1.0 | 2992 | 3.3385 |
| 3.2566 | 2.0 | 5984 | 3.2760 |
| 3.4112 | 3.0 | 8976 | 3.4710 |
| 3.4887 | 4.0 | 11968 | 3.5264 |
| 3.4856 | 5.0 | 14960 | 3.5181 |
| 3.4359 | 6.0 | 17952 | 3.5079 |
| 3.4115 | 7.0 | 20944 | 3.4954 |
| 3.3657 | 8.0 | 23936 | 3.4482 |
| 3.3018 | 9.0 | 26928 | 3.4207 |
| 3.2435 | 10.0 | 29920 | 3.4079 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
| 5d43b54a560d135a517fb11ba684e801 |
inverse-scaling/opt-1.3b_eval | inverse-scaling | opt | 11 | 3 | transformers | 0 | text-generation | true | true | true | other | ['en'] | null | null | 19 | 5 | 9 | 5 | 0 | 0 | 0 | ['text-generation', 'opt'] | true | true | true | 8,704 | false |
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068):
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b")
>>> generator("Hello, I'm am conscious and")
[{'generated_text': 'Hello, I am conscious and I am here.\nI am here.\nI am conscious.'}]
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True)
>>> generator("Hello, I'm am conscious and")
[{'generated_text': "Hello, I'm am conscious and able to hear. I have a lot of experience in the"}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': 'The woman worked as a bartender for six months before getting to the job she always dreamed of. She'},
{'generated_text': 'The woman worked as a nanny in a house near The White Horse Farm in the Yorkshire Dales'},
{'generated_text': "The woman worked as a translator at the British Broadcasting Corporation's headquarters and was also an acquaintance of some"},
{'generated_text': 'The woman worked as a secretary and went to school full-time, and also worked as a waitress'},
{'generated_text': 'The woman worked as a beautician with her baby and the little girl is now at the age where'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man worked as a janitor and the owner of the house he worked at caught him cheating on'},
{'generated_text': 'The man worked as a software engineer.\n\nFor over 10 years, he had been at Amazon'},
{'generated_text': 'The man worked as a car salesman - and was a man of his word to her\nA T'},
{'generated_text': 'The man worked as a private contractor for five years. He went to the Bahamas in the summer of'},
{'generated_text': 'The man worked as a computer systems consultant. After leaving the job, he became a prolific internet hacker'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
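As an illustration (not taken from the original card), the released checkpoints ship with this GPT2-style tokenizer, so encoding a text with the model's 2048-token cap looks like the sketch below.

```python
# Illustrative sketch (not from the original card): the released checkpoints ship
# with a GPT2-style byte-level BPE tokenizer, and inputs are capped at 2048 tokens.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
ids = tokenizer("Hello, I'm am conscious and", truncation=True, max_length=2048)["input_ids"]
print(len(ids), ids[:8])
```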
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 27611b2bdd6340d297f598acc54ab643 |
lmqg/mt5-base-koquad-qg-ae | lmqg | mt5 | 20 | 36 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['ko'] | ['lmqg/qg_koquad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question generation', 'answer extraction'] | true | true | true | 7,142 | false |
# Model Card of `lmqg/mt5-base-koquad-qg-ae`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation and answer extraction jointly on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** ko
- **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ko", model="lmqg/mt5-base-koquad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-koquad-qg-ae")
# question generation
question = pipe("generate question: 1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
# answer extraction
answer = pipe("extract answers: 또한 스피어스는 많은 새로운 여성 아티스트들에게 영향을 끼쳤는데, 대표적으로 데미 로바토, 케이티 페리, 크리스티니아 드바지, 레이디 가가, 리틀 부츠, 셀레나 고메즈 & 더씬, 픽시 로트 이 있다. 2007년 비욘세 놀스는 Total Request Live와의 인터뷰에서 '나는 브리트니를 사랑하고 팬이에요. 특히 새 앨범 Blackout을 좋아해요'라고 말했다. 린제이 로한은 '언제나 브리트니 스피어스에게 영감을 받는다. 학창시절 그녀처럼 타블로이드에 오르기를 꿈꿔왔다'고 말하며 롤 모델로 꼽았다. 스피어스는 현대 음악가들에게 음악적 영감으로 언급되기도 했다. <hl> 마일리 사이러스는 자신의 히트곡 Party in the U.S.A. 가 브리트니에게 영감과 영향을 받은 곡이라고 밝혔다. <hl> 베리 매닐로우의 앨범 15 Minutes 역시 브리트니에게 영감을 얻었다고 언급되었다.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 84.19 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1 | 27.97 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2 | 20.84 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3 | 15.88 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4 | 12.22 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR | 29.86 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore | 83.24 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L | 28.55 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 80.28 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedF1Score (MoverScore) | 81.97 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (BERTScore) | 77.03 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (MoverScore) | 78.1 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (BERTScore) | 83.91 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (MoverScore) | 86.43 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 83.02 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| AnswerF1Score | 88.43 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| BERTScore | 96.14 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1 | 74.93 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2 | 65.39 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3 | 51.39 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4 | 34.98 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR | 61.26 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore | 95.2 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L | 83.83 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-koquad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| e9b776c0570619c6039a1e00e3e7da50 |
muhtasham/tiny-mlm-snli-target-glue-mrpc | muhtasham | bert | 10 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,562 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-snli-target-glue-mrpc
This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1053
- Accuracy: 0.6814
- F1: 0.7601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5879 | 4.35 | 500 | 0.5553 | 0.7279 | 0.8189 |
| 0.4565 | 8.7 | 1000 | 0.5597 | 0.7598 | 0.8388 |
| 0.3208 | 13.04 | 1500 | 0.6303 | 0.7426 | 0.8217 |
| 0.2133 | 17.39 | 2000 | 0.7777 | 0.7230 | 0.8094 |
| 0.137 | 21.74 | 2500 | 1.1053 | 0.6814 | 0.7601 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
| 1cda1ddbd8d0ab47f34a02a0eacf6150 |
Elvenson/diffuser_inference | Elvenson | null | 24 | 0 | diffusers | 1 | null | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'endpoints-template'] | false | true | true | 886 | false |
# Stable Diffusion v1-5 Custom Inference
This repo is for running diffusion custom inference endpoints with `prompts` and an optional `image` as inputs (unlike normal text-to-image inference). To
achieve this goal, this repo implements a `handler.py` script. For more information regarding custom inference, please visit
this [link](https://huggingface.co/docs/inference-endpoints/guides/custom_handler).
For more information about the model, license, and limitations, please check the original [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5)
or the diffusers [documentation](https://huggingface.co/docs/diffusers/index).
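For orientation, a minimal `handler.py` for such an endpoint might look like the sketch below. This is a hedged illustration of the documented `EndpointHandler` interface, not the actual implementation in this repo; the request format, pipeline classes, and argument names are assumptions.

```python
# Hedged sketch of a custom Inference Endpoints handler (illustrative only; the
# actual handler.py in this repo may differ). Assumes the request body contains
# {"inputs": {"prompts": "...", "image": "<base64>"}} with "image" optional.
import base64
import io
from typing import Any, Dict

import torch
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionPipeline
from PIL import Image


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Load a text-to-image pipeline and reuse its components for image-to-image.
        self.text2img = StableDiffusionPipeline.from_pretrained(
            path, torch_dtype=torch.float16
        ).to("cuda")
        self.img2img = StableDiffusionImg2ImgPipeline(**self.text2img.components)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        inputs = data.get("inputs", {})
        prompt = inputs["prompts"]
        image_b64 = inputs.get("image")
        if image_b64:
            init_image = Image.open(io.BytesIO(base64.b64decode(image_b64))).convert("RGB")
            result = self.img2img(prompt=prompt, image=init_image).images[0]
        else:
            result = self.text2img(prompt=prompt).images[0]
        buffer = io.BytesIO()
        result.save(buffer, format="PNG")
        return {"image": base64.b64encode(buffer.getvalue()).decode("utf-8")}
```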
### Local test custom handler
To test custom inference locally, please run the following command:
```commandline
python local_request.py --prompts="whale in the universe" --image="test_image.jpg"
```
**Note**: `--image` parameter is optional.
| 0806210ba71b433e519d5f85016b6b65 |
theojolliffe/my_awesome_model | theojolliffe | distilbert | 22 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,261 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0568
- Accuracy: 0.3929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 47 | 2.2071 | 0.3333 |
| No log | 2.0 | 94 | 2.0568 | 0.3929 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 69609d37389d57590fa842712229fb85 |
prateeksahu112/test-model | prateeksahu112 | t5 | 4 | 4 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,368 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0322
- Validation Loss: 0.9818
- Train Rouge1: 63.3560
- Train Rouge2: 39.8622
- Train Rougel: 62.5870
- Train Rougelsum: 62.5573
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----:|
| 1.2033 | 1.1011 | 63.0488 | 39.8152 | 62.3015 | 62.2699 | 0 |
| 1.1660 | 1.0732 | 63.6556 | 40.2704 | 62.8821 | 62.8498 | 1 |
| 1.1394 | 1.0532 | 63.8815 | 40.5348 | 63.1276 | 63.0965 | 2 |
| 1.1149 | 1.0386 | 64.2783 | 40.8596 | 63.5115 | 63.4840 | 3 |
| 1.0969 | 1.0245 | 63.6975 | 40.1645 | 62.9323 | 62.8990 | 4 |
| 1.0831 | 1.0122 | 63.7146 | 40.3383 | 62.9457 | 62.9173 | 5 |
| 1.0678 | 1.0044 | 63.3129 | 39.9492 | 62.5462 | 62.5154 | 6 |
| 1.0551 | 0.9949 | 62.5523 | 39.2999 | 61.7963 | 61.7831 | 7 |
| 1.0417 | 0.9869 | 63.3126 | 40.0112 | 62.5606 | 62.5360 | 8 |
| 1.0322 | 0.9818 | 63.3560 | 39.8622 | 62.5870 | 62.5573 | 9 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
| 98c4160ee0c5d088631b417785b9b9b8 |
nc33/multiqa_model | nc33 | roberta | 23 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,702 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multiqa_model
This model is a fine-tuned version of [nc33/multiqa_model](https://huggingface.co/nc33/multiqa_model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1150
- Precision: 0.0855
- Recall: 0.0485
- F1: 0.0619
- Accuracy: 0.9626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 327 | 0.1121 | 0.0708 | 0.0280 | 0.0402 | 0.9631 |
| 0.0786 | 2.0 | 654 | 0.1098 | 0.0531 | 0.0254 | 0.0343 | 0.9599 |
| 0.0786 | 3.0 | 981 | 0.1085 | 0.0657 | 0.0243 | 0.0354 | 0.9634 |
| 0.0681 | 4.0 | 1308 | 0.1133 | 0.0765 | 0.0453 | 0.0569 | 0.9618 |
| 0.0641 | 5.0 | 1635 | 0.1150 | 0.0855 | 0.0485 | 0.0619 | 0.9626 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 6f9441fc05a8cf7c638d82ad25286fdb |
rashedsafa/wav2vec2-large-xls-r-300m-bengali-v7 | rashedsafa | wav2vec2 | 13 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,918 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bengali-v7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2999
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 9.965 | 0.85 | 400 | 4.0076 | 1.0 |
| 3.5381 | 1.71 | 800 | 3.3463 | 1.0 |
| 3.3333 | 2.56 | 1200 | 3.2927 | 1.0 |
| 3.307 | 3.41 | 1600 | 3.3024 | 1.0 |
| 3.3386 | 4.26 | 2000 | 3.2984 | 1.0 |
| 3.3277 | 5.12 | 2400 | 3.2999 | 1.0 |
| 3.3145 | 5.97 | 2800 | 3.2999 | 1.0 |
| 3.3306 | 6.82 | 3200 | 3.2999 | 1.0 |
| 3.326 | 7.68 | 3600 | 3.2999 | 1.0 |
| 3.3143 | 8.53 | 4000 | 3.2999 | 1.0 |
| 3.3311 | 9.38 | 4400 | 3.2999 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
a3670812eba5a983270ce84bae1e2e49
|
TransQuest/monotransquest-hter-en_de-wiki
|
TransQuest
|
xlm-roberta
| 8 | 12 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en-de']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['Quality Estimation', 'monotransquest', 'hter']
| false | true | true | 5,310 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-hter-en_de-wiki", num_labels=1, use_cuda=torch.cuda.is_available())
# predict() takes a list of [source, target] sentence pairs and returns predicted post-editing effort (HTER) scores
predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
4d215f7fc61532b02fb4a818e65e8ae4
|
suwani/BERT_NER_Ep5_PAD_75-finetuned-ner
|
suwani
|
bert
| 13 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,716 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep5_PAD_75-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3504
- Precision: 0.6469
- Recall: 0.7246
- F1: 0.6835
- Accuracy: 0.9013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3695 | 0.5799 | 0.6200 | 0.5993 | 0.8792 |
| 0.4695 | 2.0 | 576 | 0.3443 | 0.5823 | 0.7252 | 0.6460 | 0.8862 |
| 0.4695 | 3.0 | 864 | 0.3189 | 0.6407 | 0.7030 | 0.6704 | 0.8978 |
| 0.2184 | 4.0 | 1152 | 0.3458 | 0.6383 | 0.7335 | 0.6826 | 0.8980 |
| 0.2184 | 5.0 | 1440 | 0.3504 | 0.6469 | 0.7246 | 0.6835 | 0.9013 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
b03f3f69b116d197d0d7180b47acb562
|
anton-l/xtreme_s_xlsr_minds14_upd
|
anton-l
|
wav2vec2
| 15 | 6 |
transformers
| 0 |
audio-classification
| true | false | false |
apache-2.0
| null |
['xtreme_s']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['minds14', 'google/xtreme_s', 'generated_from_trainer']
| true | true | true | 1,265 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14_upd
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6303
- F1: 0.0223
- Accuracy: 0.0833
## Model description
More information needed
## Intended uses & limitations
More information needed
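For completeness, a minimal loading sketch with the `transformers` audio-classification pipeline (the audio path is a placeholder; note that the accuracy reported above is close to chance for 14 intents, so this checkpoint is mostly useful as a training artifact):
```python
from transformers import pipeline

# Minimal sketch; "utterance.wav" is a placeholder path to a French MINDS-14-style recording.
classifier = pipeline("audio-classification", model="anton-l/xtreme_s_xlsr_minds14_upd")

print(classifier("utterance.wav"))
```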
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
eff474d178a7d6accd85be2fdd4a58b1
|
jonatasgrosman/exp_w2v2t_pl_unispeech_s957
|
jonatasgrosman
|
unispeech
| 10 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pl']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'pl']
| false | true | true | 469 | false |
# exp_w2v2t_pl_unispeech_s957
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
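A minimal transcription sketch with the generic `transformers` ASR pipeline (the audio path is a placeholder; the HuggingSound tool mentioned above can be used instead):
```python
from transformers import pipeline

# Minimal sketch; "sample.wav" is a placeholder path to a 16 kHz Polish speech recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_pl_unispeech_s957",
)

print(asr("sample.wav")["text"])
```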
|
bdaa395fa15d15038ba3bd4652a471d1
|
allenai/aspire-biencoder-biomed-scib
|
allenai
|
bert
| 7 | 9 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
|
['en']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,133 | false |
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Scib` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder model trained for similarity of title-abstract pairs in biomedical scientific papers. The model is **initialized with the SciBert model**. This model inputs the title and abstract of a paper and represents it with a single vector obtained by a scalar mix of the CLS token at every layer of the SciBert encoder. These scalar mix parameters can be important for performance in some datasets. Importantly, these scalar mix weights are not included as part of this HF model; if you wish to use these parameters, please download the full model at: [`aspire-biencoder-biomed-scib-full.zip`](https://drive.google.com/file/d/1X6S5qwaKUlI3N3RDQSG-tJCzMBWAnqxP/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers; for example, the papers cited in brackets below are all co-cited, and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **biomedical** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
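As a rough sketch only (not the reference implementation): the released checkpoint can be loaded with `transformers`, and, since the learned scalar-mix weights are not part of this HF model, the final-layer CLS vector is used here as an approximation of the paper's representation. The titles and abstracts below are placeholders.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-biencoder-biomed-scib")
model = AutoModel.from_pretrained("allenai/aspire-biencoder-biomed-scib")
model.eval()

titles = ["Placeholder title of paper A", "Placeholder title of paper B"]
abstracts = ["Placeholder abstract of paper A ...", "Placeholder abstract of paper B ..."]

inputs = tokenizer(titles, abstracts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    # Final-layer CLS token as a single-vector document representation (an approximation,
    # since the scalar mix over all layers is omitted from this checkpoint).
    doc_vectors = model(**inputs).last_hidden_state[:, 0, :]

# Candidates are ranked by L2 distance to the query vector, as described below.
print(torch.cdist(doc_vectors[:1], doc_vectors[1:]))
```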
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Here we report performance on RELISH (biomedical/English), and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task, where given a query scientific abstract the task requires the retrieval of relevant candidate abstracts.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-biomed-scib` (and `aspire-biencoder-biomed-scib-full`) is compared against `allenai/specter`. `aspire-biencoder-biomed-scib-full`<sup>*</sup> denotes the performance reported in our paper, obtained by averaging over 3 re-runs of the model. The released models `aspire-biencoder-biomed-scib` and `aspire-biencoder-biomed-scib-full` correspond to the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62| 77.20 |
| `aspire-biencoder-biomed-scib-full`<sup>*</sup> | 30.60 | 62.07 | 61.43| 78.01 |
| `aspire-biencoder-biomed-scib` | 30.74 | 60.16 | 61.52| 78.07 |
| `aspire-biencoder-biomed-scib-full` | 31.45 | 63.15 | 61.34| 77.89 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-compsci-spec`](https://huggingface.co/allenai/aspire-biencoder-compsci-spec): If you wanted to run on computer science papers.
[`aspire-biencoder-biomed-spec`](https://huggingface.co/allenai/aspire-biencoder-biomed-spec): This is an alternative bi-encoder model identical to the above model, except that it is initialized with `allenai/specter` instead of SciBert. This usually under-performs the model released here.
|
39da6cda43a83d524b76766eec86b349
|
lmqg/t5-large-squad-qag
|
lmqg
|
t5
| 13 | 322 |
transformers
| 2 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qag_squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['questions and answers generation']
| true | true | true | 3,807 | false |
# Model Card of `lmqg/t5-large-squad-qag`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-large-squad-qag")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qag")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 93.45 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedF1Score (MoverScore) | 66.05 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (BERTScore) | 93.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (MoverScore) | 66.34 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (BERTScore) | 93.57 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (MoverScore) | 65.84 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: ['qag']
- model: t5-large
- max_length: 512
- max_length_output: 256
- epoch: 12
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-squad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
c59b7a424808bd0ff92bc25f08ee641e
|
timm/coatnet_1_rw_224.sw_in1k
|
timm
| null | 4 | 26 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 22,037 | false |
# Model card for coatnet_1_rw_224.sw_in1k
A timm specific CoAtNet image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
### Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` is a `timm` specific config w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 41.7
- GMACs: 8.0
- Activations (M): 34.6
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('coatnet_1_rw_224.sw_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'coatnet_1_rw_224.sw_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'coatnet_1_rw_224.sw_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
697da22d5cbc994c514d4ff7d34aac29
|
sd-concepts-library/coop-himmelblau
|
sd-concepts-library
| null | 11 | 0 | null | 5 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,316 | false |
### coop himmelblau on Stable Diffusion
This is the `<coop himmelblau>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
219eafbd51a77a669034b0dad765cf2a
|
TomO/xlm-roberta-base-finetuned-marc-en
|
TomO
|
xlm-roberta
| 12 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,275 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9237
- Mae: 0.5122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
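As a rough illustration, the settings above correspond approximately to a `transformers` `TrainingArguments` configuration like the sketch below (the output directory is a placeholder; the listed Adam betas/epsilon and the linear schedule are the `Trainer` defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-marc-en",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```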
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1089 | 1.0 | 235 | 0.9380 | 0.4878 |
| 0.9546 | 2.0 | 470 | 0.9237 | 0.5122 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
c25f182ccf34dece5cb28216ee83c1f7
|
cj-mills/xlm-roberta-base-finetuned-panx-en
|
cj-mills
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,353 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5084
- F1: 0.5794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7119 | 1.0 | 19 | 1.0009 | 0.2266 |
| 0.891 | 2.0 | 38 | 0.6405 | 0.5281 |
| 0.6023 | 3.0 | 57 | 0.5084 | 0.5794 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
6a05784fb810f5c811bf432771ab29cf
|
caffsean/t5-base-finetuned-keyword-to-text-generation
|
caffsean
|
t5
| 16 | 3 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,126 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-keyword-to-text-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4643
- Rouge1: 2.1108
- Rouge2: 0.3331
- Rougel: 1.7368
- Rougelsum: 1.7391
- Gen Len: 16.591
## Model description
More information needed
## Intended uses & limitations
More information needed
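A minimal generation sketch with the `text2text-generation` pipeline; the expected input format (plain comma-separated keywords, no task prefix) is an assumption, since it is not documented here:
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="caffsean/t5-base-finetuned-keyword-to-text-generation",
)

keywords = "mountains, sunrise, hiking"  # placeholder keywords
print(generator(keywords, max_length=64, num_beams=4)[0]["generated_text"])
```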
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 375 | 3.4862 | 2.0718 | 0.326 | 1.7275 | 1.7308 | 16.7995 |
| 3.5928 | 2.0 | 750 | 3.4761 | 2.0829 | 0.3253 | 1.7192 | 1.7224 | 16.773 |
| 3.5551 | 3.0 | 1125 | 3.4701 | 2.1028 | 0.3272 | 1.7274 | 1.7296 | 16.6505 |
| 3.5225 | 4.0 | 1500 | 3.4671 | 2.11 | 0.3305 | 1.7343 | 1.7362 | 16.699 |
| 3.5225 | 5.0 | 1875 | 3.4653 | 2.1134 | 0.3319 | 1.7418 | 1.7437 | 16.5485 |
| 3.4987 | 6.0 | 2250 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.4939 | 7.0 | 2625 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
| 3.498 | 8.0 | 3000 | 3.4643 | 2.1108 | 0.3331 | 1.7368 | 1.7391 | 16.591 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
50c66555f928a5f0a934d2fa791a7e59
|
PlanTL-GOB-ES/roberta-base-bne
|
PlanTL-GOB-ES
|
roberta
| 10 | 6,511 |
transformers
| 11 |
fill-mask
| true | false | false |
apache-2.0
|
['es']
|
['bne']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['national library of spain', 'spanish', 'bne', 'roberta-base-bne']
| false | true | true | 10,932 | false |
# RoBERTa base trained with data from the National Library of Spain (BNE)
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Overview](#overview)
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citation Information](#citation-information)
- [Disclaimer](#disclaimer)
</details>
## Overview
- **Architecture:** roberta-base
- **Language:** Spanish
- **Task:** fill-mask
- **Data:** BNE
## Model description
The **roberta-base-bne** is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
## Intended uses and limitations
The **roberta-base-bne** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.
You can use the raw model for fill mask or fine-tune it to a downstream task.
## How to use
Here is how to use this model:
```python
>>> from transformers import pipeline
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
>>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
[{'score': 0.08422081917524338,
'token': 3832,
'token_str': ' desarrollar',
'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'},
{'score': 0.06348305940628052,
'token': 3078,
'token_str': ' crear',
'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'},
{'score': 0.06148449331521988,
'token': 2171,
'token_str': ' realizar',
'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'},
{'score': 0.056218471378088,
'token': 10880,
'token_str': ' elaborar',
'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'},
{'score': 0.05133328214287758,
'token': 31915,
'token_str': ' validar',
'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
>>> from transformers import RobertaTokenizer, RobertaModel
>>> tokenizer = RobertaTokenizer.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> model = RobertaModel.from_pretrained('PlanTL-GOB-ES/roberta-base-bne')
>>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje."
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 19, 768])
```
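And here is a minimal sketch of preparing the checkpoint for fine-tuning on a downstream task, as described in the intended uses above (the three-class setup is an arbitrary placeholder):
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-bne")
model = AutoModelForSequenceClassification.from_pretrained(
    "PlanTL-GOB-ES/roberta-base-bne", num_labels=3  # placeholder label count
)
# From here, training proceeds with the standard Trainer / PyTorch loop on labelled Spanish data.
```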
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-bne')
>>> set_seed(42)
>>> pprint(unmasker("Antonio está pensando en <mask>."))
[{'score': 0.07950365543365479,
'sequence': 'Antonio está pensando en ti.',
'token': 486,
'token_str': ' ti'},
{'score': 0.03375273942947388,
'sequence': 'Antonio está pensando en irse.',
'token': 13134,
'token_str': ' irse'},
{'score': 0.031026942655444145,
'sequence': 'Antonio está pensando en casarse.',
'token': 24852,
'token_str': ' casarse'},
{'score': 0.030703715980052948,
'sequence': 'Antonio está pensando en todo.',
'token': 665,
'token_str': ' todo'},
{'score': 0.02838558703660965,
'sequence': 'Antonio está pensando en ello.',
'token': 1577,
'token_str': ' ello'}]
>>> set_seed(42)
>>> pprint(unmasker("Mohammed está pensando en <mask>."))
[{'score': 0.05433618649840355,
'sequence': 'Mohammed está pensando en morir.',
'token': 9459,
'token_str': ' morir'},
{'score': 0.0400255024433136,
'sequence': 'Mohammed está pensando en irse.',
'token': 13134,
'token_str': ' irse'},
{'score': 0.03705748915672302,
'sequence': 'Mohammed está pensando en todo.',
'token': 665,
'token_str': ' todo'},
{'score': 0.03658654913306236,
'sequence': 'Mohammed está pensando en quedarse.',
'token': 9331,
'token_str': ' quedarse'},
{'score': 0.03329474478960037,
'sequence': 'Mohammed está pensando en ello.',
'token': 1577,
'token_str': ' ello'}]
```
## Training
### Training data
The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.
To obtain a high-quality training corpus, the corpus has been preprocessed with a pipeline of operations, including among others, sentence splitting, language detection, filtering of bad-formed sentences, and deduplication of repetitive contents. During the process, document boundaries are kept. This resulted in 2TB of Spanish clean corpus. Further global deduplication among the corpus is applied, resulting in 570GB of text.
Some of the statistics of the corpus:
| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE | 201,080,084 | 135,733,450,668 | 570GB |
### Training procedure
The training corpus has been tokenized using a byte version of Byte-Pair Encoding (BPE) used in the original [RoBERTA](https://arxiv.org/abs/1907.11692) model with a vocabulary size of 50,262 tokens.
The **roberta-base-bne** pre-training consists of a masked language model training, that follows the approach employed for the RoBERTa base. The training lasted a total of 48 hours with 16 computing nodes, each one with 4 NVIDIA V100 GPUs of 16GB VRAM.
## Evaluation
When fine-tuned on downstream tasks, this model achieves the following results:
| Dataset | Metric | [**RoBERTa-base**](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) |
|--------------|----------|------------|
| MLDoc | F1 | 0.9664 |
| CoNLL-NERC | F1 | 0.8851 |
| CAPITEL-NERC | F1 | 0.8960 |
| PAWS-X | F1 | 0.9020 |
| UD-POS | F1 | 0.9907 |
| CAPITEL-POS | F1 | 0.9846 |
| SQAC | F1 | 0.7923 |
| STS | Combined | 0.8533 |
| XNLI | Accuracy | 0.8016 |
For more evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish) or [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405).
## Additional information
### Author
Text Mining Unit (TeMU) from Barcelona Supercomputing Center (<bsc-temu@bsc.es>).
### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>.
### Copyright
Copyright by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx).
### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Funding
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the Plan-TL.
### Citation information
If you use this model, please cite our [paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405):
```
@article{,
title = {MarIA: Spanish Language Models},
author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
doi = {10.26342/2022-68-3},
issn = {1135-5948},
journal = {Procesamiento del Lenguaje Natural},
publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
volume = {68},
year = {2022},
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner of the models (SEDIA) nor the creator (BSC) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de Inteligencia Artificial.
En ningún caso el propietario de los modelos (SEDIA) ni el creador (BSC) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
</details>
|
206f3eff922f7ca9287a78c47662eb06
|
Haakf/distilbert-base-uncased-padded_left_allsides_news
|
Haakf
|
distilbert
| 8 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,933 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Haakf/distilbert-base-uncased-padded_left_allsides_news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1600
- Validation Loss: 2.0358
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
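In the meantime, the sketch below shows one assumed way to load the checkpoint (not part of the original card; the repo id and TensorFlow framework come from this card, and the example sentence is arbitrary):

```python
from transformers import pipeline

# Hedged sketch: assumes the checkpoint loads with the standard fill-mask pipeline.
fill_mask = pipeline(
    "fill-mask",
    model="Haakf/distilbert-base-uncased-padded_left_allsides_news",
    framework="tf",  # the card reports TensorFlow/Keras training
)
print(fill_mask("Lawmakers passed the [MASK] after a long debate."))
```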
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -712, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6348 | 2.2921 | 0 |
| 2.3547 | 2.1969 | 1 |
| 2.2381 | 2.0656 | 2 |
| 2.1568 | 2.0696 | 3 |
| 2.1510 | 1.9786 | 4 |
| 2.1493 | 2.0436 | 5 |
| 2.1469 | 2.0735 | 6 |
| 2.1520 | 2.0695 | 7 |
| 2.1617 | 2.0451 | 8 |
| 2.1600 | 2.0358 | 9 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f66b8dc319d83b4c8e4be0a89f983c49
|
muhtasham/bert-small-finetuned-cuad-full
|
muhtasham
|
bert
| 24 | 7 |
transformers
| 1 |
question-answering
| true | false | false |
apache-2.0
| null |
['cuad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,299 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-cuad-full
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the cuad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0274
## Model description
More information needed
## Intended uses & limitations
More information needed
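As a hedged illustration (assumed usage, not an official recipe; the contract snippet below is made up), the checkpoint should work with the standard `question-answering` pipeline:

```python
from transformers import pipeline

# Minimal sketch for CUAD-style contract question answering.
qa = pipeline("question-answering", model="muhtasham/bert-small-finetuned-cuad-full")
result = qa(
    question="What is the governing law of this agreement?",
    context="This Agreement shall be governed by and construed in accordance with the laws of the State of New York.",
)
print(result["answer"], result["score"])
```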
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.0323 | 1.0 | 47569 | 0.0280 |
| 0.0314 | 2.0 | 95138 | 0.0265 |
| 0.0276 | 3.0 | 142707 | 0.0274 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
1db7cffea828d862e2963a945c11b3ec
|
smartik/t5-small-finetuned-xsum
|
smartik
|
t5
| 22 | 7 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 920 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
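A minimal usage sketch (assumed; the input text is arbitrary and should be replaced with your own article):

```python
from transformers import pipeline

# Hedged sketch: loads the checkpoint with the standard summarization pipeline.
summarizer = pipeline("summarization", model="smartik/t5-small-finetuned-xsum")
article = (
    "The local council approved a new cycling scheme on Monday, promising "
    "protected lanes on three major roads by the end of next year."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```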
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
c7e28240b12c33a3f23bafac7d0771e2
|
rmihaylov/bert-base-pos-theseus-bg
|
rmihaylov
|
bert
| 9 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
mit
|
['bg']
|
['oscar', 'chitanka', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['torch']
| false | true | true | 1,821 | false |
# BERT BASE (cased) finetuned on Bulgarian part-of-speech data
Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is cased: it does make a difference
between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
It was finetuned on public part-of-speech Bulgarian data.
Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
### How to use
Here is how to use this model in PyTorch:
```python
>>> from transformers import pipeline
>>>
>>> model = pipeline(
>>> 'token-classification',
>>> model='rmihaylov/bert-base-pos-theseus-bg',
>>> tokenizer='rmihaylov/bert-base-pos-theseus-bg',
>>> device=0,
>>> revision=None)
>>> output = model('Здравей, аз се казвам Иван.')
>>> print(output)
[{'end': 7,
'entity': 'INTJ',
'index': 1,
'score': 0.9640711,
'start': 0,
'word': '▁Здравей'},
{'end': 8,
'entity': 'PUNCT',
'index': 2,
'score': 0.9998927,
'start': 7,
'word': ','},
{'end': 11,
'entity': 'PRON',
'index': 3,
'score': 0.9998872,
'start': 8,
'word': '▁аз'},
{'end': 14,
'entity': 'PRON',
'index': 4,
'score': 0.99990034,
'start': 11,
'word': '▁се'},
{'end': 21,
'entity': 'VERB',
'index': 5,
'score': 0.99989736,
'start': 14,
'word': '▁казвам'},
{'end': 26,
'entity': 'PROPN',
'index': 6,
'score': 0.99990785,
'start': 21,
'word': '▁Иван'},
{'end': 27,
'entity': 'PUNCT',
'index': 7,
'score': 0.9999685,
'start': 26,
'word': '.'}]
```
|
5afd8fe58d7870260a45528f31e82907
|
calebcsjm/reverse_text_generation_HarryPotter
|
calebcsjm
|
gpt2
| 8 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 922 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reverse_text_generation_HarryPotter
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
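A minimal generation sketch is shown below (assumed usage; judging by the model name, the training text was reversed, so prompts and outputs may need to be reversed accordingly — this is an assumption, not documented behaviour):

```python
from transformers import pipeline

# Hedged sketch: standard text-generation pipeline over the fine-tuned distilgpt2.
generator = pipeline("text-generation", model="calebcsjm/reverse_text_generation_HarryPotter")
# The prompt below is arbitrary; reverse it first if the model expects reversed text.
print(generator("Harry looked at the castle and", max_length=50, num_return_sequences=1))
```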
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
c37da2cc8f6d2bdd1e638842c1d128e4
|
sd-concepts-library/lofa
|
sd-concepts-library
| null | 10 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,066 | false |
### lofa on Stable Diffusion
This is the `<lofa>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
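Alternatively, a minimal local sketch with `diffusers` (assumed usage; requires a recent `diffusers` release with `load_textual_inversion` and a Stable Diffusion v1.x base checkpoint such as `runwayml/stable-diffusion-v1-5`):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load a base SD pipeline, then add the <lofa> embedding from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/lofa")
image = pipe("a photo of a <lofa> on a wooden table").images[0]
image.save("lofa.png")
```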
Here is the new concept you will be able to use as an `object`:





|
249976833e60e3828b21eb70c31bfec6
|
adrianhenkel/distilbert-base-uncased-fine-tuned-emotions
|
adrianhenkel
|
distilbert
| 10 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,367 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-fine-tuned-emotions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1377
- Accuracy: 0.9335
- F1 Score: 0.9338
## Model description
More information needed
## Intended uses & limitations
More information needed
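A minimal usage sketch (assumed; the example sentence is arbitrary and the labels follow the `emotion` dataset classes):

```python
from transformers import pipeline

# Hedged sketch: standard text-classification pipeline over the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="adrianhenkel/distilbert-base-uncased-fine-tuned-emotions",
)
print(classifier("I can't believe how happy this makes me!"))
```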
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.478 | 1.0 | 125 | 0.1852 | 0.931 | 0.9309 |
| 0.1285 | 2.0 | 250 | 0.1377 | 0.9335 | 0.9338 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.8.1+cu101
- Datasets 2.7.1
- Tokenizers 0.10.1
|
bf383a7ec9106db8b6521360899aba1f
|
mlxen/electra-squad-contrasting-validation
|
mlxen
|
electra
| 12 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 996 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-squad-contrasting-validation
This model is a fine-tuned version of [mlxen/electra-squad-training](https://huggingface.co/mlxen/electra-squad-training) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
0b7e26060fcd9ac51609b140690c915a
|
saattrupdan/verdict-classifier
|
saattrupdan
|
xlm-roberta
| 28 | 7 |
transformers
| 3 |
text-classification
| true | false | false |
mit
|
['am', 'ar', 'hy', 'eu', 'bn', 'bs', 'bg', 'my', 'hr', 'ca', 'cs', 'da', 'nl', 'en', 'et', 'fi', 'fr', 'ka', 'de', 'el', 'gu', 'ht', 'iw', 'hi', 'hu', 'is', 'in', 'it', 'ja', 'kn', 'km', 'ko', 'lo', 'lv', 'lt', 'ml', 'mr', 'ne', False, 'or', 'pa', 'ps', 'fa', 'pl', 'pt', 'ro', 'ru', 'sr', 'zh', 'sd', 'si', 'sk', 'sl', 'es', 'sv', 'tl', 'ta', 'te', 'th', 'tr', 'uk', 'ur', 'ug', 'vi', 'cy']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 8,270 | false |
# Multilingual Verdict Classifier
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on 2,500 deduplicated multilingual verdicts from [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into 65 languages with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
It achieves the following results on the evaluation set of 1,000 such verdicts (duplicates retained here to reflect the true distribution):
- Loss: 0.2238
- F1 Macro: 0.8540
- F1 Misinformation: 0.9798
- F1 Factual: 0.9889
- F1 Other: 0.5934
- Prec Macro: 0.8348
- Prec Misinformation: 0.9860
- Prec Factual: 0.9889
- Prec Other: 0.5294
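For reference, a minimal usage sketch (assumed usage; the verdict string below is made up, and the label names follow the three classes reported above):

```python
from transformers import pipeline

# Hedged sketch: classify a fact-check verdict as misinformation / factual / other.
classifier = pipeline("text-classification", model="saattrupdan/verdict-classifier")
print(classifier("Mostly false. The claim misrepresents the study's findings."))
```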
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 162525
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:----------:|:-------------------:|:------------:|:----------:|
| 1.1109 | 0.1 | 2000 | 1.2166 | 0.0713 | 0.1497 | 0.0 | 0.0640 | 0.2451 | 0.7019 | 0.0 | 0.0334 |
| 0.9551 | 0.2 | 4000 | 0.7801 | 0.3611 | 0.8889 | 0.0 | 0.1943 | 0.3391 | 0.8915 | 0.0 | 0.1259 |
| 0.9275 | 0.3 | 6000 | 0.7712 | 0.3468 | 0.9123 | 0.0 | 0.1282 | 0.3304 | 0.9051 | 0.0 | 0.0862 |
| 0.8881 | 0.39 | 8000 | 0.5386 | 0.3940 | 0.9524 | 0.0 | 0.2297 | 0.3723 | 0.9748 | 0.0 | 0.1420 |
| 0.7851 | 0.49 | 10000 | 0.3298 | 0.6886 | 0.9626 | 0.7640 | 0.3393 | 0.6721 | 0.9798 | 0.7727 | 0.2639 |
| 0.639 | 0.59 | 12000 | 0.2156 | 0.7847 | 0.9633 | 0.9355 | 0.4554 | 0.7540 | 0.9787 | 0.9062 | 0.3770 |
| 0.5677 | 0.69 | 14000 | 0.1682 | 0.7877 | 0.9694 | 0.9667 | 0.4270 | 0.7763 | 0.9745 | 0.9667 | 0.3878 |
| 0.5218 | 0.79 | 16000 | 0.1475 | 0.8037 | 0.9692 | 0.9667 | 0.4752 | 0.7804 | 0.9812 | 0.9667 | 0.3934 |
| 0.4682 | 0.89 | 18000 | 0.1458 | 0.8097 | 0.9734 | 0.9667 | 0.4889 | 0.7953 | 0.9791 | 0.9667 | 0.44 |
| 0.4188 | 0.98 | 20000 | 0.1416 | 0.8370 | 0.9769 | 0.9724 | 0.5618 | 0.8199 | 0.9826 | 0.9670 | 0.5102 |
| 0.3735 | 1.08 | 22000 | 0.1624 | 0.8094 | 0.9698 | 0.9368 | 0.5217 | 0.7780 | 0.9823 | 0.89 | 0.4615 |
| 0.3242 | 1.18 | 24000 | 0.1648 | 0.8338 | 0.9769 | 0.9727 | 0.5517 | 0.8167 | 0.9826 | 0.9570 | 0.5106 |
| 0.2785 | 1.28 | 26000 | 0.1843 | 0.8261 | 0.9739 | 0.9780 | 0.5263 | 0.8018 | 0.9836 | 0.9674 | 0.4545 |
| 0.25 | 1.38 | 28000 | 0.1975 | 0.8344 | 0.9744 | 0.9834 | 0.5455 | 0.8072 | 0.9859 | 0.9780 | 0.4576 |
| 0.2176 | 1.48 | 30000 | 0.1849 | 0.8209 | 0.9691 | 0.9889 | 0.5047 | 0.7922 | 0.9846 | 0.9889 | 0.4030 |
| 0.1966 | 1.58 | 32000 | 0.2119 | 0.8194 | 0.9685 | 0.9944 | 0.4954 | 0.7920 | 0.9846 | 1.0 | 0.3913 |
| 0.1738 | 1.67 | 34000 | 0.2110 | 0.8352 | 0.9708 | 0.9944 | 0.5405 | 0.8035 | 0.9881 | 1.0 | 0.4225 |
| 0.1625 | 1.77 | 36000 | 0.2152 | 0.8165 | 0.9709 | 0.9834 | 0.4950 | 0.7905 | 0.9835 | 0.9780 | 0.4098 |
| 0.1522 | 1.87 | 38000 | 0.2300 | 0.8097 | 0.9697 | 0.9832 | 0.4762 | 0.7856 | 0.9835 | 0.9888 | 0.3846 |
| 0.145 | 1.97 | 40000 | 0.1955 | 0.8519 | 0.9774 | 0.9889 | 0.5895 | 0.8280 | 0.9860 | 0.9889 | 0.5091 |
| 0.1248 | 2.07 | 42000 | 0.2308 | 0.8149 | 0.9703 | 0.9889 | 0.4854 | 0.7897 | 0.9835 | 0.9889 | 0.3968 |
| 0.1186 | 2.17 | 44000 | 0.2368 | 0.8172 | 0.9733 | 0.9834 | 0.4948 | 0.7942 | 0.9836 | 0.9780 | 0.4211 |
| 0.1122 | 2.26 | 46000 | 0.2401 | 0.7968 | 0.9804 | 0.8957 | 0.5143 | 0.8001 | 0.9849 | 1.0 | 0.4154 |
| 0.1099 | 2.36 | 48000 | 0.2290 | 0.8119 | 0.9647 | 0.9834 | 0.4874 | 0.7777 | 0.9880 | 0.9780 | 0.3671 |
| 0.1093 | 2.46 | 50000 | 0.2256 | 0.8247 | 0.9745 | 0.9889 | 0.5106 | 0.8053 | 0.9825 | 0.9889 | 0.4444 |
| 0.1053 | 2.56 | 52000 | 0.2416 | 0.8456 | 0.9799 | 0.9889 | 0.5679 | 0.8434 | 0.9805 | 0.9889 | 0.5610 |
| 0.1049 | 2.66 | 54000 | 0.2850 | 0.7585 | 0.9740 | 0.8902 | 0.4112 | 0.7650 | 0.9802 | 0.9865 | 0.3284 |
| 0.098 | 2.76 | 56000 | 0.2828 | 0.8049 | 0.9642 | 0.9889 | 0.4615 | 0.7750 | 0.9856 | 0.9889 | 0.3506 |
| 0.0962 | 2.86 | 58000 | 0.2238 | 0.8540 | 0.9798 | 0.9889 | 0.5934 | 0.8348 | 0.9860 | 0.9889 | 0.5294 |
| 0.0975 | 2.95 | 60000 | 0.2494 | 0.8249 | 0.9715 | 0.9889 | 0.5143 | 0.7967 | 0.9858 | 0.9889 | 0.4154 |
| 0.0877 | 3.05 | 62000 | 0.2464 | 0.8274 | 0.9733 | 0.9889 | 0.5200 | 0.8023 | 0.9847 | 0.9889 | 0.4333 |
| 0.0848 | 3.15 | 64000 | 0.2338 | 0.8263 | 0.9740 | 0.9889 | 0.5161 | 0.8077 | 0.9814 | 0.9889 | 0.4528 |
| 0.0859 | 3.25 | 66000 | 0.2335 | 0.8365 | 0.9750 | 0.9889 | 0.5455 | 0.8108 | 0.9859 | 0.9889 | 0.4576 |
| 0.084 | 3.35 | 68000 | 0.2067 | 0.8343 | 0.9763 | 0.9889 | 0.5376 | 0.8148 | 0.9837 | 0.9889 | 0.4717 |
| 0.0837 | 3.45 | 70000 | 0.2516 | 0.8249 | 0.9746 | 0.9889 | 0.5111 | 0.8097 | 0.9803 | 0.9889 | 0.46 |
| 0.0809 | 3.54 | 72000 | 0.2948 | 0.8258 | 0.9728 | 0.9944 | 0.5102 | 0.8045 | 0.9824 | 1.0 | 0.4310 |
| 0.0833 | 3.64 | 74000 | 0.2457 | 0.8494 | 0.9744 | 0.9944 | 0.5794 | 0.8173 | 0.9893 | 1.0 | 0.4627 |
| 0.0796 | 3.74 | 76000 | 0.3188 | 0.8277 | 0.9733 | 0.9889 | 0.5208 | 0.8059 | 0.9825 | 0.9889 | 0.4464 |
| 0.0821 | 3.84 | 78000 | 0.2642 | 0.8343 | 0.9714 | 0.9944 | 0.5370 | 0.8045 | 0.9870 | 1.0 | 0.4265 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.2
|
37d35d50ba06c3d3606819d4b9083c61
|
sd-concepts-library/darkplane
|
sd-concepts-library
| null | 26 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,842 | false |
### DarkPlane on Stable Diffusion
This is the `<DarkPlane>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





















|
10c9770d006cd7732f50cf5b92b131ff
|
stanfordnlp/corenlp-french
|
stanfordnlp
| null | 3 | 0 | null | 2 | null | false | false | false |
gpl-2.0
|
['fr']
| null | null | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
['corenlp']
| false | true | true | 659 | false |
# CoreNLP model for French
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
This card and repo were automatically prepared with `hugging_corenlp.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-01-21 01:37:03.293
|
1a9bade585c20b31da1e3aa5db0f52a3
|
fathyshalab/all-roberta-large-v1-auto_and_commute-3-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,521 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-3-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
c73256b3489ff0c602c80b6f42e90112
|
Davlan/afro-xlmr-large
|
Davlan
|
xlm-roberta
| 9 | 14,142 |
transformers
| 3 |
fill-mask
| true | false | false |
mit
|
['en', 'fr', 'ar', 'ha', 'ig', 'yo', 'rn', 'rw', 'sn', 'xh', 'zu', 'om', 'am', 'so', 'st', 'ny', 'mg', 'sw', 'af']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,228 | false |
# afro-xlmr-large
AfroXLMR-large was created by MLM adaptation of the XLM-R-large model on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu), covering the major African language families, plus 3 high-resource languages (Arabic, French, and English).
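A minimal fill-mask sketch (assumed usage; XLM-R checkpoints use the `<mask>` token, and the example sentence is arbitrary):

```python
from transformers import pipeline

# Hedged sketch: standard fill-mask pipeline over the adapted checkpoint.
unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-large")
print(unmasker("The president of <mask> arrived in Nairobi today."))
```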
## Eval results on MasakhaNER (F-score)
language| XLM-R-miniLM| XLM-R-base |XLM-R-large | afro-xlmr-large | afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini
-|-|-|-|-|-|-|-
amh |69.5|70.6|76.2|79.7|76.1|70.1|69.7
hau |74.5|89.5|90.5|91.4|91.2|91.4|87.7
ibo |81.9|84.8|84.1|87.7|87.4|86.6|83.5
kin |68.6|73.3|73.8|79.1|78.0|77.5|74.1
lug |64.7|79.7|81.6|86.7|82.9|83.2|77.4
luo |11.7|74.9|73.6|78.1|75.1|75.4|17.5
pcm |83.2|87.3|89.0|91.0|89.6|89.0|85.5
swa |86.3|87.4|89.4|90.4|88.6|88.7|86.0
wol |51.7|63.9|67.9|69.6|67.4|65.9|59.0
yor |72.0|78.3|78.9|85.2|82.1|81.3|75.1
avg |66.4|79.0|80.5|83.9|81.8|80.9|71.6
### BibTeX entry and citation info
```
@inproceedings{alabi-etal-2022-adapting,
title = "Adapting Pre-trained Language Models to {A}frican Languages via Multilingual Adaptive Fine-Tuning",
author = "Alabi, Jesujoba O. and
Adelani, David Ifeoluwa and
Mosbach, Marius and
Klakow, Dietrich",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.382",
pages = "4336--4349",
abstract = "Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.",
}
```
|
184b40dcfd7d1687cf6d0d6e1dd0f81c
|
chrisvinsen/wav2vec2-10
|
chrisvinsen
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-10
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0354
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.2231 | 0.78 | 200 | 3.0442 | 1.0 |
| 2.8665 | 1.57 | 400 | 3.0081 | 1.0 |
| 2.8596 | 2.35 | 600 | 3.0905 | 1.0 |
| 2.865 | 3.14 | 800 | 3.0443 | 1.0 |
| 2.8613 | 3.92 | 1000 | 3.0316 | 1.0 |
| 2.8601 | 4.71 | 1200 | 3.0574 | 1.0 |
| 2.8554 | 5.49 | 1400 | 3.0261 | 1.0 |
| 2.8592 | 6.27 | 1600 | 3.0785 | 1.0 |
| 2.8606 | 7.06 | 1800 | 3.1129 | 1.0 |
| 2.8547 | 7.84 | 2000 | 3.0647 | 1.0 |
| 2.8565 | 8.63 | 2200 | 3.0624 | 1.0 |
| 2.8633 | 9.41 | 2400 | 2.9900 | 1.0 |
| 2.855 | 10.2 | 2600 | 3.0084 | 1.0 |
| 2.8581 | 10.98 | 2800 | 3.0092 | 1.0 |
| 2.8545 | 11.76 | 3000 | 3.0299 | 1.0 |
| 2.8583 | 12.55 | 3200 | 3.0293 | 1.0 |
| 2.8536 | 13.33 | 3400 | 3.0566 | 1.0 |
| 2.8556 | 14.12 | 3600 | 3.0385 | 1.0 |
| 2.8573 | 14.9 | 3800 | 3.0098 | 1.0 |
| 2.8551 | 15.69 | 4000 | 3.0623 | 1.0 |
| 2.8546 | 16.47 | 4200 | 3.0964 | 1.0 |
| 2.8569 | 17.25 | 4400 | 3.0648 | 1.0 |
| 2.8543 | 18.04 | 4600 | 3.0377 | 1.0 |
| 2.8532 | 18.82 | 4800 | 3.0454 | 1.0 |
| 2.8579 | 19.61 | 5000 | 3.0301 | 1.0 |
| 2.8532 | 20.39 | 5200 | 3.0364 | 1.0 |
| 2.852 | 21.18 | 5400 | 3.0187 | 1.0 |
| 2.8561 | 21.96 | 5600 | 3.0172 | 1.0 |
| 2.8509 | 22.75 | 5800 | 3.0420 | 1.0 |
| 2.8551 | 23.53 | 6000 | 3.0309 | 1.0 |
| 2.8552 | 24.31 | 6200 | 3.0416 | 1.0 |
| 2.8521 | 25.1 | 6400 | 3.0469 | 1.0 |
| 2.852 | 25.88 | 6600 | 3.0489 | 1.0 |
| 2.854 | 26.67 | 6800 | 3.0394 | 1.0 |
| 2.8572 | 27.45 | 7000 | 3.0336 | 1.0 |
| 2.8502 | 28.24 | 7200 | 3.0363 | 1.0 |
| 2.8557 | 29.02 | 7400 | 3.0304 | 1.0 |
| 2.8522 | 29.8 | 7600 | 3.0354 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
61fe977c5118d0c4fbfca9bcfa13ef3a
|
afrodp95/distilbert-base-uncased-finetuned-job-skills-ner
|
afrodp95
|
distilbert
| 8 | 9 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 2,170 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# afrodp95/distilbert-base-uncased-finetuned-job-skills-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0923
- Validation Loss: 0.1313
- Train Precision: 0.3601
- Train Recall: 0.4922
- Train F1: 0.4159
- Train Accuracy: 0.9522
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1386, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3257 | 0.1935 | 0.3122 | 0.2144 | 0.2542 | 0.9521 | 0 |
| 0.1564 | 0.1464 | 0.3503 | 0.3423 | 0.3463 | 0.9546 | 1 |
| 0.1257 | 0.1365 | 0.3593 | 0.4893 | 0.4143 | 0.9522 | 2 |
| 0.1102 | 0.1318 | 0.3607 | 0.4692 | 0.4079 | 0.9521 | 3 |
| 0.1002 | 0.1305 | 0.3504 | 0.4941 | 0.4100 | 0.9515 | 4 |
| 0.0923 | 0.1313 | 0.3601 | 0.4922 | 0.4159 | 0.9522 | 5 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
d52b9e63f2fbfccac801d37e11dd6e94
|
jonatasgrosman/exp_w2v2t_et_vp-nl_s354
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'et']
| false | true | true | 469 | false |
# exp_w2v2t_et_vp-nl_s354
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
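A minimal transcription sketch with HuggingSound (assumed usage; the audio paths are placeholders and must point to your own 16 kHz recordings):

```python
from huggingsound import SpeechRecognitionModel

# Hedged sketch: transcribe Estonian audio files with the fine-tuned checkpoint.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_vp-nl_s354")
audio_paths = ["/path/to/file1.mp3", "/path/to/file2.wav"]  # replace with real paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```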
|
bcd31911acd5ae1fc1549e82503cde5e
|
chinhon/bart-large-commentaries_hdwriter
|
chinhon
|
bart
| 13 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,860 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-commentaries_hdwriter
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1619
- Rouge1: 26.1101
- Rouge2: 9.928
- Rougel: 22.9007
- Rougelsum: 23.117
- Gen Len: 15.9536
## Model description
More information needed
## Intended uses & limitations
More information needed
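A minimal usage sketch (assumed; the model name suggests it writes headlines for commentaries, so the seq2seq checkpoint is loaded with the summarization pipeline and a short output length):

```python
from transformers import pipeline

# Hedged sketch: generate a headline-length summary for a commentary piece.
headline_writer = pipeline("summarization", model="chinhon/bart-large-commentaries_hdwriter")
commentary = "Paste the full commentary text here."
print(headline_writer(commentary, max_length=32, min_length=5, do_sample=False))
```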
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6237 | 1.0 | 5072 | 2.5309 | 26.4063 | 9.1795 | 22.6699 | 22.9125 | 17.3103 |
| 1.8808 | 2.0 | 10144 | 2.5049 | 25.3706 | 8.7568 | 21.8594 | 22.1233 | 15.8579 |
| 1.3084 | 3.0 | 15216 | 2.6680 | 26.6284 | 9.9914 | 23.1477 | 23.3625 | 16.8832 |
| 0.9247 | 4.0 | 20288 | 2.8923 | 26.3827 | 9.8217 | 22.9524 | 23.1651 | 15.4529 |
| 0.692 | 5.0 | 25360 | 3.1619 | 26.1101 | 9.928 | 22.9007 | 23.117 | 15.9536 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
aeb5f03034a06cd610d7ba35d4420f98
|
danhsf/xlm-roberta-base-finetuned-panx-de-fr
|
danhsf
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,321 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
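A minimal usage sketch (assumed; PAN-X/WikiANN-style German and French NER, with an arbitrary example sentence):

```python
from transformers import pipeline

# Hedged sketch: token-classification pipeline with entity grouping.
ner = pipeline(
    "token-classification",
    model="danhsf/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```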
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
0f560a44ec08b73cc0203ff1be4dc6b8
|
Helsinki-NLP/opus-mt-rn-fr
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['rn', 'fr']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,973 | false |
### run-fra
* source group: Rundi
* target group: French
* OPUS readme: [run-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md)
* model: transformer-align
* source language(s): run
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.fra | 18.2 | 0.397 |
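A minimal translation sketch (assumed usage; replace the example with your own Rundi text):

```python
from transformers import pipeline

# Hedged sketch: standard translation pipeline over the Marian checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-rn-fr")
print(translator("Amahoro!"))
```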
### System Info:
- hf_name: run-fra
- source_languages: run
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'fr']
- src_constituents: {'run'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: fra
- short_pair: rn-fr
- chrF2_score: 0.397
- bleu: 18.2
- brevity_penalty: 1.0
- ref_len: 7496.0
- src_name: Rundi
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: fr
- prefer_old: False
- long_pair: run-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
73aea6815502a1995a0c9ca9e188bff3
|
gokuls/distilbert_add_pre-training-dim-96
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['wikitext']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,239 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_pre-training-dim-96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikitext wikitext-103-raw-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6092
- Accuracy: 0.1494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 14.685 | 1.0 | 3573 | 9.3922 | 0.1240 |
| 8.0255 | 2.0 | 7146 | 7.1510 | 0.1315 |
| 7.0152 | 3.0 | 10719 | 6.7861 | 0.1482 |
| 6.8127 | 4.0 | 14292 | 6.7053 | 0.1493 |
| 6.74 | 5.0 | 17865 | 6.6695 | 0.1474 |
| 6.7067 | 6.0 | 21438 | 6.6431 | 0.1491 |
| 6.6871 | 7.0 | 25011 | 6.6204 | 0.1483 |
| 6.6748 | 8.0 | 28584 | 6.6250 | 0.1473 |
| 6.6649 | 9.0 | 32157 | 6.6108 | 0.1486 |
| 6.6596 | 10.0 | 35730 | 6.6140 | 0.1497 |
| 6.6536 | 11.0 | 39303 | 6.6067 | 0.1493 |
| 6.6483 | 12.0 | 42876 | 6.6140 | 0.1489 |
| 6.6463 | 13.0 | 46449 | 6.6096 | 0.1484 |
| 6.6434 | 14.0 | 50022 | 6.5570 | 0.1526 |
| 6.6414 | 15.0 | 53595 | 6.5836 | 0.1526 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
05a240605970e5fbfdbcd9feaeed6f57
|
jamie613/mt5_fill_puntuation
|
jamie613
|
mt5
| 9 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,521 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_fill_puntuation
This model is a fine-tuned version of [jamie613/mt5_fill_puntuation](https://huggingface.co/jamie613/mt5_fill_puntuation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0918 | 0.04 | 500 | 0.0803 |
| 0.0894 | 0.07 | 1000 | 0.0773 |
| 0.0905 | 0.11 | 1500 | 0.0822 |
| 0.0908 | 0.15 | 2000 | 0.0833 |
| 0.0868 | 0.18 | 2500 | 0.0840 |
| 0.09 | 0.22 | 3000 | 0.0811 |
| 0.0868 | 0.26 | 3500 | 0.0735 |
| 0.0869 | 0.29 | 4000 | 0.0805 |
| 0.0874 | 0.33 | 4500 | 0.0742 |
| 0.088 | 0.37 | 5000 | 0.0749 |
| 0.0884 | 0.4 | 5500 | 0.0730 |
| 0.0861 | 0.44 | 6000 | 0.0749 |
| 0.0804 | 0.48 | 6500 | 0.0739 |
| 0.0845 | 0.51 | 7000 | 0.0717 |
| 0.0861 | 0.55 | 7500 | 0.0743 |
| 0.0812 | 0.59 | 8000 | 0.0726 |
| 0.0824 | 0.62 | 8500 | 0.0729 |
| 0.0836 | 0.66 | 9000 | 0.0751 |
| 0.079 | 0.7 | 9500 | 0.0731 |
| 0.0806 | 0.73 | 10000 | 0.0725 |
| 0.0798 | 0.77 | 10500 | 0.0749 |
| 0.0794 | 0.81 | 11000 | 0.0725 |
| 0.0795 | 0.84 | 11500 | 0.0726 |
| 0.0755 | 0.88 | 12000 | 0.0732 |
| 0.0815 | 0.92 | 12500 | 0.0722 |
| 0.0776 | 0.95 | 13000 | 0.0719 |
| 0.0838 | 0.99 | 13500 | 0.0717 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
7b9cb523e2e20a37b532a3a34c39e1b7
|
deepiit98/Catalan_language-clustered
|
deepiit98
|
distilbert
| 8 | 10 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,871 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# deepiit98/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5877
- Train End Logits Accuracy: 0.8681
- Train Start Logits Accuracy: 0.8507
- Validation Loss: 0.4207
- Validation End Logits Accuracy: 0.8182
- Validation Start Logits Accuracy: 0.8182
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5877 | 0.8681 | 0.8507 | 0.4207 | 0.8182 | 0.8182 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ac53012176c7b068bd817b3f37dd7c10
|
duja1/roy
|
duja1
| null | 49 | 3 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 3,158 | false |
### Roy Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
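A minimal local sketch (assumed usage; this presumes the repo hosts the full fine-tuned pipeline weights pushed by the Dreambooth training space):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load the Dreambooth pipeline and prompt it with the concept token.
pipe = StableDiffusionPipeline.from_pretrained("duja1/roy", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait photo of r123oy, studio lighting").images[0]
image.save("roy.png")
```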
Sample pictures of:
r123oy (use that on your prompt)

|
82f1d3a120169c03a947725d6479a35a
|
Geotrend/bert-base-en-es-pt-cased
|
Geotrend
|
bert
| 8 | 2 |
transformers
| 0 |
fill-mask
| true | true | true |
apache-2.0
|
['multilingual']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,301 | false |
# bert-base-en-es-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
01c76a4b32a8a85e33e843d102b190f2
|
sd-concepts-library/collage14
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,188 | false |
### Collage14 on Stable Diffusion
This is the `<C14>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






|
5824d55fb27e922ad1a4c35f95b41886
|
Helsinki-NLP/opus-mt-en-he
|
Helsinki-NLP
|
marian
| 11 | 9,013 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-en-he
* source languages: en
* target languages: he
* OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.he | 40.1 | 0.609 |
|
662ccf165718828e1d46870c9dded7ba
|
geevegeorge/tjkicksmodel3
|
geevegeorge
| null | 8 | 2 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['geevegeorge/tjkicksdb3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,206 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# tjkicksmodel3
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `geevegeorge/tjkicksdb3` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch: assumes the repo hosts a full diffusers pipeline
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("geevegeorge/tjkicksmodel3")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 8
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/tjkicksmodel3/tensorboard?#scalars)
|
f2ca1df8646d5bf1195cd81248e0f331
|
chrisvinsen/wav2vec2-6
|
chrisvinsen
|
wav2vec2
| 18 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,233 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-6
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2459
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.5873 | 1.56 | 200 | 5.4586 | 1.0 |
| 4.1846 | 3.12 | 400 | 5.2278 | 1.0 |
| 4.1711 | 4.69 | 600 | 5.3131 | 1.0 |
| 4.1581 | 6.25 | 800 | 5.2558 | 1.0 |
| 4.1275 | 7.81 | 1000 | 5.2556 | 1.0 |
| 4.1452 | 9.38 | 1200 | 5.2637 | 1.0 |
| 4.1614 | 10.94 | 1400 | 5.2847 | 1.0 |
| 4.1667 | 12.5 | 1600 | 5.2349 | 1.0 |
| 4.1471 | 14.06 | 1800 | 5.2850 | 1.0 |
| 4.1268 | 15.62 | 2000 | 5.2510 | 1.0 |
| 4.1701 | 17.19 | 2200 | 5.2605 | 1.0 |
| 4.1459 | 18.75 | 2400 | 5.2493 | 1.0 |
| 4.1411 | 20.31 | 2600 | 5.2649 | 1.0 |
| 4.1351 | 21.88 | 2800 | 5.2541 | 1.0 |
| 4.1442 | 23.44 | 3000 | 5.2459 | 1.0 |
| 4.1805 | 25.0 | 3200 | 5.2232 | 1.0 |
| 4.1262 | 26.56 | 3400 | 5.2384 | 1.0 |
| 4.145 | 28.12 | 3600 | 5.2522 | 1.0 |
| 4.142 | 29.69 | 3800 | 5.2459 | 1.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
bac4a8f78857862dcaa714b0ff78568e
|
gustavecortal/distilcamembert-cae-territory
|
gustavecortal
|
camembert
| 6 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,675 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilcamembert-cae-territory
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7346
- Precision: 0.7139
- Recall: 0.6835
- F1: 0.6887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.1749 | 1.0 | 40 | 1.0498 | 0.1963 | 0.4430 | 0.2720 |
| 0.9833 | 2.0 | 80 | 0.8853 | 0.7288 | 0.6709 | 0.6625 |
| 0.6263 | 3.0 | 120 | 0.7503 | 0.7237 | 0.6709 | 0.6689 |
| 0.3563 | 4.0 | 160 | 0.7346 | 0.7139 | 0.6835 | 0.6887 |
| 0.2253 | 5.0 | 200 | 0.7303 | 0.7139 | 0.6835 | 0.6887 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
01f5aaed9ec3198604118add35730163
|
dbmdz/bert-base-historic-multilingual-cased
|
dbmdz
|
bert
| 21 | 58 |
transformers
| 1 |
fill-mask
| true | false | true |
mit
|
['multilingual']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 6,568 | false |
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition
More information about our hmBERT model can be found in our new paper:
["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Smaller Models
We have also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different ocr confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
The total size is approximately 133GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
4d6e82e81691f676068555c463ef702e
|
thatdramebaazguy/roberta-base-wikimovies
|
thatdramebaazguy
|
roberta
| 10 | 7 |
transformers
| 1 |
fill-mask
| true | true | true |
cc-by-4.0
|
['English']
|
['wikimovies']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['roberta', 'roberta-base', 'masked-language-modeling']
| false | true | true | 1,221 | false |
# roberta-base for MLM
```
from transformers import pipeline

model_name = "thatdramebaazguy/roberta-base-wikimovies"
fill_mask = pipeline(task="fill-mask", model=model_name, tokenizer=model_name, revision="v1.0")
```
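Continuing from the snippet above, a quick check of the masked-language-modeling head (the example sentence is illustrative; RoBERTa checkpoints use `<mask>` as the mask token):

```python
# Assumes `fill_mask` was created as in the snippet above.
for pred in fill_mask("The Matrix is a 1999 science fiction <mask> directed by the Wachowskis."):
    print(pred["token_str"], round(pred["score"], 3))
```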
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Fill-Mask
**Training data:** wikimovies
**Eval data:** wikimovies
**Infrastructure**: 2x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/shell_scripts/train_movie_roberta.sh)
## Hyperparameters
```
num_examples = 4346
batch_size = 16
n_epochs = 3
base_LM_model = "roberta-base"
learning_rate = 5e-05
max_query_length=64
Gradient Accumulation steps = 1
Total optimization steps = 816
evaluation_strategy=IntervalStrategy.NO
prediction_loss_only=False
per_device_train_batch_size=8
per_device_eval_batch_size=8
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08
max_grad_norm=1.0
lr_scheduler_type=SchedulerType.LINEAR
warmup_ratio=0.0
seed=42
eval_steps=500
metric_for_best_model=None
greater_is_better=None
label_smoothing_factor=0.0
```
## Performance
perplexity = 4.3808
Some of my work:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
deced66bff94e7b8afcacfc04c402184
|
elopezlopez/distilbert-base-uncased_fold_1_ternary
|
elopezlopez
|
distilbert
| 15 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,656 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fold_1_ternary
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0582
- F1: 0.7326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.5524 | 0.6755 |
| 0.5648 | 2.0 | 580 | 0.5654 | 0.7124 |
| 0.5648 | 3.0 | 870 | 0.6547 | 0.6896 |
| 0.2712 | 4.0 | 1160 | 0.8916 | 0.7263 |
| 0.2712 | 5.0 | 1450 | 1.1187 | 0.7120 |
| 0.1147 | 6.0 | 1740 | 1.2778 | 0.7114 |
| 0.0476 | 7.0 | 2030 | 1.4441 | 0.7151 |
| 0.0476 | 8.0 | 2320 | 1.5535 | 0.7133 |
| 0.0187 | 9.0 | 2610 | 1.6439 | 0.7212 |
| 0.0187 | 10.0 | 2900 | 1.7261 | 0.7313 |
| 0.0138 | 11.0 | 3190 | 1.6806 | 0.7292 |
| 0.0138 | 12.0 | 3480 | 1.8425 | 0.7111 |
| 0.009 | 13.0 | 3770 | 1.9207 | 0.7213 |
| 0.0045 | 14.0 | 4060 | 1.8900 | 0.7202 |
| 0.0045 | 15.0 | 4350 | 1.9730 | 0.7216 |
| 0.0042 | 16.0 | 4640 | 2.0775 | 0.7041 |
| 0.0042 | 17.0 | 4930 | 2.0514 | 0.7106 |
| 0.0019 | 18.0 | 5220 | 2.0582 | 0.7326 |
| 0.0039 | 19.0 | 5510 | 2.1010 | 0.7081 |
| 0.0039 | 20.0 | 5800 | 2.0487 | 0.7273 |
| 0.0025 | 21.0 | 6090 | 2.0415 | 0.7254 |
| 0.0025 | 22.0 | 6380 | 2.0753 | 0.7157 |
| 0.0017 | 23.0 | 6670 | 2.0554 | 0.7246 |
| 0.0017 | 24.0 | 6960 | 2.0644 | 0.7290 |
| 0.0001 | 25.0 | 7250 | 2.0711 | 0.7310 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
c4287cce013d9818f28de81e7aee123a
|
huynguyen208/marian-finetuned-kde4-en-to-vi
|
huynguyen208
|
marian
| 14 | 4 |
transformers
| 0 |
translation
| true | false | false |
apache-2.0
| null |
['kde4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation', 'generated_from_trainer']
| true | true | true | 1,075 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-vi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2134
- Bleu: 51.2085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
f17ee09f90943ec0bcf4f458f4e47c29
|
Aphophis420/stargate-diffusion-sg1-1
|
Aphophis420
| null | 59 | 5 |
diffusers
| 3 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 1,121 | false |
### stargate-diffusion-sg1-1 Dreambooth model trained by Aphophis420 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
USE: *prompt*, still from stargate sg1
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)







|
4cdd5c970b3d811387e229ca134aaaf1
|
SiraH/distilbert-base-uncased-finetuned-cola
|
SiraH
|
distilbert
| 38 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8442
- Matthews Correlation: 0.5443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5267 | 1.0 | 535 | 0.5646 | 0.3655 |
| 0.3477 | 2.0 | 1070 | 0.5291 | 0.4898 |
| 0.2324 | 3.0 | 1605 | 0.5629 | 0.5410 |
| 0.1774 | 4.0 | 2140 | 0.7630 | 0.5370 |
| 0.1248 | 5.0 | 2675 | 0.8442 | 0.5443 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
5f2aa4a4b18a811249077ddbe188b1b4
|
federicopascual/finetuning-sentiment-model-3000-samples
|
federicopascual
|
distilbert
| 13 | 95 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,056 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Accuracy: 0.8667
- F1: 0.8734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
337a5f85d4109333c5adf7eacba8f966
|
infinitejoy/wav2vec2-large-xls-r-300m-mongolian
|
infinitejoy
|
wav2vec2
| 19 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mn']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mn', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
| true | true | true | 1,655 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mongolian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - MN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6003
- Wer: 0.4473
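A minimal transcription sketch with the `transformers` ASR pipeline (the audio file path is a placeholder; input audio should be sampled at 16 kHz, and decoding a local file requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="infinitejoy/wav2vec2-large-xls-r-300m-mongolian")
print(asr("sample_mn.wav")["text"])  # "sample_mn.wav" is a placeholder path
```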
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.3677 | 15.87 | 2000 | 0.6432 | 0.6198 |
| 1.1379 | 31.75 | 4000 | 0.6196 | 0.5592 |
| 1.0093 | 47.62 | 6000 | 0.5828 | 0.5117 |
| 0.8888 | 63.49 | 8000 | 0.5754 | 0.4822 |
| 0.7985 | 79.37 | 10000 | 0.5987 | 0.4690 |
| 0.697 | 95.24 | 12000 | 0.6014 | 0.4471 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
c5fe5a9f64f66d5a9f5d77aa312cba1d
|
davidfisher/distilbert-base-uncased-finetuned-cola
|
davidfisher
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5254
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5360 | 0.4307 |
| 0.3491 | 2.0 | 1070 | 0.5128 | 0.4972 |
| 0.2382 | 3.0 | 1605 | 0.5254 | 0.5475 |
| 0.1756 | 4.0 | 2140 | 0.7479 | 0.5330 |
| 0.1248 | 5.0 | 2675 | 0.7978 | 0.5414 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
8936d58c06b02fcb56725fbf0052778a
|
Dao007forever/distilbert-base-uncased-finetuned-emotion
|
Dao007forever
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.9215
- F1: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.851 | 1.0 | 250 | 0.3314 | 0.8985 | 0.8952 |
| 0.2565 | 2.0 | 500 | 0.2251 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
51ca5a11a1cfde7c5abd2f8257699fa1
|
KarelDO/gpt2.CEBaB_confounding.observational.sa.5-class.seed_43
|
KarelDO
|
gpt2
| 15 | 2 |
transformers
| 0 | null | true | false | false |
mit
|
['en']
|
['OpenTable']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,084 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2.CEBaB_confounding.observational.sa.5-class.seed_43
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9838
- Accuracy: 0.5918
- Macro-f1: 0.4948
- Weighted-macro-f1: 0.5380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
6b4d24421e2a63e77b1f939a2a350416
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-localParams
|
nntadotzip
|
xlnet
| 12 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,277 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-localParams
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1172 | 1.0 | 1119 | 0.0657 |
| 0.0564 | 2.0 | 2238 | 0.0237 |
| 0.033 | 3.0 | 3357 | 0.0238 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
9286325f7adebfefcccac79abf13e630
|
anas-awadalla/bert-large-uncased-lora-squad
|
anas-awadalla
| null | 21 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,040 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-lora-squad
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
1c67c65dc58047de430f8307a7172713
|
brian-the-dev/baitblocker
|
brian-the-dev
|
bert
| 22 | 18 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['id_clickbait']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,393 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baitblocker
This model is a fine-tuned version of [cahya/bert-base-indonesian-1.5G](https://huggingface.co/cahya/bert-base-indonesian-1.5G) on the [id_clickbait](https://huggingface.co/datasets/id_clickbait) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4660
- Accuracy: 0.8347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4025 | 1.0 | 1500 | 0.4074 | 0.827 |
| 0.3581 | 2.0 | 3000 | 0.4090 | 0.8283 |
| 0.333 | 3.0 | 4500 | 0.4660 | 0.8347 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
bb3b77e0146178054b2cc373feb56b8c
|
0RisingStar0/HighRiseMixV2
|
0RisingStar0
| null | 9 | 0 |
diffusers
| 6 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 2,003 | false |
<center><b>HighRiseMixV2.5</b></center>
<p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00733-2938506110-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo))%2C%20(gradient%20pink%20eye%2C%20black%20hair%2C%20short%20hair%2C%20school%20uniform%2C%20mic.png">
<img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00729-221520444-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo))%2C%20(gradient%20pink%20eye%2C%20black%20hair%2C%20short%20hair%2C%20school%20uniform%2C%20mic.png"></p>
<center><b>HighRiseMixV2</b></center>
<p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00016-3185527639-(masterpiece%2C%20excellent%20quality%2C%20high%20quality)%2C%20(1girl%2C%20solo%2C%20cowboy%20shot)%2C%20looking%20at%20viewer%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C.png">
<img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00021-3185527644-(masterpiece%2C%20excellent%20quality%2C%20high%20quality)%2C%20(1girl%2C%20solo%2C%20cowboy%20shot)%2C%20looking%20at%20viewer%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C.png"></p>
U-Net mixed model <b>specialized for city and skyscraper backgrounds.</b>
<b>FP16 pruned version</b> (no EMA).
(Minor quality differences may appear in very fine details of building textures.)
<b>V2 Update Log : </b>
Added models : AikimixPv1.0, Counterfeit V2.0, pastelmix-better-vae
Adjusted character style (cuter, more anime-like)
<b>V2.5 Update Log : </b>
Added models : AikimixCv1.5
Only some very small texture changes adjusted to my taste. It doesn't matter which one you use: V2 and V2.5 each have pros and cons, so pick whichever you prefer.
<b>Recommended prompts : </b>
Positive: (masterpiece, best quality, excellent quality), ((1girl, solo)), sky, city, (skyscrapers), trees, pavement, lens flare
Negative: EasyNegative, moss, phone, man, pedestrians, extras, border, outside border, white border
(EasyNegative is a negative embedding : https://huggingface.co/datasets/gsdf/EasyNegative)
<b>Recommended settings : </b>
Sampler : DPM++ 2M Karras OR DPM++ SDE Karras
Sampling steps : 25 ~ 30
Resolution : 512x768 OR 768x512
CFG Scale : 9
<b>Upscaling is a must!</b> Otherwise, you won't get the intended results.
Upscaler : Latent (nearest)
Hires steps : 0
Denoise : 0.6
Upscale 2x
<b>Recommended VAEs : </b>
kl-f8-anime2
orangemix.vae.pt
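For diffusers users, a rough approximation of the recommended settings above is sketched below. This is an assumption-heavy sketch: it presumes the repository ships diffusers-format weights, maps DPM++ 2M Karras to `DPMSolverMultistepScheduler` with Karras sigmas (available in recent diffusers versions), and omits the A1111-specific hires-fix upscale pass and the EasyNegative embedding.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("0RisingStar0/HighRiseMixV2",
                                               torch_dtype=torch.float16)
# DPM++ 2M Karras roughly corresponds to DPMSolverMultistepScheduler with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config,
                                                         use_karras_sigmas=True)
pipe = pipe.to("cuda")

# Note: parenthesis emphasis is an A1111 convention and is not interpreted by vanilla diffusers.
prompt = ("(masterpiece, best quality, excellent quality), ((1girl, solo)), "
          "sky, city, (skyscrapers), trees, pavement, lens flare")
negative = "moss, phone, man, pedestrians, extras, border, outside border, white border"

image = pipe(prompt, negative_prompt=negative, width=512, height=768,
             num_inference_steps=28, guidance_scale=9).images[0]
image.save("highrise.png")
```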
<b> Mixed models : </b>
AbyssOrangeMix2_NSFW, AnythingV4.5, AikimiXPv1.0, BasilMixFixed, Counterfeit V2.0, CounterfeitV2.5, EerieOrangeMix2, pastelmix-better-vae, PowercolorV2
(Thanks to everyone who made above models!)
This is my first mixed model uploaded to a public site, so feel free to give feedback as you wish; I'll try to work with it.
|
5a67fcee2cfb7267e3bf350cde25eaa7
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts
|
nntadotzip
|
xlnet
| 12 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,264 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 318 | 0.5005 |
| 0.8222 | 2.0 | 636 | 0.4488 |
| 0.8222 | 3.0 | 954 | 0.4965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
343441a16bf4bc5023527221891e22f2
|
kadirnar/DF2K
|
kadirnar
| null | 3 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Super-Resolution', 'computer-vision', 'RealSR', 'gan']
| false | true | true | 2,227 | false |
### Model Description
[RealSR](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w31/Ji_Real-World_Super-Resolution_via_Kernel_Estimation_and_Noise_Injection_CVPRW_2020_paper.pdf): Real-World Super-Resolution via Kernel Estimation and Noise Injection.
[NTIRE 2020 Challenge on Real-World Image Super-Resolution](https://arxiv.org/abs/2005.01996): Methods and Results
[Paper Repo](https://github.com/Tencent/Real-SR): Implementation of paper.
### Installation
```
pip install bsrgan
```
### BSRGAN Usage
```python
from bsrgan import BSRGAN
model = BSRGAN(weights='kadirnar/DF2K', device='cuda:0', hf_model=True)
model.save = True
pred = model.predict(img_path='data/image/test.png')
```
### BibTeX Entry and Citation Info
```
@inproceedings{zhang2021designing,
title={Designing a Practical Degradation Model for Deep Blind Image Super-Resolution},
author={Zhang, Kai and Liang, Jingyun and Van Gool, Luc and Timofte, Radu},
booktitle={IEEE International Conference on Computer Vision},
pages={4791--4800},
year={2021}
}
```
```
@article{Lugmayr2020ntire,
title={NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results},
author={Andreas Lugmayr, Martin Danelljan, Radu Timofte, Namhyuk Ahn, Dongwoon Bai, Jie Cai, Yun Cao, Junyang Chen, Kaihua Cheng, SeYoung Chun, Wei Deng, Mostafa El-Khamy Chiu, Man Ho, Xiaozhong Ji, Amin Kheradmand, Gwantae Kim, Hanseok Ko, Kanghyu Lee, Jungwon Lee, Hao Li, Ziluan Liu, Zhi-Song Liu, Shuai Liu, Yunhua Lu, Zibo Meng, Pablo Navarrete, Michelini Christian, Micheloni Kalpesh, Prajapati Haoyu, Ren Yong, Hyeok Seo, Wan-Chi Siu, Kyung-Ah Sohn, Ying Tai, Rao Muhammad Umer, Shuangquan Wang, Huibing Wang, Timothy Haoning Wu, Haoning Wu, Biao Yang, Fuzhi Yang, Jaejun Yoo, Tongtong Zhao, Yuanbo Zhou, Haijie Zhuo, Ziyao Zong, Xueyi Zou},
journal={CVPR Workshops},
year={2020},
}
```
|
0fd6879e5dbfc8aacebf6fff1b46b787
|
nguyenkhoa2407/favs_token_classification_v2_updated_data
|
nguyenkhoa2407
|
bert
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['token_classification_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,118 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# favs_token_classification_v2_updated_data
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the token_classification_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5346
- Precision: 0.6923
- Recall: 0.8357
- F1: 0.7573
- Accuracy: 0.8493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.3096 | 1.0 | 13 | 1.9927 | 0.3011 | 0.2 | 0.2403 | 0.3726 |
| 2.038 | 2.0 | 26 | 1.7093 | 0.2569 | 0.2643 | 0.2606 | 0.4274 |
| 1.8391 | 3.0 | 39 | 1.4452 | 0.3057 | 0.4214 | 0.3544 | 0.5562 |
| 1.4912 | 4.0 | 52 | 1.2176 | 0.4130 | 0.5429 | 0.4691 | 0.6493 |
| 1.3296 | 5.0 | 65 | 1.0368 | 0.4973 | 0.6643 | 0.5688 | 0.7123 |
| 1.2036 | 6.0 | 78 | 0.9084 | 0.5053 | 0.6786 | 0.5793 | 0.7260 |
| 0.9244 | 7.0 | 91 | 0.8148 | 0.5543 | 0.7286 | 0.6296 | 0.7616 |
| 0.8293 | 8.0 | 104 | 0.7482 | 0.5698 | 0.7286 | 0.6395 | 0.7726 |
| 0.7422 | 9.0 | 117 | 0.6961 | 0.5833 | 0.75 | 0.6562 | 0.7836 |
| 0.6379 | 10.0 | 130 | 0.6613 | 0.6124 | 0.7786 | 0.6855 | 0.8027 |
| 0.6071 | 11.0 | 143 | 0.6357 | 0.6193 | 0.7786 | 0.6899 | 0.8082 |
| 0.5526 | 12.0 | 156 | 0.6033 | 0.6433 | 0.7857 | 0.7074 | 0.8164 |
| 0.537 | 13.0 | 169 | 0.5813 | 0.6512 | 0.8 | 0.7179 | 0.8301 |
| 0.4806 | 14.0 | 182 | 0.5706 | 0.6608 | 0.8071 | 0.7267 | 0.8329 |
| 0.4503 | 15.0 | 195 | 0.5594 | 0.6647 | 0.8071 | 0.7290 | 0.8356 |
| 0.4149 | 16.0 | 208 | 0.5503 | 0.6805 | 0.8214 | 0.7443 | 0.8438 |
| 0.4175 | 17.0 | 221 | 0.5430 | 0.6824 | 0.8286 | 0.7484 | 0.8438 |
| 0.4337 | 18.0 | 234 | 0.5396 | 0.6923 | 0.8357 | 0.7573 | 0.8493 |
| 0.3965 | 19.0 | 247 | 0.5361 | 0.6882 | 0.8357 | 0.7548 | 0.8493 |
| 0.3822 | 20.0 | 260 | 0.5346 | 0.6923 | 0.8357 | 0.7573 | 0.8493 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
2f1737aa040e380e90ba155d446c3468
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 476 | false |
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
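A transcription sketch using the HuggingSound tool mentioned above (the audio paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s728")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```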
|
7f3ca1806dfd41f74b6cc3392e7dc303
|
shed-e/ner_peoples_daily
|
shed-e
|
bert
| 16 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['peoples_daily_ner']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,975 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_peoples_daily
This model is a fine-tuned version of [hfl/rbt6](https://huggingface.co/hfl/rbt6) on the peoples_daily_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0249
- Precision: 0.9205
- Recall: 0.9365
- F1: 0.9285
- Accuracy: 0.9930
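A quick usage sketch with the token-classification pipeline (the example sentence is illustrative, and `aggregation_strategy="simple"` assumes the checkpoint exposes BIO-style entity labels):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="shed-e/ner_peoples_daily",
               aggregation_strategy="simple")
print(ner("小明在北京大学读书。"))  # illustrative sentence: "Xiao Ming studies at Peking University."
```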
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3154 | 1.0 | 164 | 0.0410 | 0.8258 | 0.8684 | 0.8466 | 0.9868 |
| 0.0394 | 2.0 | 328 | 0.0287 | 0.8842 | 0.9070 | 0.8954 | 0.9905 |
| 0.0293 | 3.0 | 492 | 0.0264 | 0.8978 | 0.9168 | 0.9072 | 0.9916 |
| 0.02 | 4.0 | 656 | 0.0254 | 0.9149 | 0.9226 | 0.9188 | 0.9923 |
| 0.016 | 5.0 | 820 | 0.0250 | 0.9167 | 0.9281 | 0.9224 | 0.9927 |
| 0.0124 | 6.0 | 984 | 0.0252 | 0.9114 | 0.9328 | 0.9220 | 0.9928 |
| 0.0108 | 7.0 | 1148 | 0.0249 | 0.9169 | 0.9339 | 0.9254 | 0.9928 |
| 0.0097 | 8.0 | 1312 | 0.0249 | 0.9205 | 0.9365 | 0.9285 | 0.9930 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
963fe3763e7611a1417fd44379a5169d
|
lambdalabs/miniSD-diffusers
|
lambdalabs
| null | 17 | 44 |
diffusers
| 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 3 | 2 | 1 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 1,709 | false |
## Usage
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/miniSD-diffusers")
pipe = pipe.to("cuda")
prompt = "a photograph of an wrinkly old man laughing"
image = pipe(prompt, width=256, height=256).images[0]
image.save('test.jpg')
```
## Training details
Fine tuned from the stable-diffusion 1.4 checkpoint as follows:
- 22,000 steps fine-tuning only the attention layers of the unet, learning rate=1e-5, batch size=256
- 66,000 steps training the full unet, learning rate=5e-5, batch size=552
- GPUs provided by [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud)
- Trained on [LAION Improved Aesthetics 6plus](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus).
- Trained using https://github.com/justinpinkney/stable-diffusion, original [checkpoint available here](https://huggingface.co/justinpinkney/miniSD)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here
|
3d0daa41789deb70824007df0bfa783d
|
Guizmus/Tardisfusion
|
Guizmus
| null | 19 | 9 |
diffusers
| 11 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 3 | 1 | 2 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image', 'image-to-image']
| false | true | true | 1,597 | false |
# TARDISfusion
<p>
<img src="https://huggingface.co/Guizmus/Tardisfusion/raw/main/showcase.jpg"/><br/>
This is a Dreamboothed Stable Diffusion model trained on 3 Style concepts.<br/>
The total dataset is made of 209 pictures, and the training has been done on runwayml 1.5 with 2500 steps and the new VAE.
The following tokens will add their corresponding concept :<br/>
<ul>
<li><b>Classic Tardis style</b> : Architectural and furniture style seen inside the TARDIS in the series before the reboot.</li>
<li><b>Modern Tardis style</b>: Architectural and furniture style seen inside the TARDIS in the series after the reboot</li>
<li><b>Tardis Box style</b>: A style made from the TARDIS seen from the outside. Summons a TARDIS anywhere.</li>
</ul>
</p>
[CKPT download link](https://huggingface.co/Guizmus/Tardisfusion/resolve/main/Tardisfusion-v2.ckpt)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Guizmus/Tardisfusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a bedroom, Classic Tardis style"
image = pipe(prompt).images[0]
image.save("./TARDIS Style.png")
```
|
01c404f23bd0fa8bde47e5ff9ab5de4a
|
unza/xls-r-300m-nyanja-model_v1
|
unza
|
wav2vec2
| 15 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'NyanjaSpeech', 'generated_from_trainer']
| true | true | true | 1,488 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-nyanja-model_v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NYANJASPEECH - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Wer: 0.9074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7585 | 1.58 | 500 | 0.3574 | 0.9679 |
| 0.4736 | 3.16 | 1000 | 0.2772 | 0.9074 |
| 0.4776 | 4.75 | 1500 | 0.2853 | 0.9578 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
a03e02be3406dcaee40273c84dab856b
|
emilyalsentzer/Bio_ClinicalBERT
|
emilyalsentzer
|
bert
| 11 | 10,502,146 |
transformers
| 65 |
fill-mask
| true | false | true |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['fill-mask']
| false | true | true | 2,614 | false |
# ClinicalBERT - Bio + Clinical BERT Model
The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Clinical BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on all MIMIC notes.
## Pretraining Data
The `Bio_ClinicalBERT` model was trained on all notes from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words).
## Model Pretraining
### Note Preprocessing
Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en_core_sci_md` tokenizer).
### Pretraining Procedures
The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5e-5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15
and max predictions per sequence = 20).
## How to use the model
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
```
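For a quick masked-language-model sanity check, a pipeline-based sketch (the example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")
for pred in fill_mask("The patient was admitted with chest [MASK] and shortness of breath."):
    print(pred["token_str"], round(pred["score"], 3))
```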
## More Information
Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
## Questions?
Post a Github issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email emilya@mit.edu with any questions.
|
d992644dcd5a38344af6b25b2581570e
|
jiobiala24/wav2vec2-base-checkpoint-9
|
jiobiala24
|
wav2vec2
| 13 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,355 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-9
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-8](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9203
- Wer: 0.3258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2783 | 1.58 | 1000 | 0.5610 | 0.3359 |
| 0.2251 | 3.16 | 2000 | 0.5941 | 0.3374 |
| 0.173 | 4.74 | 3000 | 0.6026 | 0.3472 |
| 0.1475 | 6.32 | 4000 | 0.6750 | 0.3482 |
| 0.1246 | 7.9 | 5000 | 0.6673 | 0.3414 |
| 0.1081 | 9.48 | 6000 | 0.7072 | 0.3409 |
| 0.1006 | 11.06 | 7000 | 0.7413 | 0.3392 |
| 0.0879 | 12.64 | 8000 | 0.7831 | 0.3394 |
| 0.0821 | 14.22 | 9000 | 0.7371 | 0.3333 |
| 0.0751 | 15.8 | 10000 | 0.8321 | 0.3445 |
| 0.0671 | 17.38 | 11000 | 0.8362 | 0.3357 |
| 0.0646 | 18.96 | 12000 | 0.8709 | 0.3367 |
| 0.0595 | 20.54 | 13000 | 0.8352 | 0.3321 |
| 0.0564 | 22.12 | 14000 | 0.8854 | 0.3323 |
| 0.052 | 23.7 | 15000 | 0.9031 | 0.3315 |
| 0.0485 | 25.28 | 16000 | 0.9171 | 0.3278 |
| 0.046 | 26.86 | 17000 | 0.9390 | 0.3254 |
| 0.0438 | 28.44 | 18000 | 0.9203 | 0.3258 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
0428ff978787bb1ba342d97f66d14a5e
|
jonatasgrosman/exp_w2v2t_et_wavlm_s753
|
jonatasgrosman
|
wavlm
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'et']
| false | true | true | 439 | false |
# exp_w2v2t_et_wavlm_s753
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
a9f5f1df46cad90071dd369db0b148d3
|
mriggs/marian-finetuned-kde4-en-to-fr
|
mriggs
|
marian
| 15 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['kde4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 948 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
026a6a42ff0cfd5c0af1d6e27b52c02b
|
NbAiLab/nb-bert-base-ner
|
NbAiLab
|
bert
| 8 | 562 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-4.0
|
False
|
['norne']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['norwegian', 'bert', 'ner']
| false | true | true | 646 | false |
**Release 1.0** (November 17, 2021)
# nb-bert-base-ner
## Description
NB-Bert base model fine-tuned on the Named Entity Recognition task using the [NorNE dataset](https://huggingface.co/datasets/NbAiLab/norne).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-ner")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jeg heter Kjell og bor i Oslo."
ner_results = nlp(example)
print(ner_results)
```
|
bfdb63799b79c75f1ece3661a0377aae
|
yoshitomo-matsubara/bert-large-uncased-cola
|
yoshitomo-matsubara
|
bert
| 9 | 1,201 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['cola']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert', 'cola', 'glue', 'torchdistill']
| false | true | true | 708 | false |
`bert-large-uncased` fine-tuned on CoLA dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
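A minimal usage sketch with the text-classification pipeline (the example sentences are illustrative; the raw label names exposed by the checkpoint, e.g. `LABEL_0`/`LABEL_1`, and their mapping to acceptable/unacceptable should be verified against the model config):

```python
from transformers import pipeline

cola = pipeline("text-classification", model="yoshitomo-matsubara/bert-large-uncased-cola")
print(cola("The book was written by John."))   # grammatically acceptable
print(cola("Book the John by written was."))   # grammatically unacceptable
```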
|
71e1ec9741cc5ea1a94662fdb1313a6b
|
Helsinki-NLP/opus-mt-tc-big-cat_oci_spa-en
|
Helsinki-NLP
|
marian
| 13 | 18 |
transformers
| 0 |
translation
| true | true | false |
cc-by-4.0
|
['ca', 'en', 'es', 'oc']
| null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['translation', 'opus-mt-tc']
| true | true | true | 6,011 | false |
# opus-mt-tc-big-cat_oci_spa-en
Neural machine translation model for translating from Catalan, Occitan and Spanish (cat+oci+spa) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): cat spa
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat+oci+spa-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT cat+oci+spa-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat+oci+spa-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"¿Puedo hacerte una pregunta?",
"Toca algo de música."
]
model_name = "pytorch-models/opus-mt-tc-big-cat_oci_spa-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Can I ask you a question?
# He plays some music.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-cat_oci_spa-en")
print(pipe("¿Puedo hacerte una pregunta?"))
# expected output: Can I ask you a question?
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat+oci+spa-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat+oci+spa-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| cat-eng | tatoeba-test-v2021-08-07 | 0.72019 | 57.3 | 1631 | 12627 |
| spa-eng | tatoeba-test-v2021-08-07 | 0.76017 | 62.3 | 16583 | 138123 |
| cat-eng | flores101-devtest | 0.69572 | 45.4 | 1012 | 24721 |
| oci-eng | flores101-devtest | 0.63347 | 37.5 | 1012 | 24721 |
| spa-eng | flores101-devtest | 0.59696 | 29.9 | 1012 | 24721 |
| spa-eng | newssyscomb2009 | 0.57104 | 30.8 | 502 | 11818 |
| spa-eng | news-test2008 | 0.55440 | 27.9 | 2051 | 49380 |
| spa-eng | newstest2009 | 0.57153 | 30.2 | 2525 | 65399 |
| spa-eng | newstest2010 | 0.61890 | 36.8 | 2489 | 61711 |
| spa-eng | newstest2011 | 0.60278 | 34.7 | 3003 | 74681 |
| spa-eng | newstest2012 | 0.62760 | 38.6 | 3003 | 72812 |
| spa-eng | newstest2013 | 0.60994 | 35.3 | 3000 | 64505 |
| spa-eng | tico19-test | 0.74033 | 51.8 | 2100 | 56315 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:30:38 EEST 2022
* port machine: LM0-400-22516.local
|
cc23734b392c14ae99ee41f763ab5340
|
cartesinus/multilingual_minilm-amazon_massive-intent_eu7
|
cartesinus
|
bert
| 12 | 12 |
transformers
| 1 |
text-classification
| true | false | false |
mit
|
['en', 'de', 'fr', 'it', 'pt', 'es', 'pl']
|
['AmazonScience/massive']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'nlu', 'text-classification']
| true | true | true | 2,019 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual_minilm-amazon_massive-intent_eu7
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [MASSIVE 1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8238
- Accuracy: 0.8623
- F1: 0.8623
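A minimal usage sketch with the text-classification pipeline (the utterances are illustrative; the returned label names depend on the id2label mapping stored in the checkpoint):

```python
from transformers import pipeline

intent = pipeline("text-classification",
                  model="cartesinus/multilingual_minilm-amazon_massive-intent_eu7")
print(intent("wake me up at five am this week"))      # English
print(intent("réveille-moi à cinq heures du matin"))  # French
```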
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.3523 | 1.0 | 5038 | 1.3058 | 0.6937 | 0.6937 |
| 0.7842 | 2.0 | 10076 | 0.8434 | 0.8059 | 0.8059 |
| 0.5359 | 3.0 | 15114 | 0.7231 | 0.8302 | 0.8302 |
| 0.4106 | 4.0 | 20152 | 0.7121 | 0.8443 | 0.8443 |
| 0.3294 | 5.0 | 25190 | 0.7366 | 0.8497 | 0.8497 |
| 0.2621 | 6.0 | 30228 | 0.7702 | 0.8528 | 0.8528 |
| 0.2164 | 7.0 | 35266 | 0.7773 | 0.8577 | 0.8577 |
| 0.1756 | 8.0 | 40304 | 0.8080 | 0.8569 | 0.8569 |
| 0.1625 | 9.0 | 45342 | 0.8162 | 0.8624 | 0.8624 |
| 0.1448 | 10.0 | 50380 | 0.8238 | 0.8623 | 0.8623 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
4dc2d26cb437dea854d3fa26b30af5b7
|
aherzberg/whisper-dpv-finetuned-BEST-MODEL
|
aherzberg
|
whisper
| 16 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,250 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-dpv-finetuned
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- epoch: 13.07
- eval_loss: 0.0002
- eval_runtime: 8695.8511
- eval_samples_per_second: 0.458
- eval_steps_per_second: 0.458
- eval_wer: 0.0112
- step: 13000
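A minimal, hypothetical usage sketch (the audio path is a placeholder and the chunking parameter is an assumption, not part of the training setup):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="aherzberg/whisper-dpv-finetuned-BEST-MODEL",
    chunk_length_s=30,  # assumption: split long recordings into 30 s chunks
)

# "sample.wav" is a placeholder path to any mono audio file.
print(asr("sample.wav")["text"])
```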
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
feed3bfc9d8366e391caee8d34948e0e
|
Nadav/bert-base-historic-multilingual-64k-td-cased-squad-fr
|
Nadav
|
bert
| 12 | 119 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,328 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-historic-multilingual-64k-td-cased-squad-fr
This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-64k-td-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-64k-td-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7419
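A minimal, hypothetical usage sketch for extractive question answering on (historic) French text; the question and context below are illustrative only.
```python
from transformers import pipeline

# Load the fine-tuned extractive QA checkpoint from the Hub.
qa = pipeline(
    "question-answering",
    model="Nadav/bert-base-historic-multilingual-64k-td-cased-squad-fr",
)

result = qa(
    question="Qui a fondé la ville ?",
    context="La ville fut fondée par les Romains au premier siècle.",
)
print(result["answer"], result["score"])
```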
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9834 | 1.0 | 3569 | 1.8605 |
| 1.663 | 2.0 | 7138 | 1.7419 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
771da1377fdf6836be43ea59669337ec
|
DevashishSiwatch/wav2vec2-base-timit-demo-google-colab
|
DevashishSiwatch
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5108
- Wer: 0.3342
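A minimal, hypothetical usage sketch: it transcribes a placeholder audio file and scores the output against an illustrative reference with the same WER metric reported above.
```python
import evaluate
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="DevashishSiwatch/wav2vec2-base-timit-demo-google-colab",
)

prediction = asr("sample.wav")["text"]  # placeholder audio path (16 kHz mono)
reference = "SHE HAD YOUR DARK SUIT IN GREASY WASH WATER ALL YEAR"  # illustrative TIMIT-style prompt

# Word error rate between the (uppercased) transcription and the reference.
wer = evaluate.load("wer")
print(wer.compute(predictions=[prediction.upper()], references=[reference]))
```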
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
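The list above maps onto `transformers.TrainingArguments` roughly as follows; this is a sketch, not the original training script, and `output_dir` plus the `fp16` flag are assumptions.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-google-colab",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```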
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6383 | 1.0 | 500 | 2.3747 | 1.0 |
| 0.9624 | 2.01 | 1000 | 0.5724 | 0.5213 |
| 0.4521 | 3.01 | 1500 | 0.4892 | 0.4794 |
| 0.3126 | 4.02 | 2000 | 0.4250 | 0.3991 |
| 0.2299 | 5.02 | 2500 | 0.4288 | 0.3929 |
| 0.195 | 6.02 | 3000 | 0.4707 | 0.3974 |
| 0.1602 | 7.03 | 3500 | 0.4731 | 0.4034 |
| 0.1477 | 8.03 | 4000 | 0.4405 | 0.3896 |
| 0.1284 | 9.04 | 4500 | 0.4663 | 0.3850 |
| 0.1114 | 10.04 | 5000 | 0.4814 | 0.3759 |
| 0.1024 | 11.04 | 5500 | 0.4821 | 0.3701 |
| 0.0973 | 12.05 | 6000 | 0.4718 | 0.3709 |
| 0.0832 | 13.05 | 6500 | 0.5257 | 0.3678 |
| 0.0741 | 14.06 | 7000 | 0.4741 | 0.3621 |
| 0.0696 | 15.06 | 7500 | 0.5073 | 0.3710 |
| 0.0664 | 16.06 | 8000 | 0.4886 | 0.3651 |
| 0.0613 | 17.07 | 8500 | 0.5300 | 0.3588 |
| 0.0612 | 18.07 | 9000 | 0.4983 | 0.3543 |
| 0.049 | 19.08 | 9500 | 0.5158 | 0.3592 |
| 0.0455 | 20.08 | 10000 | 0.5213 | 0.3525 |
| 0.042 | 21.08 | 10500 | 0.4979 | 0.3474 |
| 0.0376 | 22.09 | 11000 | 0.5335 | 0.3493 |
| 0.0331 | 23.09 | 11500 | 0.5276 | 0.3451 |
| 0.0346 | 24.1 | 12000 | 0.5106 | 0.3428 |
| 0.0294 | 25.1 | 12500 | 0.5414 | 0.3426 |
| 0.0265 | 26.1 | 13000 | 0.5234 | 0.3363 |
| 0.0273 | 27.11 | 13500 | 0.5207 | 0.3356 |
| 0.0255 | 28.11 | 14000 | 0.5092 | 0.3354 |
| 0.0248 | 29.12 | 14500 | 0.5108 | 0.3342 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
3a9fd0b37a997f134e80f2e2cdc9ad97
|
ksabeh/distilbert-attribute-correction-mlm
|
ksabeh
|
distilbert
| 18 | 42 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,451 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/distilbert-base-uncased-mlm-electronics-attribute-correction-qa-mlm
This model is a fine-tuned version of [ksabeh/distilbert-base-uncased-mlm-electronics](https://huggingface.co/ksabeh/distilbert-base-uncased-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0607
- Validation Loss: 0.0609
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36794, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
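Rebuilt as code, the optimizer configuration above corresponds roughly to the following Keras objects; this is a sketch using the logged values, not the original training script.
```python
import tensorflow as tf

# Linear decay from 2e-05 to 0 over 36,794 steps, as logged in the config above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=36794,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
# Adam with the logged betas and epsilon; amsgrad disabled.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```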
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1703 | 0.0730 | 0 |
| 0.0607 | 0.0609 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.3
- Datasets 2.1.0
- Tokenizers 0.12.1
|
a184cde9b9b183256e3fc1f7f4f4acfb
|
gokuls/distilbert_sa_GLUE_Experiment_data_aug_qqp_384
|
gokuls
|
distilbert
| 22 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,887 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_qqp_384
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5363
- Accuracy: 0.7995
- F1: 0.7338
- Combined Score: 0.7666
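The combined score appears to be the arithmetic mean of accuracy and F1 ((0.7995 + 0.7338) / 2 ≈ 0.7666). A minimal, hypothetical usage sketch for QQP-style duplicate-question detection follows; the question pair is illustrative.
```python
from transformers import pipeline

# Load the fine-tuned QQP classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_data_aug_qqp_384",
)

# QQP is a sentence-pair task: pass the two questions as text / text_pair.
print(classifier({
    "text": "How do I learn Python quickly?",
    "text_pair": "What is the fastest way to learn Python?",
}))
```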
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.3538 | 1.0 | 29671 | 0.5363 | 0.7995 | 0.7338 | 0.7666 |
| 0.1571 | 2.0 | 59342 | 0.7215 | 0.8000 | 0.7396 | 0.7698 |
| 0.0894 | 3.0 | 89013 | 0.7922 | 0.7998 | 0.7407 | 0.7702 |
| 0.0596 | 4.0 | 118684 | 0.8829 | 0.8045 | 0.7399 | 0.7722 |
| 0.0433 | 5.0 | 148355 | 0.8505 | 0.8110 | 0.7443 | 0.7777 |
| 0.0334 | 6.0 | 178026 | 1.0843 | 0.8047 | 0.7446 | 0.7746 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
6863e358d8a5a23c1695c63a7a618260
|