repo_id (stringlengths 4-110) | author (stringlengths 2-27, ⌀) | model_type (stringlengths 2-29, ⌀) | files_per_repo (int64 2-15.4k) | downloads_30d (int64 0-19.9M) | library (stringlengths 2-37, ⌀) | likes (int64 0-4.34k) | pipeline (stringlengths 5-30, ⌀) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (stringlengths 2-30) | languages (stringlengths 4-1.63k, ⌀) | datasets (stringlengths 2-2.58k, ⌀) | co2 (stringclasses, 29 values) | prs_count (int64 0-125) | prs_open (int64 0-120) | prs_merged (int64 0-15) | prs_closed (int64 0-28) | discussions_count (int64 0-218) | discussions_open (int64 0-148) | discussions_closed (int64 0-70) | tags (stringlengths 2-513) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64 401-598k) | is_nc (bool, 1 class) | readme (stringlengths 0-598k) | hash (stringlengths 32) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Kevincp560/bigbird-pegasus-large-bigpatent-finetuned-pubMed
|
Kevincp560
|
bigbird_pegasus
| 10 | 2 |
transformers
| 2 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['pub_med_summarization_dataset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,931 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-pegasus-large-bigpatent-finetuned-pubMed
This model is a fine-tuned version of [google/bigbird-pegasus-large-bigpatent](https://huggingface.co/google/bigbird-pegasus-large-bigpatent) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5403
- Rouge1: 45.0851
- Rouge2: 19.5488
- Rougel: 27.391
- Rougelsum: 41.112
- Gen Len: 231.608
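As a usage sketch (not part of the original card), the checkpoint could presumably be loaded with the standard `transformers` summarization pipeline; the input article below is a hypothetical placeholder:
```python
# Hedged sketch: load the checkpoint with the generic transformers summarization pipeline.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Kevincp560/bigbird-pegasus-large-bigpatent-finetuned-pubMed",
)

# Placeholder input; in practice this would be a full PubMed article body.
article = "Randomized controlled trials of statin therapy were pooled to estimate ..."
print(summarizer(article, max_length=256, min_length=64)[0]["summary_text"])
```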
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1198 | 1.0 | 500 | 1.6285 | 43.0579 | 18.1792 | 26.421 | 39.0769 | 214.924 |
| 1.6939 | 2.0 | 1000 | 1.5696 | 44.0679 | 18.9331 | 26.84 | 40.0684 | 222.814 |
| 1.6195 | 3.0 | 1500 | 1.5506 | 44.7352 | 19.3532 | 27.2418 | 40.7454 | 229.396 |
| 1.5798 | 4.0 | 2000 | 1.5403 | 45.0415 | 19.5019 | 27.2969 | 40.951 | 231.044 |
| 1.5592 | 5.0 | 2500 | 1.5403 | 45.0851 | 19.5488 | 27.391 | 41.112 | 231.608 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
6f78db0bcb1aa92c35d82598852c16de
|
sd-concepts-library/joe-mad
|
sd-concepts-library
| null | 9 | 0 | null | 3 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 998 | false |
### Joe Mad on Stable Diffusion
This is the `<joe-mad>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
0b1a27a907e49c51949ca691c442a4d8
|
paola-md/recipe-lr2e05-wd0.005-bs16
|
paola-md
|
roberta
| 6 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,468 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.005-bs16
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2780
- Rmse: 0.5272
- Mse: 0.2780
- Mae: 0.4314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 1245 | 0.2743 | 0.5237 | 0.2743 | 0.4112 |
| 0.2738 | 2.0 | 2490 | 0.2811 | 0.5302 | 0.2811 | 0.4288 |
| 0.2724 | 3.0 | 3735 | 0.2780 | 0.5272 | 0.2780 | 0.4314 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
685620d5f40ee9f82c0087adbe5e00f8
|
akashsingh123/wav2vec2-base-timit-demo-colab
|
akashsingh123
|
wav2vec2
| 9 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 991 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
58b9b14d33b0a733b56c7e10180d5db8
|
jonatasgrosman/exp_w2v2t_es_vp-nl_s203
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 469 | false |
# exp_w2v2t_es_vp-nl_s203
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
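A minimal transcription sketch (not in the original card), assuming the HuggingSound `SpeechRecognitionModel` API and hypothetical 16 kHz audio file paths:
```python
# Hedged sketch: transcribe audio files with the HuggingSound tool mentioned above;
# the file paths are placeholders.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_es_vp-nl_s203")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
for item in transcriptions:
    print(item["transcription"])
```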
|
6b22464aeb54c31d913ef0891f03f5fa
|
Passexam4sure/DVA-C01-Dumps-2023
|
Passexam4sure
| null | 2 | 0 |
adapter-transformers
| 0 |
text-classification
| false | false | false |
artistic-2.0
|
['en']
|
['fka/awesome-chatgpt-prompts']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['code']
| false | true | true | 1,114 | false |
DVA-C01 PDFs, which stand for AWS Certified Developer - Associate Exam DVA-C01, can be reliable for exam preparation for a few reasons:
1: They provide a digital copy of the exam's content, including the topics and objectives that will be covered on the test.
2: They are easy to access and can be downloaded and used on a variety of devices, making it convenient to study on-the-go.
3: Some DVA-C01 PDFs may include practice questions and answer explanations, which can help you prepare and identify areas where you may need more study.
4: Many DVA-C01 PDFs are created by experts, who have already taken the exam and have an in-depth knowledge of the exam's format, content, and difficulty level.
However, it's important to note that not all DVA-C01 PDFs are reliable or of the same quality, so it's recommended to look for the ones from reputable sources, and also to use them in conjunction with other resources such as AWS official documentation, hands-on practice and online training to achieve best results.
Click Here To Get DVA-C01 Dumps 2023: https://www.passexam4sure.com/amazon/dva-c01-dumps.html
|
66d48b17519f3b7992deb91b9de78022
|
tensorspeech/tts-mb_melgan-synpaflex-fr
|
tensorspeech
| null | 4 | 0 |
tensorflowtts
| 2 |
text-to-speech
| false | false | false |
apache-2.0
|
['fr']
|
['synpaflex']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tensorflowtts', 'audio', 'text-to-speech', 'mel-to-wav']
| false | true | true | 2,262 | false |
# Multi-band MelGAN trained on Synpaflex (Fr)
This repository provides a pretrained [Multi-band MelGAN](https://arxiv.org/abs/2005.05106) trained on the Synpaflex dataset (French). For details of the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Wav
```python
import soundfile as sf
import numpy as np
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-synpaflex-fr")
text = "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous étions amis"
input_ids = processor.text_to_sequence(text)
# tacotron2 inference (text-to-mel)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
    input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
    input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
# melgan inference (mel-to-wav)
audio = mb_melgan.inference(mel_outputs)[0, :, 0]
# save to file
sf.write('./audio.wav', audio, 22050, "PCM_16")
```
#### Referencing Multi-band MelGAN
```
@misc{yang2020multiband,
title={Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech},
author={Geng Yang and Shan Yang and Kai Liu and Peng Fang and Wei Chen and Lei Xie},
year={2020},
eprint={2005.05106},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
#### Referencing TensorFlowTTS
```
@misc{TFTTS,
author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata,
Trinh Le and Yunchao He},
title = {TensorflowTTS},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
```
|
34cea1eba528cbaaf60977b952fcefd8
|
Yagorka/ddpm-butterflies-128
|
Yagorka
| null | 33 | 2 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,201 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
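The snippet above is still a TODO in the generated card; as a hedged placeholder (not the author's own example), sampling from a standard unconditional `DDPMPipeline` checkpoint would presumably look like this:
```python
# Hedged sketch: sample one image from the unconditional DDPM checkpoint.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Yagorka/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```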
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-butterflies-128/tensorboard?#scalars)
|
f09589100f214ce288edd3f391d43cb2
|
Rajan/donut-base-sroie_300
|
Rajan
|
vision-encoder-decoder
| 15 | 0 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 980 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie_300
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
0c547a007c352ceaf535441e4a37a9da
|
jmunoz/finetuning-sentiment-model-3000-samples_jmnew
|
jmunoz
|
distilbert
| 13 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,060 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_jmnew
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8733
- F1: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
bed56e0ef3539b82e508528b86b0ea75
|
gustavecortal/roberta-reman-tec
|
gustavecortal
|
roberta
| 11 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,547 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cold_remanandtec_gpu_v1
This model is a fine-tuned version of [ibm/ColD-Fusion](https://huggingface.co/ibm/ColD-Fusion) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0737
- F1: 0.9462
- Roc Auc: 0.9592
- Recall: 0.9362
- Precision: 0.9565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:------:|:---------:|
| 0.3606 | 1.0 | 1521 | 0.1974 | 0.8936 | 0.9247 | 0.8936 | 0.8936 |
| 0.2715 | 2.0 | 3042 | 0.1247 | 0.8989 | 0.9167 | 0.8511 | 0.9524 |
| 0.1811 | 3.0 | 4563 | 0.0737 | 0.9462 | 0.9592 | 0.9362 | 0.9565 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
fb60f03636dde4705f50adb15ed07ad4
|
farsipal/whisper-sm-el-intlv-xs
|
farsipal
|
whisper
| 19 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['el']
|
['mozilla-foundation/common_voice_11_0', 'google/fleurs']
| null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard', 'automatic-speech-recognition', 'greek']
| true | true | true | 2,095 | false |
# Whisper small (Greek) Trained on Interleaved Datasets
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the interleaved mozilla-foundation/common_voice_11_0 (el) and google/fleurs (el_gr) datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4741
- Wer: 20.0687
## Model description
The model was developed during the Whisper Fine-Tuning Event in December 2022.
More details on the model can be found [in the original paper](https://cdn.openai.com/papers/whisper.pdf)
## Intended uses & limitations
The model is fine-tuned for transcription in the Greek language.
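A transcription sketch (not part of the original card), assuming the standard `transformers` ASR pipeline and a hypothetical 16 kHz audio file:
```python
# Hedged sketch: Greek transcription with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="farsipal/whisper-sm-el-intlv-xs",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)
print(asr("sample_greek_speech.wav")["text"])
```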
## Training and evaluation data
This model was trained by interleaving the training and evaluation splits from two different datasets:
- mozilla-foundation/common_voice_11_0 (el)
- google/fleurs (el_gr)
## Training procedure
The python script used is a modified version of the script provided by Hugging Face and can be found [here](https://github.com/kamfonas/whisper-fine-tuning-event/blob/minor-mods-by-farsipal/run_speech_recognition_seq2seq_streaming.py)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0186 | 4.98 | 1000 | 0.3619 | 21.0067 |
| 0.0012 | 9.95 | 2000 | 0.4347 | 20.3009 |
| 0.0005 | 14.93 | 3000 | 0.4741 | 20.0687 |
| 0.0003 | 19.9 | 4000 | 0.4974 | 20.1152 |
| 0.0003 | 24.88 | 5000 | 0.5066 | 20.2266 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0
- Datasets 2.7.1.dev0
- Tokenizers 0.12.1
|
2aa9ae4a2dece1e04bcd061018c3828b
|
Helsinki-NLP/opus-mt-gaa-sv
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-gaa-sv
* source languages: gaa
* target languages: sv
* OPUS readme: [gaa-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.eval.txt)
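As a hedged usage sketch (not in the original card), the checkpoint should load with the standard Marian classes in `transformers`; the source sentence below is a placeholder:
```python
# Hedged sketch: Ga -> Swedish translation with the standard MarianMT classes.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gaa-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["..."]  # placeholder Ga source sentence(s)
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```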
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.sv | 30.1 | 0.489 |
|
3d7e08903ac7b5f3dcb415236f19a0f0
|
aGabillon/distilbert-base-uncased-finetuned-emotion
|
aGabillon
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2294
- Accuracy: 0.9215
- F1: 0.9219
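A classification sketch (not from the original card), assuming the standard `transformers` text-classification pipeline; the input sentence is a placeholder:
```python
# Hedged sketch: run the fine-tuned emotion classifier on a sample sentence.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aGabillon/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
```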
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8304 | 1.0 | 250 | 0.3312 | 0.899 | 0.8962 |
| 0.2547 | 2.0 | 500 | 0.2294 | 0.9215 | 0.9219 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
3cf70c7b93174315559d8b38eff1d10c
|
muhtasham/mini-mlm-tweet-target-tweet
|
muhtasham
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,546 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mini-mlm-tweet-target-tweet
This model is a fine-tuned version of [muhtasham/mini-mlm-tweet](https://huggingface.co/muhtasham/mini-mlm-tweet) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4122
- Accuracy: 0.7353
- F1: 0.7377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8264 | 4.9 | 500 | 0.7479 | 0.7219 | 0.7190 |
| 0.3705 | 9.8 | 1000 | 0.8205 | 0.7487 | 0.7479 |
| 0.1775 | 14.71 | 1500 | 1.0049 | 0.7273 | 0.7286 |
| 0.092 | 19.61 | 2000 | 1.1698 | 0.7353 | 0.7351 |
| 0.0513 | 24.51 | 2500 | 1.4122 | 0.7353 | 0.7377 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
b171c645a3b32c4be08701746f3920db
|
Das282000Prit/fyp-finetuned-brown
|
Das282000Prit
|
bert
| 8 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,530 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Das282000Prit/fyp-finetuned-brown
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.5777
- Validation Loss: 3.0737
- Epoch: 0
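A fill-mask sketch (not part of the original card); it assumes the repository ships only TensorFlow weights (as the metadata above suggests), so the pipeline is pinned to the TF framework, and the masked sentence is a placeholder:
```python
# Hedged sketch: masked-token prediction with the TensorFlow weights.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Das282000Prit/fyp-finetuned-brown",
    framework="tf",  # assumption: the repo is TensorFlow-only
)
print(unmasker("The quick brown fox jumps over the lazy [MASK]."))
```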
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -844, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5777 | 3.0737 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
d625b5ef0608008fd3831014f1d29661
|
pollner/yelp
|
pollner
|
bert
| 12 | 12 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['yelp_review_full']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,312 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yelp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0380
- Accuracy: 0.587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.2336 | 0.447 |
| No log | 2.0 | 250 | 1.0153 | 0.562 |
| No log | 3.0 | 375 | 1.0380 | 0.587 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
df05bffffa744ec5cabed8564a0c1f07
|
PeterBanning71/gpt2-small-spanish-finetuned-rap
|
PeterBanning71
|
gpt2
| 11 | 9 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,299 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-spanish-finetuned-rap
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 27 | 4.8244 |
| No log | 2.0 | 54 | 4.7367 |
| No log | 3.0 | 81 | 4.7161 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
02013bff7c576039336b8050f242adc2
|
jamesesguerra/distilbart-cnn-12-6-finetuned-1.1.0
|
jamesesguerra
|
bart
| 14 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,477 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-1.1.0
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0274
- Rouge1: 84.662
- Rouge2: 83.5616
- Rougel: 84.4282
- Rougelsum: 84.4667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.0911 | 1.0 | 97 | 0.0286 | 85.8678 | 84.7683 | 85.7147 | 85.6949 |
| 0.0442 | 2.0 | 194 | 0.0274 | 84.662 | 83.5616 | 84.4282 | 84.4667 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
e6882c089991599ae0d48a5b1abf918b
|
Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-11h-with-lm
|
Lemswasabi
|
wav2vec2
| 20 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
mit
|
['lb']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'generated_from_trainer']
| false | true | true | 1,826 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
## Model description
We continued pretraining a wav2vec 2.0 large XLSR-53 checkpoint on 842h of unlabelled Luxembourgish speech
collected from [RTL.lu](https://www.rtl.lu/). The model was then fine-tuned on 11h of labelled
Luxembourgish speech from the same domain. Additionally, the output transcription is rescored
with a 5-gram language model trained on text corpora from RTL.lu and the Luxembourgish parliament.
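Since the card highlights n-gram rescoring, a decoding sketch with `Wav2Vec2ProcessorWithLM` may be useful; it assumes the repository bundles the LM files (as the "-with-lm" suffix suggests) and that `pyctcdecode`/`kenlm` are installed, and the audio array is a placeholder:
```python
# Hedged sketch: CTC decoding with the bundled n-gram language model.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-11h-with-lm"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder: one second of silence; replace with real 16 kHz mono audio.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(logits.numpy()).text)
```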
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
## Citation
This model is a result of our paper `IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS` submitted to the [IEEE SLT 2022 workshop](https://slt2022.org/)
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
```
|
d1b22decf902d6d2d471d82334fa83e0
|
Xessen/bert-turkish-cased
|
Xessen
|
bert
| 4 | 0 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 974 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-turkish-cased
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ac74c6d11760f1d6a51328bcfedd80f4
|
Evel/VividWatercolors
|
Evel
| null | 17 | 174 |
diffusers
| 9 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
| false | true | true | 4,819 | false |
Introducing my new Vivid Watercolors dreambooth model.
The model is trained with beautiful, artist-agnostic watercolor images using the midjourney method.
The token is "wtrcolor style"
It can be challenging to use, but with the right prompts it can create stunning artwork.
See an example prompt that I use in tests:
wtrcolor style, Digital art of (subject), official art, frontal, smiling, masterpiece, Beautiful, ((watercolor)), face paint, paint splatter, intricate details. Highly detailed, detailed eyes, [dripping:0.5], Trending on artstation, by [artist]
Using "watercolor" in the pronpt is necessary to get a good watercolor texture, try words like face (paint, paint splatter, dripping).
For a negative prompt I use this one:
(bad_prompt:0.8), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), (((dead eyes))), (((out of frame))), ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), blur, (((watermarked)), ((out of focus)), (((low contrast))), (((zoomed in))), (((crossed eyes))), (((disfigured)), ((bad art)), (weird colors), (((oversaturated art))), multiple persons, multiple faces, (vector), (vector-art), (((high contrast)))
Here are some txt2img examples:











Here is an img2img example:


In img2img you may need to strengthen the prompt, e.g. (((wtrcolor style))).
You can play with the settings; it is easier to get good results with the right prompt.
For me, the sweet spot is around 30 steps, Euler a, CFG 8-9. (Clip skip 2 tends to give softer results.)
See the tests here: https://imgur.com/a/ghVhVhy
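As a hedged sketch (not part of the original card), and assuming the repository is in `diffusers` format as its metadata suggests, the prompt recipe above could be tried like this; the subject in the prompt is a placeholder:
```python
# Hedged sketch: text-to-image with the Vivid Watercolors checkpoint,
# assuming a diffusers-format repository.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Evel/VividWatercolors", torch_dtype=torch.float16
).to("cuda")

prompt = "wtrcolor style, Digital art of a fox, official art, masterpiece, ((watercolor)), paint splatter"
image = pipe(prompt, num_inference_steps=30, guidance_scale=8.5).images[0]
image.save("wtrcolor_fox.png")
```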
|
93c1135878e11577239831f580589916
|
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-0
|
anas-awadalla
|
roberta
| 17 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
6cef4d6a440f9e45ed920b833ab397c0
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
|
gary109
|
wav2vec2
| 14 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'gary109/AI_Light_Dance', 'generated_from_trainer']
| true | true | true | 2,030 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0163
- Wer: 0.6622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8867 | 1.0 | 376 | 1.0382 | 0.6821 |
| 0.8861 | 2.0 | 752 | 1.0260 | 0.6686 |
| 0.8682 | 3.0 | 1128 | 1.0358 | 0.6604 |
| 0.8662 | 4.0 | 1504 | 1.0234 | 0.6665 |
| 0.8463 | 5.0 | 1880 | 1.0333 | 0.6666 |
| 0.8573 | 6.0 | 2256 | 1.0163 | 0.6622 |
| 0.8628 | 7.0 | 2632 | 1.0209 | 0.6551 |
| 0.8493 | 8.0 | 3008 | 1.0525 | 0.6582 |
| 0.8371 | 9.0 | 3384 | 1.0409 | 0.6515 |
| 0.8229 | 10.0 | 3760 | 1.0597 | 0.6523 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
e0f894c1f6dabb594bec06ee0c6c0422
|
aminjalali/distilbert-base-uncased-finetuned-emotion
|
aminjalali
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2123
- Accuracy: 0.926
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8198 | 1.0 | 250 | 0.3147 | 0.904 | 0.9003 |
| 0.2438 | 2.0 | 500 | 0.2123 | 0.926 | 0.9258 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
3e300f7f596af396afc4e2ce64f88c72
|
AndrewR/distilgpt2-finetuned-katpoems-lm
|
AndrewR
|
gpt2
| 14 | 0 |
transformers
| 1 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,245 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-katpoems-lm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6519
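A generation sketch (not in the original card), assuming the standard `transformers` text-generation pipeline; the seed text is a placeholder:
```python
# Hedged sketch: sample a short continuation from the fine-tuned distilgpt2.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AndrewR/distilgpt2-finetuned-katpoems-lm",
)
print(generator("The moonlit harbour", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```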
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 59 | 4.6509 |
| No log | 2.0 | 118 | 4.6476 |
| No log | 3.0 | 177 | 4.6519 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
50c686a3bc622eb3092f8b242c3c96e7
|
jonatasgrosman/exp_w2v2t_fr_hubert_s990
|
jonatasgrosman
|
hubert
| 10 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 452 | false |
# exp_w2v2t_fr_hubert_s990
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
de6e1caf43d994462db2a3f588523dab
|
ogimgio/finetuned-die-berufliche-praxis-im-rahmen-des-pflegeprozesses-ausuben
|
ogimgio
|
bert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,367 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-die-berufliche-praxis-im-rahmen-des-pflegeprozesses-ausuben
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4610
- Accuracy: 0.7900
- F1: 0.7788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4867 | 1.0 | 1365 | 0.4591 | 0.7879 | 0.7762 |
| 0.39 | 2.0 | 2730 | 0.4610 | 0.7900 | 0.7788 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.13.2
|
553f506a2c6039d8ddb455fdb21f107f
|
sonoisa/t5-base-japanese
|
sonoisa
|
t5
| 8 | 26,723 |
transformers
| 17 |
feature-extraction
| true | false | true |
cc-by-sa-4.0
|
['ja']
|
['wikipedia', 'oscar', 'cc100']
| null | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
['t5', 'text2text-generation', 'seq2seq']
| false | true | true | 3,433 | false |
# Japanese T5 Pretrained Model
This is a T5 (Text-to-Text Transfer Transformer) model pretrained on a Japanese corpus.
It was pretrained on the following Japanese corpora (about 100GB in total):
* the Japanese dump of [Wikipedia](https://ja.wikipedia.org) (as of July 6, 2020)
* the Japanese portion of [OSCAR](https://oscar-corpus.com)
* the Japanese portion of [CC-100](http://data.statmt.org/cc-100/)
This model has only been pretrained; it needs to be fine-tuned before it can be used for a specific task.
Like other language models trained on large corpora, this model can potentially produce skewed outputs (unethical, harmful, or biased) that reflect biases in its training data.
Please keep this risk in mind and use the model only for applications where no harm can result.
The SentencePiece tokenizer was trained on the full Wikipedia data listed above.
# Sample code for transfer learning
https://github.com/sonoisa/t5-japanese
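In addition to the fine-tuning samples linked above, a minimal load sketch (not part of the original card) using the standard T5 classes from `transformers`:
```python
# Hedged sketch: load the pretrained Japanese T5 for further fine-tuning.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-base-japanese")
model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-base-japanese")
# The checkpoint is pretraining-only, so fine-tune it on a downstream task before use.
```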
# Benchmarks
## livedoor news classification task
Accuracy on a genre-prediction task for news articles from the livedoor news corpus is shown below.
Compared with Google's multilingual T5 model, this model is 25% smaller and about 6 points more accurate.
Japanese T5 ([t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese), 222M parameters, [reproduction code](https://github.com/sonoisa/t5-japanese/blob/main/t5_japanese_classification.ipynb))
| label | precision | recall | f1-score | support |
| ----------- | ----------- | ------- | -------- | ------- |
| 0 | 0.96 | 0.94 | 0.95 | 130 |
| 1 | 0.98 | 0.99 | 0.99 | 121 |
| 2 | 0.96 | 0.96 | 0.96 | 123 |
| 3 | 0.86 | 0.91 | 0.89 | 82 |
| 4 | 0.96 | 0.97 | 0.97 | 129 |
| 5 | 0.96 | 0.96 | 0.96 | 141 |
| 6 | 0.98 | 0.98 | 0.98 | 127 |
| 7 | 1.00 | 0.99 | 1.00 | 127 |
| 8 | 0.99 | 0.97 | 0.98 | 120 |
| accuracy | | | 0.97 | 1100 |
| macro avg | 0.96 | 0.96 | 0.96 | 1100 |
| weighted avg | 0.97 | 0.97 | 0.97 | 1100 |
Baseline: multilingual T5 ([google/mt5-small](https://huggingface.co/google/mt5-small), 300M parameters)
| label | precision | recall | f1-score | support |
| ----------- | ----------- | ------- | -------- | ------- |
| 0 | 0.91 | 0.88 | 0.90 | 130 |
| 1 | 0.84 | 0.93 | 0.89 | 121 |
| 2 | 0.93 | 0.80 | 0.86 | 123 |
| 3 | 0.82 | 0.74 | 0.78 | 82 |
| 4 | 0.90 | 0.95 | 0.92 | 129 |
| 5 | 0.89 | 0.89 | 0.89 | 141 |
| 6 | 0.97 | 0.98 | 0.97 | 127 |
| 7 | 0.95 | 0.98 | 0.97 | 127 |
| 8 | 0.93 | 0.95 | 0.94 | 120 |
| accuracy | | | 0.91 | 1100 |
| macro avg | 0.91 | 0.90 | 0.90 | 1100 |
| weighted avg | 0.91 | 0.91 | 0.91 | 1100 |
## JGLUE benchmark
Results on the [JGLUE](https://github.com/yahoojapan/JGLUE) benchmark are as follows (to be extended over time):
- MARC-ja: in preparation
- JSTS: in preparation
- JNLI: in preparation
- JSQuAD: EM=0.900, F1=0.945, [reproduction code](https://github.com/sonoisa/t5-japanese/blob/main/t5_JSQuAD.ipynb)
- JCommonsenseQA: in preparation
# Disclaimer
Although the author has taken great care in creating this model, no guarantee is made that its outputs are accurate or safe, and no responsibility is accepted for them. Even if a user suffers any inconvenience or damage through the use of this model, the authors of the model and datasets and their affiliated organizations bear no responsibility. Users are obliged to make clear that the authors and their organizations bear no such responsibility.
# License
[CC-BY SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.ja)
Please also take care to comply with the [Common Crawl Terms of Use](http://commoncrawl.org/terms-of-use/).
|
8c669e59375f2aba7fd8798a2af00ee8
|
AIARTCHAN/aichan_blend
|
AIARTCHAN
| null | 48 | 0 | null | 32 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'aiartchan']
| false | true | true | 1,431 | false |
Mixed stable diffusion models from the ai image channel and elsewhere. Feel free to download.
## file download code example
```python
# urlretrieve, no progressbar
from urllib.request import urlretrieve
from huggingface_hub import hf_hub_url
repo_id = "AIARTCHAN/aichan_blend"
filename = "AbyssOrangeMix2_nsfw-pruned.safetensors"
url = hf_hub_url(repo_id, filename)
urlretrieve(url, filename)
```
```python
# with tqdm, urllib
import shutil
from urllib.request import urlopen
from huggingface_hub import hf_hub_url
from tqdm import tqdm
repo_id = "AIARTCHAN/aichan_blend"
filename = "AbyssOrangeMix2_nsfw-pruned.safetensors"
url = hf_hub_url(repo_id, filename)
with urlopen(url) as resp:
    total = int(resp.headers.get("Content-Length", 0))
    with tqdm.wrapattr(
        resp, "read", total=total, desc="Download..."
    ) as src:
        with open(filename, "wb") as dst:
            shutil.copyfileobj(src, dst)
```
```python
# with tqdm, requests
import shutil
import requests
from huggingface_hub import hf_hub_url
from tqdm import tqdm
repo_id = "AIARTCHAN/aichan_blend"
filename = "AbyssOrangeMix2_nsfw-pruned.safetensors"
url = hf_hub_url(repo_id, filename)
resp = requests.get(url, stream=True)
total = int(resp.headers.get("Content-Length", 0))
with tqdm.wrapattr(
    resp.raw, "read", total=total, desc="Download..."
) as src:
    with open(filename, "wb") as dst:
        shutil.copyfileobj(src, dst)
```
|
7ccc3cd42908a2e30491a00fba9dada6
|
zates/distilbert-base-uncased-finetuned-squad-seed-69
|
zates
|
distilbert
| 14 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,295 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-seed-69
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4246
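A QA sketch (not part of the original card), assuming the standard `transformers` question-answering pipeline; the question and context are sample inputs:
```python
# Hedged sketch: extractive QA with the fine-tuned SQuAD v2 checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="zates/distilbert-base-uncased-finetuned-squad-seed-69",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```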
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2185 | 1.0 | 8235 | 1.2774 |
| 0.9512 | 2.0 | 16470 | 1.2549 |
| 0.7704 | 3.0 | 24705 | 1.4246 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
004b089516cc051701764f9ed71f0e25
|
sentence-transformers/nli-distilroberta-base-v2
|
sentence-transformers
|
roberta
| 15 | 734 |
sentence-transformers
| 0 |
sentence-similarity
| true | true | true |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,555 | false |
# sentence-transformers/nli-distilroberta-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-distilroberta-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-distilroberta-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/nli-distilroberta-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-distilroberta-base-v2)
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
b9b0d2a001b9cc42ca3fb02ec0498b8f
|
cdefghijkl/wnt1
|
cdefghijkl
| null | 18 | 4 |
diffusers
| 2 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 609 | false |
### wnt1 Dreambooth model trained by cdefghijkl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
9b820d09c1d04cc3f737715c5de0ea94
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-3
|
SetFit
|
distilbert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,462 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9681
- Accuracy: 0.549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1073 | 1.0 | 5 | 1.1393 | 0.0 |
| 1.0392 | 2.0 | 10 | 1.1729 | 0.0 |
| 1.0302 | 3.0 | 15 | 1.1694 | 0.2 |
| 0.9176 | 4.0 | 20 | 1.1846 | 0.2 |
| 0.8339 | 5.0 | 25 | 1.1663 | 0.2 |
| 0.7533 | 6.0 | 30 | 1.1513 | 0.4 |
| 0.6327 | 7.0 | 35 | 1.1474 | 0.4 |
| 0.4402 | 8.0 | 40 | 1.1385 | 0.4 |
| 0.3752 | 9.0 | 45 | 1.0965 | 0.2 |
| 0.3448 | 10.0 | 50 | 1.0357 | 0.2 |
| 0.2582 | 11.0 | 55 | 1.0438 | 0.2 |
| 0.1903 | 12.0 | 60 | 1.0561 | 0.2 |
| 0.1479 | 13.0 | 65 | 1.0569 | 0.2 |
| 0.1129 | 14.0 | 70 | 1.0455 | 0.2 |
| 0.1071 | 15.0 | 75 | 1.0416 | 0.4 |
| 0.0672 | 16.0 | 80 | 1.1164 | 0.4 |
| 0.0561 | 17.0 | 85 | 1.1846 | 0.6 |
| 0.0463 | 18.0 | 90 | 1.2040 | 0.6 |
| 0.0431 | 19.0 | 95 | 1.2078 | 0.6 |
| 0.0314 | 20.0 | 100 | 1.2368 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
57addf7e990469dc4c0baa32058e2e84
|
Ramuvannela/bert-fine-tuned-cola
|
Ramuvannela
|
bert
| 13 | 17 |
transformers
| 0 |
text-classification
| true | true | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,387 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8073
- Matthews Correlation: 0.6107
## Model description
More information needed
## Intended uses & limitations
More information needed
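As a rough illustration of the intended use (not from the original card), here is a minimal inference sketch; the example sentence and the meaning of the two output labels are assumptions, since the card does not document an `id2label` mapping:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Ramuvannela/bert-fine-tuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CoLA is a binary acceptability task; which index means "acceptable" depends on the
# checkpoint's config and is assumed here rather than documented.
print(logits.argmax(dim=-1).item())
```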
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4681 | 1.0 | 1069 | 0.5613 | 0.4892 |
| 0.321 | 2.0 | 2138 | 0.6681 | 0.5851 |
| 0.1781 | 3.0 | 3207 | 0.8073 | 0.6107 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ef9803fcabf8b54afc90c15344c733cb
|
dehio/german-qg-t5-quad
|
dehio
|
t5
| 17 | 3 |
transformers
| 1 |
text2text-generation
| true | false | false |
mit
|
['de']
|
['deepset/germanquad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 1,364 | false |
# german-qg-t5-quad
This model is fine-tuned for question generation in German. The expected answer must be highlighted with a `<hl>` token.
## Task example
#### Input
generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]
#### Expected output
Von welchem Gesetzt stammt das Amerikanische ab?
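A minimal sketch reproducing this input/output format with `transformers` (not from the original card; the generation settings are assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "dehio/german-qg-t5-quad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = (
    "generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des "
    "Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das "
    "amerikanische Recht bedeutend davon ab."
)
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```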
## Model description
This model is a fine-tuned version of [valhalla/t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) on the [GermanQUAD](https://www.deepset.ai/germanquad) dataset.
## Training and evaluation data
The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).
### Evaluation
The model achieves a BLEU-4 score of **11.30** on the GermanQuAD test set (n=2204).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
9eaae1cb0a29cd2f069fe66667851b4f
|
lewtun/mt5-small-finetuned-mlsum
|
lewtun
|
mt5
| 21 | 5 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['mlsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,419 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-mlsum
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 1.1475
- Rouge2: 0.1284
- Rougel: 1.0634
- Rougelsum: 1.0778
- Gen Len: 3.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
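For completeness, a minimal summarization sketch (not from the original card). Note the NaN validation loss and very short generation length reported below, so outputs from this checkpoint may be degenerate; the MLSUM language configuration used for fine-tuning is not stated, so the placeholder input is an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lewtun/mt5-small-finetuned-mlsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "News article text here."  # placeholder; replace with an MLSUM-style article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```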
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| nan | 1.0 | 808 | nan | 1.1475 | 0.1284 | 1.0634 | 1.0778 | 3.7939 |
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
fcc2401ad0add2b79c19307d09b79533
|
svjack/Stable-Diffusion-FineTuned-zh-v2
|
svjack
| null | 16 | 19 |
diffusers
| 3 |
text-to-image
| false | false | false |
other
|
['zh']
| null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'zh', 'Chinese']
| false | true | true | 8,626 | false |
# Chinese Stable Diffusion Model Card
<!--

-->
svjack/Stable-Diffusion-FineTuned-zh-v2 is a Chinese-specific latent text-to-image diffusion model capable of generating images from any Chinese text input.
This model was trained with the [diffusers](https://github.com/huggingface/diffusers) library.
For more information about our training method, see [train_zh_model.py](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/train_zh_model.py).
It builds on the strong baseline model [Taiyi-Stable-Diffusion-1B-Chinese-v0.1](https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1) from [IDEA-CCNL](https://github.com/IDEA-CCNL/Fengshenbang-LM).
<!--
[](https://colab.research.google.com/github/rinnakk/japanese-stable-diffusion/blob/master/scripts/txt2img.ipynb)
-->
## Model Details
- **Developed by:** Zhipeng Yang
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** Chinese
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model (LDM)](https://arxiv.org/abs/2112.10752) that used [Stable Diffusion](https://github.com/CompVis/stable-diffusion) as a pre-trained model.
- **Resources for more information:** [https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)
## Examples
First, install the dependencies listed below, which use [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Chinese Stable Diffusion.
```bash
diffusers==0.6.0
transformers
torch
datasets
accelerate
sentencepiece
```
Run this command to log in with your HF Hub token if you haven't before:
```bash
huggingface-cli login
```
Running the pipeline (the snippet uses the pipeline's default scheduler; a sketch for switching to the LMSDiscreteScheduler follows after it):
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("svjack/Stable-Diffusion-FineTuned-zh-v2")
pipeline.safety_checker = lambda images, clip_input: (images, False)
pipeline = pipeline.to("cuda")
prompt = '女孩们打开了另一世界的大门'
image = pipeline(prompt, guidance_scale=7.5).images[0]
```
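A hedged sketch of switching to the LMSDiscreteScheduler mentioned above (the scheduler hyperparameters follow the values commonly used with Stable Diffusion v1 checkpoints and are an assumption, not a documented setting of this repo):
```python
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

# Assumed scheduler settings; adjust if the base checkpoint documents different values.
lms = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
pipeline = StableDiffusionPipeline.from_pretrained(
    "svjack/Stable-Diffusion-FineTuned-zh-v2", scheduler=lms
)
pipeline = pipeline.to("cuda")
image = pipeline("女孩们打开了另一世界的大门", guidance_scale=7.5).images[0]
```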
### Generator Results comparison
[https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)




<!--
_Note: `JapaneseStableDiffusionPipeline` is almost same as diffusers' `StableDiffusionPipeline` but added some lines to initialize our models properly._
## Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1._
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with Japanese captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Japanese Stable Diffusion was trained on Japanese datasets including [LAION-5B](https://laion.ai/blog/laion-5b/) with Japanese captions,
which consists of images that are primarily limited to Japanese descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model.
Further, the ability of the model to generate content with non-Japanese prompts is significantly worse than with Japanese-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
We used the following dataset for training the model:
- Approximately 100 million images with Japanese captions, including the Japanese subset of [LAION-5B](https://laion.ai/blog/laion-5b/).
**Training Procedure**
Japanese Stable Diffusion has the same architecture as Stable Diffusion and was trained by using Stable Diffusion. Because Stable Diffusion was trained on English dataset and the CLIP tokenizer is basically for English, we had 2 stages to transfer to a language-specific model, inspired by [PITI](https://arxiv.org/abs/2205.12952).
1. Train a Japanese-specific text encoder with our Japanese tokenizer from scratch with the latent diffusion model fixed. This stage is expected to map Japanese captions to Stable Diffusion's latent space.
2. Fine-tune the text encoder and the latent diffusion model jointly. This stage is expected to generate Japanese-style images more.
[//]: # (_Note: Japanese Stable Diffusion is still running and this checkpoint is the current best one. We might update to a better checkpoint via this repository._)
-->
|
a7b8d80bc5de525c0bc97d4e3a0136c5
|
anuragshas/wav2vec2-large-xls-r-300m-mr
|
anuragshas
|
wav2vec2
| 19 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mr']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
| true | true | true | 3,129 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-mr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5479
- Wer: 0.5740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 3.7378 | 18.18 | 400 | 3.5047 | 1.0 |
| 3.1707 | 36.36 | 800 | 2.6166 | 0.9912 |
| 1.4942 | 54.55 | 1200 | 0.5778 | 0.6927 |
| 1.2058 | 72.73 | 1600 | 0.5168 | 0.6362 |
| 1.0558 | 90.91 | 2000 | 0.5105 | 0.6069 |
| 0.9488 | 109.09 | 2400 | 0.5151 | 0.6089 |
| 0.8588 | 127.27 | 2800 | 0.5157 | 0.5989 |
| 0.7991 | 145.45 | 3200 | 0.5179 | 0.5740 |
| 0.7545 | 163.64 | 3600 | 0.5348 | 0.5740 |
| 0.7144 | 181.82 | 4000 | 0.5518 | 0.5724 |
| 0.7041 | 200.0 | 4400 | 0.5479 | 0.5740 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-mr --dataset mozilla-foundation/common_voice_8_0 --config mr --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-mr"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "mr", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "या पानास लेखाचे स्वरूप यायला हावे"
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 49.177 | 32.811 |
|
14cb6031836aa13db235280c5b6c4fb7
|
JoshuaRubin/bert-base-uncased-finetuned-math_punctuation-ignore_word_parts
|
JoshuaRubin
|
bert
| 19 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,928 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-math_punctuation-ignore_word_parts
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1981
- Precision: 0.7843
- Recall: 0.7485
- F Score: 0.7648
- Auc: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
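As a rough illustration only (not from the original card), a token-classification inference sketch; the punctuation label set is not documented here, so the printed entity groups are whatever the checkpoint's config defines:
```python
from transformers import pipeline

model_id = "JoshuaRubin/bert-base-uncased-finetuned-math_punctuation-ignore_word_parts"
punct = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Placeholder input: an unpunctuated math-style sentence.
print(punct("let x be the unique solution of the equation then x is positive"))
```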
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F Score | Auc |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:-------:|:------:|
| 0.1064 | 0.64 | 500 | 0.1082 | 0.7558 | 0.6580 | 0.6964 | 0.9086 |
| 0.0781 | 1.27 | 1000 | 0.1025 | 0.7594 | 0.7226 | 0.7365 | 0.9261 |
| 0.0757 | 1.91 | 1500 | 0.1001 | 0.7945 | 0.6899 | 0.7302 | 0.9272 |
| 0.0538 | 2.54 | 2000 | 0.1061 | 0.7689 | 0.7348 | 0.7480 | 0.9298 |
| 0.0425 | 3.18 | 2500 | 0.1123 | 0.7806 | 0.7361 | 0.7560 | 0.9300 |
| 0.0377 | 3.81 | 3000 | 0.1159 | 0.7841 | 0.7437 | 0.7610 | 0.9292 |
| 0.0235 | 4.45 | 3500 | 0.1259 | 0.7786 | 0.7368 | 0.7561 | 0.9276 |
| 0.0227 | 5.08 | 4000 | 0.1436 | 0.7699 | 0.7448 | 0.7555 | 0.9277 |
| 0.0159 | 5.72 | 4500 | 0.1466 | 0.7715 | 0.7333 | 0.7514 | 0.9252 |
| 0.0106 | 6.35 | 5000 | 0.1574 | 0.7710 | 0.7456 | 0.7566 | 0.9276 |
| 0.0111 | 6.99 | 5500 | 0.1560 | 0.7694 | 0.7500 | 0.7595 | 0.9286 |
| 0.0074 | 7.62 | 6000 | 0.1645 | 0.7789 | 0.7511 | 0.7639 | 0.9305 |
| 0.0056 | 8.26 | 6500 | 0.1745 | 0.7887 | 0.7453 | 0.7648 | 0.9265 |
| 0.005 | 8.89 | 7000 | 0.1760 | 0.7779 | 0.7497 | 0.7629 | 0.9281 |
| 0.0038 | 9.53 | 7500 | 0.1873 | 0.7826 | 0.7505 | 0.7634 | 0.9273 |
| 0.0031 | 10.17 | 8000 | 0.1896 | 0.7855 | 0.7477 | 0.7644 | 0.9258 |
| 0.0026 | 10.8 | 8500 | 0.1929 | 0.7849 | 0.7485 | 0.7650 | 0.9263 |
| 0.0017 | 11.44 | 9000 | 0.1981 | 0.7843 | 0.7485 | 0.7648 | 0.9248 |
### Framework versions
- Transformers 4.25.1
- Pytorch 2.0.0.dev20230111
- Datasets 2.8.0
- Tokenizers 0.13.2
|
945ed115fac358b64369544945bc5e9a
|
Helsinki-NLP/opus-mt-bg-fi
|
Helsinki-NLP
|
marian
| 10 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-bg-fi
* source languages: bg
* target languages: fi
* OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.fi | 23.7 | 0.505 |
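The card does not include a usage snippet; a minimal translation sketch with 🤗 Transformers (the Bulgarian example sentence is a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-bg-fi"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Здравей, свят!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```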
|
635e516d078fc4837f5ce4f55813c633
|
dragonSwing/viwav2vec2-base-3k
|
dragonSwing
|
wav2vec2
| 5 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
cc-by-sa-4.0
|
['vi']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['speech', 'automatic-speech-recognition']
| false | true | true | 1,333 | false |
# Wav2Vec2 base model trained on 3K hours of Vietnamese speech
The base model is pre-trained on 16 kHz sampled speech audio from a Vietnamese speech corpus containing 3K hours of spontaneous, read, and broadcast speech. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, such as Vietnamese automatic speech recognition.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more in-detail explanation of how to fine-tune the model.
[Facebook's Wav2Vec2 blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Paper](https://arxiv.org/abs/2006.11477)
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the English pre-trained model.
```python
import torch
from transformers import Wav2Vec2Model
model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-3k")
# Sanity check
inputs = torch.rand([1, 16000])
outputs = model(inputs)
```
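As noted above, the checkpoint ships without a tokenizer. A hedged sketch of attaching a CTC head for fine-tuning, following the standard wav2vec 2.0 recipe; `vocab.json` and the special tokens are placeholders you must build from your own transcripts:
```python
from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor,
                          Wav2Vec2Processor, Wav2Vec2ForCTC)

# vocab.json is a hypothetical character vocabulary built from your labeled text.
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]",
                                 pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                             padding_value=0.0, do_normalize=True,
                                             return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

model = Wav2Vec2ForCTC.from_pretrained(
    "dragonSwing/viwav2vec2-base-3k",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
```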
|
13f24e1b2a25e95a77a6bd007c487bec
|
theojolliffe/bart-model2-3110-e4
|
theojolliffe
|
bart
| 12 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-model2-3110-e4
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Rouge1: 70.0692
- Rouge2: 68.1457
- Rougel: 69.8943
- Rougelsum: 70.0389
- Gen Len: 19.8966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5951 | 1.0 | 553 | 0.3089 | 62.5675 | 54.7411 | 61.2646 | 61.3675 | 19.7241 |
| 0.2541 | 2.0 | 1106 | 0.1432 | 66.113 | 61.964 | 64.6141 | 64.9187 | 19.8966 |
| 0.1547 | 3.0 | 1659 | 0.0964 | 68.6902 | 64.938 | 67.6197 | 67.9181 | 19.8966 |
| 0.1141 | 4.0 | 2212 | 0.1015 | 68.9122 | 66.4279 | 68.4906 | 68.5758 | 19.8966 |
| 0.0728 | 5.0 | 2765 | 0.0819 | 69.2271 | 66.8276 | 68.6915 | 68.849 | 19.8966 |
| 0.0563 | 6.0 | 3318 | 0.0700 | 70.0692 | 68.1457 | 69.8943 | 70.0389 | 19.8966 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
01f8506ac04b47d373cf946c683b074f
|
UchihaMadara/model2
|
UchihaMadara
|
bert
| 16 | 22 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,307 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2319
- Accuracy: 0.9479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 224 | 0.2074 | 0.9453 |
| No log | 2.0 | 448 | 0.2421 | 0.9440 |
| 0.2593 | 3.0 | 672 | 0.2319 | 0.9479 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
21030e756a998cc7c397cb8a1bf43aaa
|
viba98/lineal-ic
|
viba98
| null | 26 | 50 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 3 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 1,339 | false |
### lineal-ic Dreambooth model trained by viba98 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
linealic







|
6a777096b7011372f40f58716c379528
|
Isaacp/bert-base-uncased-issues-128
|
Isaacp
|
bert
| 10 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,919 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2456
## Model description
More information needed
## Intended uses & limitations
More information needed
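As a rough illustration of intended use (not from the original card), a fill-mask sketch; the prompt is a placeholder in the GitHub-issues domain the model was further pretrained on:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Isaacp/bert-base-uncased-issues-128")
print(fill("This issue describes a [MASK] in the tokenizer."))
```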
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0986 | 1.0 | 291 | 1.6929 |
| 1.6401 | 2.0 | 582 | 1.4304 |
| 1.4881 | 3.0 | 873 | 1.3916 |
| 1.4 | 4.0 | 1164 | 1.3796 |
| 1.3416 | 5.0 | 1455 | 1.2012 |
| 1.2807 | 6.0 | 1746 | 1.2733 |
| 1.2396 | 7.0 | 2037 | 1.2646 |
| 1.1993 | 8.0 | 2328 | 1.2098 |
| 1.1661 | 9.0 | 2619 | 1.1862 |
| 1.1406 | 10.0 | 2910 | 1.2223 |
| 1.1294 | 11.0 | 3201 | 1.2056 |
| 1.1042 | 12.0 | 3492 | 1.1655 |
| 1.0827 | 13.0 | 3783 | 1.2525 |
| 1.0738 | 14.0 | 4074 | 1.1685 |
| 1.0626 | 15.0 | 4365 | 1.1182 |
| 1.0629 | 16.0 | 4656 | 1.2456 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
578d3b1963f6092af04429b6c1866444
|
Ramu/distilbert-base-uncased-finetuned-emotion
|
Ramu
|
distilbert
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.926
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
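As a rough illustration of intended use (not from the original card), a minimal inference sketch; whether the checkpoint exposes readable emotion names or generic `LABEL_0`–`LABEL_5` ids depends on its config and is assumed here:
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Ramu/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```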
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8112 | 1.0 | 250 | 0.3147 | 0.903 | 0.8992 |
| 0.2454 | 2.0 | 500 | 0.2167 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
a16ba4f7fd7f22a8b25654f9199cb581
|
Helsinki-NLP/opus-mt-en-ru
|
Helsinki-NLP
|
marian
| 11 | 55,612 |
transformers
| 10 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,075 | false |
### opus-mt-en-ru
* source languages: en
* target languages: ru
* OPUS readme: [en-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.en.ru | 31.1 | 0.581 |
| newstest2013.en.ru | 23.5 | 0.513 |
| newstest2015-enru.en.ru | 27.5 | 0.564 |
| newstest2016-enru.en.ru | 26.4 | 0.548 |
| newstest2017-enru.en.ru | 29.1 | 0.572 |
| newstest2018-enru.en.ru | 25.4 | 0.554 |
| newstest2019-enru.en.ru | 27.1 | 0.533 |
| Tatoeba.en.ru | 48.4 | 0.669 |
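The card does not include a usage snippet; a minimal sketch with the high-level `pipeline` API (the example sentence is a placeholder):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ru")
print(translator("Hello, how are you today?"))
```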
|
89359987ac224d646552ade8c0a24bb6
|
timm/maxvit_small_tf_224.in1k
|
timm
| null | 4 | 137 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 22,015 | false |
# Model card for maxvit_small_tf_224.in1k
An official MaxViT image classification model, trained in TensorFlow on ImageNet-1k by the paper authors.
Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name containing the string `rw` is a `timm`-specific config with modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations.
All models with the string `tf` exactly match the TensorFlow-based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 68.9
- GMACs: 11.7
- Activations (M): 53.2
- Image size: 224 x 224
- **Papers:**
- MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed below for torch.topk
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('maxvit_small_tf_224.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'maxvit_small_tf_224.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'maxvit_small_tf_224.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
5347fa1479b3d16975ce93741b5275f8
|
Deep98/IPod-clustered
|
Deep98
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,856 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/IPod-clustered
This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4336
- Train End Logits Accuracy: 0.8819
- Train Start Logits Accuracy: 0.8819
- Validation Loss: 0.3193
- Validation End Logits Accuracy: 0.8636
- Validation Start Logits Accuracy: 0.8636
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
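As a rough illustration of intended use (not from the original card): this repository ships TensorFlow weights, so the sketch below requests the TF backend explicitly; the question and context are placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Deep98/IPod-clustered", framework="tf")
print(qa(question="When was the iPod first released?",
         context="The iPod is a line of portable media players first released by Apple in 2001."))
```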
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4336 | 0.8819 | 0.8819 | 0.3193 | 0.8636 | 0.8636 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ee2bd3044ebe8b030a63ce519e661895
|
M-Quan/wav2vec2-E
|
M-Quan
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,621 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4832
- Wer: 0.3432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5034 | 4.0 | 500 | 1.1620 | 0.8995 |
| 0.5738 | 8.0 | 1000 | 0.4625 | 0.4396 |
| 0.2142 | 12.0 | 1500 | 0.4791 | 0.3965 |
| 0.1219 | 16.0 | 2000 | 0.4677 | 0.3703 |
| 0.0854 | 20.0 | 2500 | 0.4782 | 0.3544 |
| 0.0587 | 24.0 | 3000 | 0.4680 | 0.3516 |
| 0.044 | 28.0 | 3500 | 0.4832 | 0.3432 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
7fae9b8b97c7c5cfe64d9586f1fe5632
|
MultiBertGunjanPatrick/multiberts-seed-2-60k
|
MultiBertGunjanPatrick
|
bert
| 7 | 4 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-2']
| false | true | true | 6,479 | false |
# MultiBERTs Seed 2 Checkpoint 60k (uncased)
This is the seed-2 MultiBERTs (pretrained BERT) model at the intermediate 60k-step checkpoint, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-60k')
model = BertModel.from_pretrained("multiberts-seed-2-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
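The 80/10/10 rule above can be summarised in a few lines of code. The following is an illustrative sketch only (real pretraining code also excludes special tokens from masking); it is not the script used to train this checkpoint:
```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Illustrative 80/10/10 MLM masking; not the original pretraining code."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    # pick 15% of the tokens as prediction targets
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked tokens
    # 80% of the selected tokens become [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = mask_token_id
    # 10% become a random token, the remaining 10% are left unchanged
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    input_ids[indices_random] = torch.randint(vocab_size, labels.shape, dtype=torch.long)[indices_random]
    return input_ids, labels
```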
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
96b9b3ccbbd72f35ab7fcbdcbe7cde7a
|
rootcodes/wav2vec2-large-xls-r-300m-turkish-colab
|
rootcodes
|
wav2vec2
| 15 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4313
- Wer: 0.3336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
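For reference, these hyperparameters roughly correspond to the following `TrainingArguments`; this is a sketch assuming a recent 🤗 Transformers version, not the exact training script:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                      # Native AMP mixed precision
    seed=42,
)
```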
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0055 | 3.67 | 400 | 0.7015 | 0.6789 |
| 0.4384 | 7.34 | 800 | 0.4827 | 0.4875 |
| 0.2143 | 11.01 | 1200 | 0.4672 | 0.4554 |
| 0.1431 | 14.68 | 1600 | 0.4331 | 0.4014 |
| 0.1053 | 18.35 | 2000 | 0.4471 | 0.3822 |
| 0.0857 | 22.02 | 2400 | 0.4324 | 0.3637 |
| 0.0683 | 25.69 | 2800 | 0.4305 | 0.3423 |
| 0.0526 | 29.36 | 3200 | 0.4313 | 0.3336 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
eacb41535f3fac51d535349dd0d40238
|
NX2411/wav2vec2-large-xlsr-en-demo
|
NX2411
|
wav2vec2
| 18 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,863 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-en-demo
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1356
- Wer: 0.2015
## Model description
More information needed
## Intended uses & limitations
More information needed
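A minimal inference sketch (assumed usage, since the card does not document it; the file path is a placeholder and audio should be 16 kHz):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="NX2411/wav2vec2-large-xlsr-en-demo")
print(asr("path/to/audio.wav")["text"])
```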
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3911 | 0.5 | 500 | 0.5397 | 0.2615 |
| 0.3413 | 1.01 | 1000 | 0.1423 | 0.2137 |
| 0.243 | 1.51 | 1500 | 0.1458 | 0.2210 |
| 0.2232 | 2.01 | 2000 | 0.1380 | 0.2143 |
| 0.162 | 2.51 | 2500 | 0.1464 | 0.2149 |
| 0.1384 | 3.02 | 3000 | 0.1348 | 0.2109 |
| 0.1164 | 3.52 | 3500 | 0.1324 | 0.2040 |
| 0.1103 | 4.02 | 4000 | 0.1310 | 0.2051 |
| 0.0857 | 4.53 | 4500 | 0.1356 | 0.2015 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
cd2d735bc8a5da827c643a0b72b3f5f6
|
akmoyu/whisper-small-mn
|
akmoyu
|
whisper
| 13 | 3 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mn']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,480 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Mn - akmoyu
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8308
- Wer: 50.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
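A minimal transcription sketch (assumed usage; the file path is a placeholder and audio should be 16 kHz):
```python
from transformers import pipeline

transcriber = pipeline("automatic-speech-recognition", model="akmoyu/whisper-small-mn")
print(transcriber("path/to/mongolian_audio.wav")["text"])
```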
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0306 | 7.94 | 1000 | 0.6344 | 52.8724 |
| 0.0017 | 15.87 | 2000 | 0.7480 | 50.3659 |
| 0.0004 | 23.81 | 3000 | 0.8137 | 50.5406 |
| 0.0003 | 15.87 | 4000 | 0.8308 | 50.5188 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ad97a29a384194b9518942a2ee0a4ba8
|
espnet/kan-bayashi_vctk_gst_fastspeech2
|
espnet
| null | 21 | 6 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['vctk']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,800 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036266/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
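Until an official snippet is added, the following minimal sketch assumes the standard ESPnet2 `Text2Speech` interface; since this is a GST model, a reference utterance is assumed to be needed for the style embedding:
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_gst_fastspeech2")
ref_speech, _ = sf.read("reference.wav")  # reference utterance for GST (assumed)
output = tts("Hello, this is a test.", speech=ref_speech)
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```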
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
8286aa0d0d9165de1a6027d1b836df33
|
paola-md/distilr2-lr1e05-wd0.05-bs64
|
paola-md
|
roberta
| 6 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,519 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilr2-lr1e05-wd0.05-bs64
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
- Rmse: 0.5217
- Mse: 0.2722
- Mae: 0.4147
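The RMSE/MSE/MAE values suggest a single-output regression head; below is a hedged sketch of how such metrics could be computed in a `Trainer` `compute_metrics` callback (illustrative, not necessarily the exact function used here):
```python
import numpy as np

def compute_metrics(eval_pred):
    # predictions: (batch, 1) regression outputs; labels: (batch,) float targets
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)
    mse = np.mean((predictions - labels) ** 2)
    mae = np.mean(np.abs(predictions - labels))
    return {"rmse": float(np.sqrt(mse)), "mse": float(mse), "mae": float(mae)}
```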
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.277 | 1.0 | 312 | 0.2749 | 0.5243 | 0.2749 | 0.4243 |
| 0.2745 | 2.0 | 624 | 0.2731 | 0.5226 | 0.2731 | 0.4120 |
| 0.2732 | 3.0 | 936 | 0.2725 | 0.5220 | 0.2725 | 0.4156 |
| 0.2718 | 4.0 | 1248 | 0.2722 | 0.5217 | 0.2722 | 0.4147 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ca7dc9c99ddc832b37ab797934e477bc
|
vanme/vmehlin_distilbert-finetuned-squad
|
vanme
|
distilbert
| 12 | 6 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,199 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vmehlin_distilbert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
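A minimal extractive question-answering sketch (assumed usage; the question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vanme/vmehlin_distilbert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```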
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
### co2_eq_emissions:
- emissions: 49.49 g
- source: eco2AI
- training_time: 00:31:54
- geographical_location: Bavaria, Germany
- hardware_used: Intel(R) Xeon(R) Gold 5215 CPUs (2 devices) & NVIDIA A40 (1 device)
|
4eb51a4b230511dfc65f2ca3bd7fb1af
|
google/multiberts-seed_3-step_100k
|
google
|
bert
| 8 | 50 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_100k']
| false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 3, Step 100k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #3, captured at step 100k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_100k')
model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_100k')
model = BertModel.from_pretrained("google/multiberts-seed_3-step_100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
160c4de553b7b7c7aa0720e892ca5e50
|
Billwzl/20split_dataset_version1
|
Billwzl
|
distilbert
| 10 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,751 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset_version1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1942
## Model description
More information needed
## Intended uses & limitations
More information needed
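A minimal fill-mask sketch (assumed usage; DistilBERT uses the `[MASK]` token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Billwzl/20split_dataset_version1")
for prediction in fill_mask("The goal of language modeling is to [MASK] the next token."):
    print(prediction["token_str"], prediction["score"])
```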
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7475 | 1.0 | 11851 | 2.5194 |
| 2.5528 | 2.0 | 23702 | 2.4191 |
| 2.4649 | 3.0 | 35553 | 2.3646 |
| 2.4038 | 4.0 | 47404 | 2.3289 |
| 2.3632 | 5.0 | 59255 | 2.2922 |
| 2.3273 | 6.0 | 71106 | 2.2739 |
| 2.2964 | 7.0 | 82957 | 2.2494 |
| 2.2732 | 8.0 | 94808 | 2.2217 |
| 2.2526 | 9.0 | 106659 | 2.2149 |
| 2.2369 | 10.0 | 118510 | 2.2029 |
| 2.222 | 11.0 | 130361 | 2.2020 |
| 2.2135 | 12.0 | 142212 | 2.1942 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
32a81be83f80e1ba35dd6fd5318ecc23
|
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v2
|
AykeeSalazar
|
vit
| 9 | 13 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,580 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI-adunest-v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8271
- Accuracy: 0.7705
## Model description
More information needed
## Intended uses & limitations
More information needed
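A minimal image-classification sketch (assumed usage; the label set comes from the `imagefolder` dataset and is not documented here, and the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v2")
print(classifier("path/to/image.jpg"))  # returns the top predicted labels with scores
```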
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 100 | 0.3811 | 0.8511 |
| No log | 0.81 | 200 | 0.3707 | 0.8609 |
| No log | 1.21 | 300 | 0.5708 | 0.7325 |
| No log | 1.61 | 400 | 0.3121 | 0.8778 |
| 0.3308 | 2.02 | 500 | 0.3358 | 0.8445 |
| 0.3308 | 2.42 | 600 | 0.2820 | 0.8768 |
| 0.3308 | 2.82 | 700 | 0.4825 | 0.7695 |
| 0.3308 | 3.23 | 800 | 0.3133 | 0.8640 |
| 0.3308 | 3.63 | 900 | 0.4509 | 0.8219 |
| 0.2028 | 4.03 | 1000 | 0.5426 | 0.7551 |
| 0.2028 | 4.44 | 1100 | 0.4886 | 0.8552 |
| 0.2028 | 4.84 | 1200 | 0.5649 | 0.7695 |
| 0.2028 | 5.24 | 1300 | 0.5925 | 0.7900 |
| 0.2028 | 5.65 | 1400 | 0.4203 | 0.8439 |
| 0.1471 | 6.05 | 1500 | 0.4275 | 0.8486 |
| 0.1471 | 6.45 | 1600 | 0.3683 | 0.8727 |
| 0.1471 | 6.85 | 1700 | 0.5709 | 0.8121 |
| 0.1471 | 7.26 | 1800 | 0.6209 | 0.7680 |
| 0.1471 | 7.66 | 1900 | 0.4971 | 0.8147 |
| 0.101 | 8.06 | 2000 | 0.8792 | 0.7567 |
| 0.101 | 8.47 | 2100 | 0.3288 | 0.8670 |
| 0.101 | 8.87 | 2200 | 0.3643 | 0.8342 |
| 0.101 | 9.27 | 2300 | 0.4883 | 0.8711 |
| 0.101 | 9.68 | 2400 | 0.2892 | 0.8943 |
| 0.0667 | 10.08 | 2500 | 0.5437 | 0.8398 |
| 0.0667 | 10.48 | 2600 | 0.5841 | 0.8450 |
| 0.0667 | 10.89 | 2700 | 0.8016 | 0.8219 |
| 0.0667 | 11.29 | 2800 | 0.6389 | 0.7772 |
| 0.0667 | 11.69 | 2900 | 0.3714 | 0.8753 |
| 0.0674 | 12.1 | 3000 | 0.9811 | 0.7130 |
| 0.0674 | 12.5 | 3100 | 0.6359 | 0.8101 |
| 0.0674 | 12.9 | 3200 | 0.5691 | 0.8285 |
| 0.0674 | 13.31 | 3300 | 0.6123 | 0.8316 |
| 0.0674 | 13.71 | 3400 | 0.3655 | 0.8978 |
| 0.0525 | 14.11 | 3500 | 0.4988 | 0.8583 |
| 0.0525 | 14.52 | 3600 | 0.6153 | 0.8450 |
| 0.0525 | 14.92 | 3700 | 0.4189 | 0.8881 |
| 0.0525 | 15.32 | 3800 | 0.9713 | 0.7967 |
| 0.0525 | 15.73 | 3900 | 1.1224 | 0.7967 |
| 0.0438 | 16.13 | 4000 | 0.5725 | 0.8578 |
| 0.0438 | 16.53 | 4100 | 0.4725 | 0.8532 |
| 0.0438 | 16.94 | 4200 | 0.4696 | 0.8640 |
| 0.0438 | 17.34 | 4300 | 0.4028 | 0.8789 |
| 0.0438 | 17.74 | 4400 | 0.9452 | 0.7746 |
| 0.0462 | 18.15 | 4500 | 0.4455 | 0.8783 |
| 0.0462 | 18.55 | 4600 | 0.6328 | 0.8311 |
| 0.0462 | 18.95 | 4700 | 0.6707 | 0.8296 |
| 0.0462 | 19.35 | 4800 | 0.7771 | 0.8429 |
| 0.0462 | 19.76 | 4900 | 1.2832 | 0.7408 |
| 0.0381 | 20.16 | 5000 | 0.5415 | 0.8737 |
| 0.0381 | 20.56 | 5100 | 0.8932 | 0.7977 |
| 0.0381 | 20.97 | 5200 | 0.5182 | 0.8691 |
| 0.0381 | 21.37 | 5300 | 0.5967 | 0.8794 |
| 0.0381 | 21.77 | 5400 | 0.8271 | 0.7705 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
3ccd9c193f27dbd30f13a53ca163af96
|
ser-mei/gpt-finetuning-cervantes
|
ser-mei
|
gpt2
| 11 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,817 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-finetuning-cervantes
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8331
## Model description
More information needed
## Intended uses & limitations
More information needed
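A minimal text-generation sketch (assumed usage; the prompt is an arbitrary Spanish example):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ser-mei/gpt-finetuning-cervantes")
print(generator("En un lugar de la Mancha,", max_new_tokens=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```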
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0291 | 0.96 | 13 | 4.6705 |
| 4.7952 | 1.96 | 26 | 4.4547 |
| 4.5759 | 2.96 | 39 | 4.3201 |
| 4.4032 | 3.96 | 52 | 4.2451 |
| 4.269 | 4.96 | 65 | 4.1911 |
| 4.143 | 5.96 | 78 | 4.1577 |
| 4.0229 | 6.96 | 91 | 4.1306 |
| 3.9047 | 7.96 | 104 | 4.1165 |
| 3.7886 | 8.96 | 117 | 4.1114 |
| 3.6666 | 9.96 | 130 | 4.1109 |
| 3.539 | 10.96 | 143 | 4.1201 |
| 3.4117 | 11.96 | 156 | 4.1374 |
| 3.272 | 12.96 | 169 | 4.1538 |
| 3.1283 | 13.96 | 182 | 4.1876 |
| 2.9728 | 14.96 | 195 | 4.2226 |
| 2.816 | 15.96 | 208 | 4.2695 |
| 2.6475 | 16.96 | 221 | 4.3106 |
| 2.4765 | 17.96 | 234 | 4.3678 |
| 2.302 | 18.96 | 247 | 4.4249 |
| 2.1257 | 19.96 | 260 | 4.4908 |
| 1.9537 | 20.96 | 273 | 4.5664 |
| 1.7834 | 21.96 | 286 | 4.6324 |
| 1.6177 | 22.96 | 299 | 4.6944 |
| 1.4573 | 23.96 | 312 | 4.7880 |
| 1.3057 | 24.96 | 325 | 4.8843 |
| 1.1652 | 25.96 | 338 | 4.9760 |
| 1.0341 | 26.96 | 351 | 5.0612 |
| 0.9101 | 27.96 | 364 | 5.1714 |
| 0.8017 | 28.96 | 377 | 5.2702 |
| 0.706 | 29.96 | 390 | 5.3530 |
| 0.6194 | 30.96 | 403 | 5.4535 |
| 0.5436 | 31.96 | 416 | 5.5373 |
| 0.4816 | 32.96 | 429 | 5.6153 |
| 0.4309 | 33.96 | 442 | 5.7014 |
| 0.3899 | 34.96 | 455 | 5.7749 |
| 0.3544 | 35.96 | 468 | 5.8430 |
| 0.3236 | 36.96 | 481 | 5.9237 |
| 0.3005 | 37.96 | 494 | 5.9824 |
| 0.2804 | 38.96 | 507 | 6.0264 |
| 0.263 | 39.96 | 520 | 6.0797 |
| 0.2513 | 40.96 | 533 | 6.1285 |
| 0.2376 | 41.96 | 546 | 6.1900 |
| 0.2264 | 42.96 | 559 | 6.2212 |
| 0.2183 | 43.96 | 572 | 6.2812 |
| 0.2104 | 44.96 | 585 | 6.3079 |
| 0.203 | 45.96 | 598 | 6.3501 |
| 0.1964 | 46.96 | 611 | 6.3730 |
| 0.1912 | 47.96 | 624 | 6.4190 |
| 0.1854 | 48.96 | 637 | 6.4598 |
| 0.1817 | 49.96 | 650 | 6.4618 |
| 0.1792 | 50.96 | 663 | 6.4914 |
| 0.1748 | 51.96 | 676 | 6.5385 |
| 0.1732 | 52.96 | 689 | 6.5689 |
| 0.1689 | 53.96 | 702 | 6.5761 |
| 0.1672 | 54.96 | 715 | 6.5775 |
| 0.1657 | 55.96 | 728 | 6.6362 |
| 0.1625 | 56.96 | 741 | 6.6573 |
| 0.1611 | 57.96 | 754 | 6.7019 |
| 0.1588 | 58.96 | 767 | 6.6602 |
| 0.1573 | 59.96 | 780 | 6.7015 |
| 0.1547 | 60.96 | 793 | 6.7323 |
| 0.1542 | 61.96 | 806 | 6.7368 |
| 0.1538 | 62.96 | 819 | 6.7704 |
| 0.1513 | 63.96 | 832 | 6.7963 |
| 0.1504 | 64.96 | 845 | 6.7988 |
| 0.1506 | 65.96 | 858 | 6.8386 |
| 0.1497 | 66.96 | 871 | 6.8039 |
| 0.15 | 67.96 | 884 | 6.8126 |
| 0.1497 | 68.96 | 897 | 6.8858 |
| 0.143 | 69.96 | 910 | 6.8331 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+rocm5.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
1d56e70c9c6b922679e7e172bea724c0
|
gvs/wav2vec2-large-xlsr-malayalam
|
gvs
|
wav2vec2
| 9 | 23 |
transformers
| 2 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['ml']
|
['Indic TTS Malayalam Speech Corpus', 'Openslr Malayalam Speech Corpus', 'SMC Malayalam Speech Corpus', 'IIIT-H Indic Speech Databases']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 7,487 | false |
# Wav2Vec2-Large-XLSR-53-ml
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on ml (Malayalam) using the [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The notebooks used to train the model are available [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = <load-test-split-of-combined-dataset> # Details on loading this dataset in the evaluation section
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"])
```
## Evaluation
The model can be evaluated as follows on the test data of the combined custom dataset. For more details on dataset preparation, check the notebooks mentioned at the end of this file.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, concatenate_datasets
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
from pathlib import Path
# The custom dataset needs to be created using notebook mentioned at the end of this file
data_dir = Path('<path-to-custom-dataset>')
dataset_folders = {
'iiit': 'iiit_mal_abi',
'openslr': 'openslr',
'indic-tts': 'indic-tts-ml',
'msc-reviewed': 'msc-reviewed-speech-v1.0+20200825',
}
# Set directories for datasets
openslr_male_dir = data_dir / dataset_folders['openslr'] / 'male'
openslr_female_dir = data_dir / dataset_folders['openslr'] / 'female'
iiit_dir = data_dir / dataset_folders['iiit']
indic_tts_male_dir = data_dir / dataset_folders['indic-tts'] / 'male'
indic_tts_female_dir = data_dir / dataset_folders['indic-tts'] / 'female'
msc_reviewed_dir = data_dir / dataset_folders['msc-reviewed']
# Load the datasets
openslr_male = load_dataset("json", data_files=[f"{str(openslr_male_dir.absolute())}/sample_{i}.json" for i in range(2023)], split="train")
openslr_female = load_dataset("json", data_files=[f"{str(openslr_female_dir.absolute())}/sample_{i}.json" for i in range(2103)], split="train")
iiit = load_dataset("json", data_files=[f"{str(iiit_dir.absolute())}/sample_{i}.json" for i in range(1000)], split="train")
indic_tts_male = load_dataset("json", data_files=[f"{str(indic_tts_male_dir.absolute())}/sample_{i}.json" for i in range(5649)], split="train")
indic_tts_female = load_dataset("json", data_files=[f"{str(indic_tts_female_dir.absolute())}/sample_{i}.json" for i in range(2950)], split="train")
msc_reviewed = load_dataset("json", data_files=[f"{str(msc_reviewed_dir.absolute())}/sample_{i}.json" for i in range(1541)], split="train")
# Create test split as 20%, set random seed as well.
test_size = 0.2
random_seed=1
openslr_male_splits = openslr_male.train_test_split(test_size=test_size, seed=random_seed)
openslr_female_splits = openslr_female.train_test_split(test_size=test_size, seed=random_seed)
iiit_splits = iiit.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_male_splits = indic_tts_male.train_test_split(test_size=test_size, seed=random_seed)
indic_tts_female_splits = indic_tts_female.train_test_split(test_size=test_size, seed=random_seed)
msc_reviewed_splits = msc_reviewed.train_test_split(test_size=test_size, seed=random_seed)
# Get combined test dataset
split_list = [openslr_male_splits, openslr_female_splits, indic_tts_male_splits, indic_tts_female_splits, msc_reviewed_splits, iiit_splits]
test_dataset = concatenate_datasets([split['test'] for split in split_list])
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model = Wav2Vec2ForCTC.from_pretrained("gvs/wav2vec2-large-xlsr-malayalam")
model.to("cuda")
resamplers = {
48000: torchaudio.transforms.Resample(48_000, 16_000),
}
chars_to_ignore_regex = '[\\\\,\\\\?\\\\.\\\\!\\\\-\\\\;\\\\:\\\\"\\\\“\\\\%\\\\‘\\\\”\\\\�Utrnle\\\\_]'
unicode_ignore_regex = r'[\\\\u200e]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
batch["sentence"] = re.sub(unicode_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
# Resample if its not in 16kHz
if sampling_rate != 16000:
batch["speech"] = resamplers[sampling_rate](speech_array).squeeze().numpy()
else:
batch["speech"] = speech_array.squeeze().numpy()
# If more than one dimension is present, pick first one
if batch["speech"].ndim > 1:
batch["speech"] = batch["speech"][0]
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result (WER)**: 28.43 %
## Training
A combined dataset was created using [Indic TTS Malayalam Speech Corpus (via Kaggle)](https://www.kaggle.com/kavyamanohar/indic-tts-malayalam-speech-corpus), [Openslr Malayalam Speech Corpus](http://openslr.org/63/), [SMC Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and [IIIT-H Indic Speech Databases](http://speech.iiit.ac.in/index.php/research-svl/69.html). The datasets were downloaded and converted to HF Dataset format using [this notebook](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/make_hf_dataset.ipynb)
The notebook used for training and evaluation can be found [here](https://github.com/gauthamsuresh09/wav2vec2-large-xlsr-53-malayalam/blob/main/fine-tune-xlsr-wav2vec2-on-malayalam-asr-with-transformers_v2.ipynb)
|
b10b5d2e90ea34de829b68d1b14c0f0d
|
shpotes/xls-r-et-cv_8_0
|
shpotes
|
wav2vec2
| 47 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
| true | true | true | 1,795 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-et-cv_8_0
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ET dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4623
- Wer: 0.3420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3082 | 12.5 | 500 | 0.3871 | 0.4907 |
| 0.1497 | 25.0 | 1000 | 0.4168 | 0.4278 |
| 0.1243 | 37.5 | 1500 | 0.4446 | 0.4220 |
| 0.0954 | 50.0 | 2000 | 0.4426 | 0.3946 |
| 0.0741 | 62.5 | 2500 | 0.4502 | 0.3800 |
| 0.0533 | 75.0 | 3000 | 0.4618 | 0.3653 |
| 0.0447 | 87.5 | 3500 | 0.4518 | 0.3461 |
| 0.0396 | 100.0 | 4000 | 0.4623 | 0.3420 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
01dc6c2e26af5bb98556a48de22b70ad
|
l3cube-pune/punjabi-bert
|
l3cube-pune
|
bert
| 8 | 2 |
transformers
| 1 |
fill-mask
| true | false | false |
cc-by-4.0
|
['pa']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 516 | false |
## PunjabiBERT
PunjabiBERT is a Punjabi BERT model trained on publicly available Punjabi monolingual datasets.
Preliminary details on the dataset, models, and baseline results can be found in our [<a href='https://arxiv.org/abs/2211.11418'> paper </a>].
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
|
ce8d0ccf25203287c2a2f5b4f80554f4
|
Geotrend/distilbert-base-en-no-cased
|
Geotrend
|
distilbert
| 6 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['multilingual']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,224 | false |
# distilbert-base-en-no-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as those produced by the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-no-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-no-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
e09e45f1c54a9a5851651134cff067f8
|
nc33/t5_finetuned_genboolq
|
nc33
|
t5
| 13 | 17 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,623 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_finetuned_genboolq
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5011
- Rouge1: 36.4881
- Rouge2: 17.8649
- Rougel: 34.2658
- Rougelsum: 34.2336
- Gen Len: 11.7003
## Model description
More information needed
## Intended uses & limitations
More information needed
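A minimal generation sketch (assumed usage; the expected input/prompt format for boolean-question generation is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nc33/t5_finetuned_genboolq")
model = AutoModelForSeq2SeqLM.from_pretrained("nc33/t5_finetuned_genboolq")

inputs = tokenizer("A passage of text to turn into a yes/no question.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```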
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5854 | 1.0 | 2082 | 0.5182 | 35.5544 | 16.9686 | 33.3783 | 33.3536 | 11.5918 |
| 0.5479 | 2.0 | 4164 | 0.4969 | 37.0664 | 18.2443 | 34.7139 | 34.6934 | 11.8662 |
| 0.5405 | 3.0 | 6246 | 0.5011 | 36.4881 | 17.8649 | 34.2658 | 34.2336 | 11.7003 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
48a2b46f63530d4af5e44b944208065f
|
Chikashi/t5-small-finetuned-cnndm-wikihow
|
Chikashi
|
t5
| 11 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wikihow']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,810 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm-wikihow
This model is a fine-tuned version of [Sevil/t5-small-finetuned-cnndm_3epoch_v2](https://huggingface.co/Sevil/t5-small-finetuned-cnndm_3epoch_v2) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2653
- Rouge1: 27.5037
- Rouge2: 10.8442
- Rougel: 23.4674
- Rougelsum: 26.7997
- Gen Len: 18.5558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8459 | 0.13 | 5000 | 2.5755 | 25.2929 | 8.7852 | 21.2379 | 24.5649 | 18.4758 |
| 2.7251 | 0.25 | 10000 | 2.5189 | 25.33 | 9.0505 | 21.4892 | 24.6523 | 18.4513 |
| 2.6696 | 0.38 | 15000 | 2.4805 | 26.3909 | 9.6858 | 22.3589 | 25.7297 | 18.4649 |
| 2.647 | 0.51 | 20000 | 2.4491 | 25.9234 | 9.3936 | 22.0086 | 25.2342 | 18.5558 |
| 2.5973 | 0.64 | 25000 | 2.4251 | 26.4988 | 9.8197 | 22.6201 | 25.8407 | 18.3438 |
| 2.5916 | 0.76 | 30000 | 2.4022 | 26.3149 | 9.8432 | 22.3695 | 25.6581 | 18.4506 |
| 2.5691 | 0.89 | 35000 | 2.3801 | 26.4198 | 9.8848 | 22.4856 | 25.7847 | 18.5381 |
| 2.5365 | 1.02 | 40000 | 2.3755 | 26.5846 | 10.0287 | 22.667 | 25.9606 | 18.5608 |
| 2.4649 | 1.14 | 45000 | 2.3663 | 26.5925 | 10.0569 | 22.6191 | 25.9247 | 18.5803 |
| 2.4539 | 1.27 | 50000 | 2.3490 | 26.9735 | 10.2389 | 22.9536 | 26.282 | 18.5126 |
| 2.4578 | 1.4 | 55000 | 2.3374 | 26.7878 | 10.2275 | 22.849 | 26.1188 | 18.6162 |
| 2.4365 | 1.53 | 60000 | 2.3266 | 27.1171 | 10.403 | 23.0596 | 26.4284 | 18.6128 |
| 2.428 | 1.65 | 65000 | 2.3209 | 27.1762 | 10.578 | 23.1577 | 26.5007 | 18.5246 |
| 2.4293 | 1.78 | 70000 | 2.3145 | 27.0896 | 10.5146 | 23.1502 | 26.4338 | 18.4604 |
| 2.4335 | 1.91 | 75000 | 2.2979 | 27.3373 | 10.6273 | 23.2944 | 26.6725 | 18.5403 |
| 2.3981 | 2.03 | 80000 | 2.3008 | 27.1857 | 10.6455 | 23.1333 | 26.5203 | 18.5412 |
| 2.3395 | 2.16 | 85000 | 2.2908 | 27.3123 | 10.7063 | 23.3126 | 26.626 | 18.4265 |
| 2.3463 | 2.29 | 90000 | 2.2869 | 27.5328 | 10.7662 | 23.4527 | 26.8613 | 18.5664 |
| 2.3481 | 2.42 | 95000 | 2.2802 | 27.4799 | 10.7826 | 23.4538 | 26.7912 | 18.5449 |
| 2.3345 | 2.54 | 100000 | 2.2774 | 27.3182 | 10.724 | 23.3276 | 26.669 | 18.5908 |
| 2.3254 | 2.67 | 105000 | 2.2713 | 27.3942 | 10.777 | 23.3918 | 26.7036 | 18.5681 |
| 2.3369 | 2.8 | 110000 | 2.2666 | 27.5976 | 10.9144 | 23.5832 | 26.9147 | 18.5471 |
| 2.3269 | 2.93 | 115000 | 2.2653 | 27.5037 | 10.8442 | 23.4674 | 26.7997 | 18.5558 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
e1c874743bcb534ee23f4fb5677e5597
|
alphahg/mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408
|
alphahg
|
mbart
| 12 | 255 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,363 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408
This model is a fine-tuned version of [alphahg/mbart-large-50-finetuned-en-to-ko-8603428](https://huggingface.co/alphahg/mbart-large-50-finetuned-en-to-ko-8603428) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8130
## Model description
More information needed
## Intended uses & limitations
More information needed
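A minimal English-to-Korean translation sketch (assumed usage; it assumes the tokenizer keeps the standard mBART-50 language codes `en_XX` and `ko_KR`):
```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "alphahg/mbart-large-50-finetuned-en-to-ko-8603428-finetuned-en-to-ko-9914408"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("Machine translation is fun.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```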
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.795 | 1.0 | 18752 | 0.8130 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
45affe2ca8d0d7147b5cc78c6469efd9
|
GW12/wav2vec2-custom-colab
|
GW12
|
wav2vec2
| 7 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,219 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-custom-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7785
- Wer: 0.3534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4783 | 0.3 | 500 | 0.7199 | 0.5564 |
| 0.4833 | 0.61 | 1000 | 0.8089 | 0.6181 |
| 0.5733 | 0.91 | 1500 | 0.7617 | 0.5530 |
| 0.4641 | 1.21 | 2000 | 0.7937 | 0.5731 |
| 0.4167 | 1.52 | 2500 | 0.7993 | 0.5102 |
| 0.3713 | 1.82 | 3000 | 0.7541 | 0.5437 |
| 0.3395 | 2.12 | 3500 | 0.7658 | 0.5148 |
| 0.2814 | 2.42 | 4000 | 0.7569 | 0.4783 |
| 0.2698 | 2.73 | 4500 | 0.8126 | 0.5174 |
| 0.2767 | 3.03 | 5000 | 0.7838 | 0.4676 |
| 0.2249 | 3.33 | 5500 | 0.8769 | 0.4743 |
| 0.2452 | 3.64 | 6000 | 0.8586 | 0.4778 |
| 0.1828 | 3.94 | 6500 | 0.7695 | 0.4528 |
| 0.1901 | 4.24 | 7000 | 0.7800 | 0.5021 |
| 0.2062 | 4.55 | 7500 | 0.8107 | 0.4567 |
| 0.1614 | 4.85 | 8000 | 0.7941 | 0.4094 |
| 0.1327 | 5.15 | 8500 | 0.7900 | 0.4241 |
| 0.1405 | 5.45 | 9000 | 0.8017 | 0.3992 |
| 0.1219 | 5.76 | 9500 | 0.8099 | 0.4043 |
| 0.1406 | 6.06 | 10000 | 0.8731 | 0.3913 |
| 0.0806 | 6.36 | 10500 | 0.8387 | 0.3868 |
| 0.1039 | 6.67 | 11000 | 0.8105 | 0.3905 |
| 0.0967 | 6.97 | 11500 | 0.7291 | 0.3728 |
| 0.0846 | 7.27 | 12000 | 0.8128 | 0.4201 |
| 0.0722 | 7.58 | 12500 | 0.8204 | 0.3751 |
| 0.0785 | 7.88 | 13000 | 0.7692 | 0.3760 |
| 0.0647 | 8.18 | 13500 | 0.8294 | 0.3752 |
| 0.0523 | 8.48 | 14000 | 0.7646 | 0.3763 |
| 0.0623 | 8.79 | 14500 | 0.7773 | 0.3572 |
| 0.0477 | 9.09 | 15000 | 0.7379 | 0.3635 |
| 0.064 | 9.39 | 15500 | 0.7544 | 0.3538 |
| 0.0321 | 9.7 | 16000 | 0.8118 | 0.3557 |
| 0.0541 | 10.0 | 16500 | 0.7785 | 0.3534 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
418b1e8e2dd7ce53cfe9e1ebc0b0e349
|
Kevincp560/distilbart-cnn-12-6-finetuned-pubmed
|
Kevincp560
|
bart
| 13 | 1 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['pub_med_summarization_dataset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,924 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-pubmed
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9895
- Rouge1: 40.0985
- Rouge2: 16.5016
- Rougel: 24.8319
- Rougelsum: 36.0775
- Gen Len: 141.884
## Model description
More information needed
## Intended uses & limitations
More information needed
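A minimal summarization sketch (assumed usage; replace the placeholder text with the full text of a biomedical article):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Kevincp560/distilbart-cnn-12-6-finetuned-pubmed")
article = "Replace this placeholder with the full text of a biomedical article."
print(summarizer(article, max_length=150, min_length=40, do_sample=False)[0]["summary_text"])
```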
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.1709 | 1.0 | 4000 | 2.0257 | 38.1012 | 15.112 | 23.4064 | 33.9373 | 141.9195 |
| 1.9495 | 2.0 | 8000 | 1.9593 | 39.529 | 16.1693 | 24.487 | 35.5238 | 141.9785 |
| 1.756 | 3.0 | 12000 | 1.9488 | 39.9623 | 16.5799 | 24.949 | 35.9194 | 141.8855 |
| 1.6032 | 4.0 | 16000 | 1.9732 | 39.672 | 16.1994 | 24.5996 | 35.7021 | 141.921 |
| 1.4817 | 5.0 | 20000 | 1.9895 | 40.0985 | 16.5016 | 24.8319 | 36.0775 | 141.884 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
4d0303a2e4933dedf9d1732052886cc4
|
Helsinki-NLP/opus-mt-fj-en
|
Helsinki-NLP
|
marian
| 10 | 75 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 803 | false |
### opus-mt-fj-en
* source languages: fj
* target languages: en
* OPUS readme: [fj-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fj-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fj-en/opus-2020-01-09.eval.txt)
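The converted weights can also be loaded through 🤗 Transformers; a minimal sketch (not part of the original OPUS-MT release notes):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fj-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bula vinaka!"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```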
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fj.en | 31.0 | 0.471 |
| Tatoeba.fj.en | 79.7 | 0.835 |
|
63e31b76b04add9c7bfae4d583a2b8c9
|
SzegedAI/hubertusz-tiny-wiki
|
SzegedAI
|
bert
| 9 | 22 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['hu']
|
['wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback', 'hubert']
| true | true | true | 594 | false |
# hubert-tiny-wiki
This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks.
### Pre-Training Parameters:
First phase:
- Training steps: 500.000
- Sequence length: 128
- Batch size: 1024
Second phase:
- Training steps: 100.000
- Sequence length: 512
- Batch size: 384
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
# Acknowledgement
[](https://mi.nemzetilabor.hu/)
|
adee919f420c0ec162f34625208885fe
|
Helsinki-NLP/opus-mt-srn-sv
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-srn-sv
* source languages: srn
* target languages: sv
* OPUS readme: [srn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.sv | 32.2 | 0.500 |
|
ae3d94d3278bb65560eaeaa41189067a
|
Tanvi2992/ddpm-butterflies-256
|
Tanvi2992
| null | 13 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['/content/AS/']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,205 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-256
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `/content/AS/` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumed usage, not from the original card): unconditional sampling with diffusers' DDPMPipeline
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Tanvi2992/ddpm-butterflies-256")
image = pipeline().images[0]  # a PIL image
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Tanvi2992/ddpm-butterflies-256/tensorboard?#scalars)
|
683a8a6f05e569251641f171d9c6879b
|
KoichiYasuoka/bert-base-slavic-cyrillic-upos
|
KoichiYasuoka
|
bert
| 9 | 76 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
|
['be', 'bg', 'mk', 'ru', 'sr', 'uk']
|
['universal_dependencies']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['belarusian', 'bulgarian', 'macedonian', 'russian', 'serbian', 'ukrainian', 'token-classification', 'pos', 'dependency-parsing']
| false | true | true | 1,146 | false |
# bert-base-slavic-cyrillic-upos
## Model Description
This is a BERT model pre-trained with Slavic-Cyrillic ([UD_Belarusian](https://universaldependencies.org/be/) [UD_Bulgarian](https://universaldependencies.org/bg/) [UD_Russian](https://universaldependencies.org/ru/) [UD_Serbian](https://universaldependencies.org/treebanks/sr_set/) [UD_Ukrainian](https://universaldependencies.org/treebanks/uk_iu/)) for POS-tagging and dependency-parsing, derived from [ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-slavic-cyrillic-upos")
```
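As a minimal tagging sketch (assuming the standard 🤗 Transformers token-classification pipeline; the example sentence is illustrative):
```py
from transformers import pipeline
nlp = pipeline("token-classification", model="KoichiYasuoka/bert-base-slavic-cyrillic-upos")
print(nlp("Москва слезам не верит"))  # each token with its predicted UPOS tag and score
```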
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
5d4d89b0a4a11b69868dbc1ac48197b7
|
Yagorka/ddpm-butterflies-256
|
Yagorka
| null | 22 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,201 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-256
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumed usage, not from the original card): unconditional sampling with diffusers' DDPMPipeline
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Yagorka/ddpm-butterflies-256")
image = pipeline().images[0]  # a PIL image
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-butterflies-256/tensorboard?#scalars)
|
6ff05779467669b2816035929fc54ffc
|
shpotes/xls-r-eus
|
shpotes
|
wav2vec2
| 34 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['eu']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'et', 'hf-asr-leaderboard']
| true | true | true | 2,720 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-eus
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - EU dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2278
- Wer: 0.1787
## Model description
More information needed
## Intended uses & limitations
More information needed
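In the absence of further documentation, a minimal inference sketch (assuming the standard 🤗 Transformers ASR pipeline and 16 kHz input audio; the file path is a placeholder):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="shpotes/xls-r-eus")
print(asr("audio_16khz.wav")["text"])  # "audio_16khz.wav" is a placeholder path
```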
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 72
- eval_batch_size: 72
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2548 | 4.24 | 500 | 0.2470 | 0.3663 |
| 0.1435 | 8.47 | 1000 | 0.2000 | 0.2791 |
| 0.1158 | 12.71 | 1500 | 0.2030 | 0.2652 |
| 0.1094 | 16.95 | 2000 | 0.2096 | 0.2605 |
| 0.1004 | 21.19 | 2500 | 0.2150 | 0.2477 |
| 0.0945 | 25.42 | 3000 | 0.2072 | 0.2369 |
| 0.0844 | 29.66 | 3500 | 0.1981 | 0.2328 |
| 0.0877 | 33.89 | 4000 | 0.2041 | 0.2425 |
| 0.0741 | 38.14 | 4500 | 0.2353 | 0.2421 |
| 0.0676 | 42.37 | 5000 | 0.2092 | 0.2213 |
| 0.0623 | 46.61 | 5500 | 0.2217 | 0.2250 |
| 0.0574 | 50.84 | 6000 | 0.2152 | 0.2179 |
| 0.0583 | 55.08 | 6500 | 0.2207 | 0.2186 |
| 0.0488 | 59.32 | 7000 | 0.2225 | 0.2159 |
| 0.0456 | 63.56 | 7500 | 0.2293 | 0.2031 |
| 0.041 | 67.79 | 8000 | 0.2277 | 0.2013 |
| 0.0379 | 72.03 | 8500 | 0.2287 | 0.1991 |
| 0.0381 | 76.27 | 9000 | 0.2233 | 0.1954 |
| 0.0308 | 80.51 | 9500 | 0.2195 | 0.1835 |
| 0.0291 | 84.74 | 10000 | 0.2266 | 0.1825 |
| 0.0266 | 88.98 | 10500 | 0.2285 | 0.1801 |
| 0.0266 | 93.22 | 11000 | 0.2292 | 0.1801 |
| 0.0262 | 97.46 | 11500 | 0.2278 | 0.1788 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
6a1d786a5c2ffb99b23cc9f967cc7689
|
Chrispfield/distilbert-base-uncased-issues-128
|
Chrispfield
|
distilbert
| 10 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,476 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-issues-128
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7582
## Model description
More information needed
## Intended uses & limitations
More information needed
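In the absence of further documentation, a minimal fill-mask sketch (assuming the standard 🤗 Transformers pipeline; the example text is illustrative):
```python
from transformers import pipeline
unmasker = pipeline("fill-mask", model="Chrispfield/distilbert-base-uncased-issues-128")
print(unmasker("This issue is related to the [MASK] module."))  # top candidate tokens with scores
```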
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4041 | 1.0 | 8 | 1.8568 |
| 2.1982 | 2.0 | 16 | 2.0790 |
| 1.7184 | 3.0 | 24 | 1.9246 |
| 1.7248 | 4.0 | 32 | 1.8485 |
| 1.5016 | 5.0 | 40 | 1.8484 |
| 1.4943 | 6.0 | 48 | 1.8691 |
| 1.526 | 7.0 | 56 | 1.7582 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
a2a5858fe067420a7778aa659b66d951
|
Helsinki-NLP/opus-mt-de-da
|
Helsinki-NLP
|
marian
| 10 | 169 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-de-da
* source languages: de
* target languages: da
* OPUS readme: [de-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-29.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.zip)
* test set translations: [opus-2020-01-29.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.test.txt)
* test set scores: [opus-2020-01-29.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-da/opus-2020-01-29.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.da | 57.2 | 0.730 |
|
31744148a59b05ac66066ce78d875102
|
henryscheible/eval_masked_v4_cola
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,022 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_v4_cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6890
- Matthews Correlation: 0.5551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
6d306490e924c42dce595893e62b6327
|
gngpostalsrvc/BERiT_2000_custom_architecture_20_epochs
|
gngpostalsrvc
|
roberta
| 11 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 6,456 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_20_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 16.4316 | 0.19 | 500 | 9.0685 |
| 8.2958 | 0.39 | 1000 | 7.6483 |
| 7.4324 | 0.58 | 1500 | 7.1707 |
| 7.0054 | 0.77 | 2000 | 6.8592 |
| 6.8522 | 0.97 | 2500 | 6.7710 |
| 6.7538 | 1.16 | 3000 | 6.5845 |
| 6.634 | 1.36 | 3500 | 6.4525 |
| 6.5784 | 1.55 | 4000 | 6.3129 |
| 6.5135 | 1.74 | 4500 | 6.3312 |
| 6.4552 | 1.94 | 5000 | 6.2546 |
| 6.4685 | 2.13 | 5500 | 6.2857 |
| 6.4356 | 2.32 | 6000 | 6.2285 |
| 6.3566 | 2.52 | 6500 | 6.2295 |
| 6.394 | 2.71 | 7000 | 6.1790 |
| 6.3412 | 2.9 | 7500 | 6.1880 |
| 6.3115 | 3.1 | 8000 | 6.2130 |
| 6.3163 | 3.29 | 8500 | 6.1831 |
| 6.2978 | 3.49 | 9000 | 6.1945 |
| 6.3082 | 3.68 | 9500 | 6.1485 |
| 6.2729 | 3.87 | 10000 | 6.1752 |
| 6.307 | 4.07 | 10500 | 6.1331 |
| 6.2494 | 4.26 | 11000 | 6.1082 |
| 6.2523 | 4.45 | 11500 | 6.2110 |
| 6.2455 | 4.65 | 12000 | 6.1326 |
| 6.2399 | 4.84 | 12500 | 6.1779 |
| 6.2297 | 5.03 | 13000 | 6.1587 |
| 6.2374 | 5.23 | 13500 | 6.1458 |
| 6.2265 | 5.42 | 14000 | 6.1370 |
| 6.2222 | 5.62 | 14500 | 6.1511 |
| 6.2209 | 5.81 | 15000 | 6.1320 |
| 6.2146 | 6.0 | 15500 | 6.1124 |
| 6.214 | 6.2 | 16000 | 6.1439 |
| 6.1907 | 6.39 | 16500 | 6.0981 |
| 6.2119 | 6.58 | 17000 | 6.1465 |
| 6.1858 | 6.78 | 17500 | 6.1594 |
| 6.1552 | 6.97 | 18000 | 6.0742 |
| 6.1926 | 7.16 | 18500 | 6.1176 |
| 6.1813 | 7.36 | 19000 | 6.0107 |
| 6.1812 | 7.55 | 19500 | 6.0852 |
| 6.1852 | 7.75 | 20000 | 6.0845 |
| 6.1945 | 7.94 | 20500 | 6.1260 |
| 6.1542 | 8.13 | 21000 | 6.1032 |
| 6.1685 | 8.33 | 21500 | 6.0650 |
| 6.1619 | 8.52 | 22000 | 6.1028 |
| 6.1279 | 8.71 | 22500 | 6.1269 |
| 6.1575 | 8.91 | 23000 | 6.0793 |
| 6.1401 | 9.1 | 23500 | 6.1479 |
| 6.159 | 9.3 | 24000 | 6.0319 |
| 6.1227 | 9.49 | 24500 | 6.0677 |
| 6.1201 | 9.68 | 25000 | 6.0527 |
| 6.1473 | 9.88 | 25500 | 6.1305 |
| 6.1539 | 10.07 | 26000 | 6.1079 |
| 6.091 | 10.26 | 26500 | 6.1219 |
| 6.1015 | 10.46 | 27000 | 6.1317 |
| 6.1048 | 10.65 | 27500 | 6.1149 |
| 6.0955 | 10.84 | 28000 | 6.1216 |
| 6.129 | 11.04 | 28500 | 6.0427 |
| 6.1007 | 11.23 | 29000 | 6.1289 |
| 6.1266 | 11.43 | 29500 | 6.0564 |
| 6.1203 | 11.62 | 30000 | 6.1143 |
| 6.1038 | 11.81 | 30500 | 6.0957 |
| 6.0989 | 12.01 | 31000 | 6.0707 |
| 6.0571 | 12.2 | 31500 | 6.0013 |
| 6.1017 | 12.39 | 32000 | 6.1356 |
| 6.0649 | 12.59 | 32500 | 6.0981 |
| 6.0704 | 12.78 | 33000 | 6.0588 |
| 6.088 | 12.97 | 33500 | 6.0796 |
| 6.1112 | 13.17 | 34000 | 6.0809 |
| 6.0888 | 13.36 | 34500 | 6.0776 |
| 6.0482 | 13.56 | 35000 | 6.0710 |
| 6.0588 | 13.75 | 35500 | 6.0877 |
| 6.0517 | 13.94 | 36000 | 6.0650 |
| 6.0832 | 14.14 | 36500 | 5.9890 |
| 6.0655 | 14.33 | 37000 | 6.0445 |
| 6.0705 | 14.52 | 37500 | 6.0037 |
| 6.0789 | 14.72 | 38000 | 6.0777 |
| 6.0645 | 14.91 | 38500 | 6.0475 |
| 6.0347 | 15.1 | 39000 | 6.1148 |
| 6.0478 | 15.3 | 39500 | 6.0639 |
| 6.0638 | 15.49 | 40000 | 6.0373 |
| 6.0377 | 15.69 | 40500 | 6.0116 |
| 6.0402 | 15.88 | 41000 | 6.0483 |
| 6.0382 | 16.07 | 41500 | 6.1025 |
| 6.039 | 16.27 | 42000 | 6.0488 |
| 6.0232 | 16.46 | 42500 | 6.0219 |
| 5.9946 | 16.65 | 43000 | 6.0541 |
| 6.063 | 16.85 | 43500 | 6.0436 |
| 6.0141 | 17.04 | 44000 | 6.0609 |
| 6.0196 | 17.23 | 44500 | 6.0551 |
| 6.0331 | 17.43 | 45000 | 6.0576 |
| 6.0174 | 17.62 | 45500 | 6.0498 |
| 6.0366 | 17.82 | 46000 | 6.0782 |
| 6.0299 | 18.01 | 46500 | 6.0196 |
| 6.0009 | 18.2 | 47000 | 6.0262 |
| 5.9758 | 18.4 | 47500 | 6.0824 |
| 6.0285 | 18.59 | 48000 | 6.0799 |
| 6.025 | 18.78 | 48500 | 5.9511 |
| 5.9806 | 18.98 | 49000 | 6.0086 |
| 5.9915 | 19.17 | 49500 | 6.0089 |
| 5.9957 | 19.36 | 50000 | 6.0330 |
| 6.0311 | 19.56 | 50500 | 6.0083 |
| 5.995 | 19.75 | 51000 | 6.0394 |
| 6.0034 | 19.95 | 51500 | 5.9854 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
91622576b181cfdc1af7eb599b733b53
|
Helsinki-NLP/opus-mt-tl-en
|
Helsinki-NLP
|
marian
| 11 | 1,126 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['tl', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,996 | false |
### tgl-eng
* source group: Tagalog
* target group: English
* OPUS readme: [tgl-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.eng | 35.0 | 0.542 |
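A minimal translation sketch (assuming the standard 🤗 Transformers Marian API; the example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-tl-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Magandang umaga."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))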
### System Info:
- hf_name: tgl-eng
- source_languages: tgl
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'en']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: eng
- short_pair: tl-en
- chrF2_score: 0.542
- bleu: 35.0
- brevity_penalty: 0.975
- ref_len: 18168.0
- src_name: Tagalog
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: en
- prefer_old: False
- long_pair: tgl-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
1c69f3e458150fa98263179339725d99
|
SirVeggie/Aeolian
|
SirVeggie
| null | 6 | 0 | null | 2 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,547 | false |
# Aeolian stable diffusion model
Original artist: WLOP\
Patreon: https://www.patreon.com/wlop/posts
An original character created and drawn by WLOP for his webcomic Ghostblade.
## Basic explanation
Token and Class words are what guide the AI to produce images similar to the trained style/object/character.
Include any mix of these words in the prompt to produce varying results, or exclude them for a less pronounced effect.
There is usually at least a slight stylistic effect even without the words, but it is recommended to include at least one.
Adding the token word/phrase followed by the class word/phrase at the start of the prompt produces results most similar to the trained concept, but they can be included elsewhere as well. Some models produce better results when not all token/class words are included.
3k models are more flexible, while 5k models produce images closer to the trained concept.
I recommend 2k/3k models for normal use, and 5k/6k models for model merging and use without token/class words.
However, it can also be very prompt-specific, so I highly recommend self-experimentation.
## Comparison
Aeolian and aeolian_3000 are quite similar with slight differences.
Epoch 5 and 6 versions were earlier in the waifu diffusion 1.3 training process, so it is easier to produce more varied, non-anime results.
## aeolian
```
token: m_aeolian
class: §¶•
base: waifu diffusion 1.2-e5
notes: 2020 step training
```
## aeolian_3000
```
token: m_aeolian
class: §¶•
base: waifu diffusion 1.2-e6
notes: 3000 step training
```
## aeolian_v2
```
token: m_concept
class: §
base: waifu diffusion 1.3
notes: 1.3 model, which may give some benefits over 1.2-e5
```
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
5568ad1e9e512cbebb0df9783a514218
|
quadpartisan/ddpm-butterflies-128
|
quadpartisan
| null | 11 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,234 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumed usage, not from the original card): unconditional sampling with diffusers' DDPMPipeline
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("quadpartisan/ddpm-butterflies-128")
image = pipeline().images[0]  # a PIL image
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/quadpartisan/ddpm-butterflies-128/tensorboard?#scalars)
|
c073681e251099deba2fde9a73d0784e
|
eicu/fastbooth-jsjessy-1200
|
eicu
| null | 18 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 428 | false |
### fastbooth-jsjessy-1200 Dreambooth model trained by eicu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
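A minimal generation sketch (assuming the repository hosts standard Stable Diffusion weights loadable with 🤗 Diffusers; the prompt and the "jsjessy" instance token are assumptions for illustration):
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("eicu/fastbooth-jsjessy-1200")
image = pipe("a photo of jsjessy person").images[0]  # "jsjessy" is an assumed instance token
image.save("sample.png")
```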
Sample pictures of this concept:
|
332e4bafcd18669eecadb72302985226
|
tj-solergibert/xlm-roberta-base-finetuned-panx-de-fr
|
tj-solergibert
|
xlm-roberta
| 9 | 4 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
48244f82e9ab71ae36c406f0725457cd
|
jonatasgrosman/exp_w2v2t_fr_unispeech-sat_s655
|
jonatasgrosman
|
unispeech-sat
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 463 | false |
# exp_w2v2t_fr_unispeech-sat_s655
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
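A minimal transcription sketch (assuming the HuggingSound API referenced above; the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fr_unispeech-sat_s655")
transcriptions = model.transcribe(["/path/to/audio_1.mp3", "/path/to/audio_2.wav"])  # placeholder paths
```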
|
4ad5354773f5c4e9d2c1116411e49085
|
north-snocko/donut-base-sroie
|
north-snocko
|
vision-encoder-decoder
| 20 | 3 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 981 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.9.0
- Tokenizers 0.12.1
|
2a2551f792d4c35cee9b85b42f49a805
|
jonatasgrosman/exp_w2v2t_ru_vp-es_s664
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ru']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ru']
| false | true | true | 469 | false |
# exp_w2v2t_ru_vp-es_s664
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
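A minimal transcription sketch (assuming the HuggingSound API referenced above; the audio path is a placeholder):
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_vp-es_s664")
transcriptions = model.transcribe(["/path/to/sample.wav"])  # placeholder path
```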
|
b8b2ad889e9c5176bfbea4a31a6cc8b8
|
shed-e/NER
|
shed-e
|
bert
| 12 | 4 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Precision: 0.9335
- Recall: 0.9500
- F1: 0.9417
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
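In the absence of further documentation, a minimal inference sketch (assuming the standard 🤗 Transformers NER pipeline; the example sentence is illustrative):
```python
from transformers import pipeline
ner = pipeline("token-classification", model="shed-e/NER", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))  # grouped entity spans with labels and scores
```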
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0888 | 1.0 | 1756 | 0.0636 | 0.9195 | 0.9366 | 0.9280 | 0.9830 |
| 0.0331 | 2.0 | 3512 | 0.0667 | 0.9272 | 0.9490 | 0.9380 | 0.9855 |
| 0.0167 | 3.0 | 5268 | 0.0637 | 0.9335 | 0.9500 | 0.9417 | 0.9862 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
8e579641581661e1061ad661339427c7
|
TestZee/t5-small-finetuned-pytorch-test
|
TestZee
|
t5
| 11 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,624 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-pytorch-test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1006
- Rouge1: 22.0585
- Rouge2: 9.4908
- Rougel: 18.3044
- Rougelsum: 20.9764
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
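In the absence of further documentation, and assuming a summarization-style use (suggested by the ROUGE metrics above), a minimal sketch with the 🤗 Transformers pipeline:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="TestZee/t5-small-finetuned-pytorch-test")
print(summarizer("Long input text to condense goes here."))  # the input text is a placeholder
```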
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 15 | 2.1859 | 21.551 | 8.7109 | 18.07 | 20.2469 | 19.0 |
| No log | 2.0 | 30 | 2.1194 | 22.348 | 9.6498 | 18.7701 | 21.1714 | 19.0 |
| No log | 3.0 | 45 | 2.1006 | 22.0585 | 9.4908 | 18.3044 | 20.9764 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
2354dc2f33b9f373b2350673773ca311
|
bakhuisdennis/donut-base-mysterybox
|
bakhuisdennis
|
vision-encoder-decoder
| 12 | 2 |
transformers
| 0 | null | true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,007 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-mysterybox
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
dc04f1556d017d0c72f0f1c9699bd258
|
Narshion/bert-base-multilingual-cased-mwach
|
Narshion
| null | 13 | 2 | null | 0 | null | true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,004 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-mlm
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
5f6e980cb1b9e3022b30c72ae51aa45b
|
EleutherAI/pythia-160m
|
EleutherAI
|
gpt_neox
| 7 | 43,164 |
transformers
| 3 |
text-generation
| true | false | false |
apache-2.0
|
['en']
|
['the_pile']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['pytorch', 'causal-lm', 'pythia']
| false | true | true | 10,803 | false |
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/EleutherAI).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models match or exceed the performance of similar and same-sized models,
such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were re-named in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact model parameter counts.
## Pythia-160M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-160M for deployment, as long as your
use is in accordance with the Apache 2.0 license. Pythia models work with the
Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index).
If you decide to use pre-trained Pythia-160M as a basis for your
fine-tuned model, please conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product, and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because, unlike
this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human
instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-160M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
The Pile was **not** deduplicated before being used to train Pythia-160M.
#### Training procedure
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).
February 2023 note: select evaluations and comparison with OPT and BLOOM
models will be added here at a later date.
### Naming convention and parameter count
*Pythia* models were re-named in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
699004d674b94498745ea849e2b212d1
|
pcuenq/wav2vec2-large-xlsr-53-eu
|
pcuenq
|
wav2vec2
| 8 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['eu']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,865 | false |
# Wav2Vec2-Large-XLSR-53-EU
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Basque test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
model_name = "pcuenq/wav2vec2-large-xlsr-53-eu"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.to("cuda")
## Text pre-processing
chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)
def remove_special_characters(batch):
batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
return batch
## Audio pre-processing
import librosa
def speech_file_to_array_fn(batch):
speech_array, sample_rate = torchaudio.load(batch["path"])
batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
return batch
# Text transformation and audio resampling
def cv_prepare(batch):
batch = remove_special_characters(batch)
batch = speech_file_to_array_fn(batch)
return batch
# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
# WER Metric computation
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 15.34 %
## Training
The Common Voice `train` and `validation` datasets were used for training. Training was performed for 22 + 20 epochs with the following parameters:
- Batch size 16, 2 gradient accumulation steps.
- Learning rate: 2.5e-4
- Activation dropout: 0.05
- Attention dropout: 0.1
- Hidden dropout: 0.05
- Feature proj. dropout: 0.05
- Mask time probability: 0.08
- Layer dropout: 0.05
|
736713c4f5a62e5d3233829999b3c364
|
coreml/coreml-elldreths-og-4060-mix
|
coreml
| null | 3 | 0 | null | 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coreml', 'stable-diffusion', 'text-to-image']
| false | true | true | 1,175 | false |
# Core ML Converted Model:
- This model was converted to Core ML for use on Apple Silicon devices. Instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-files-to-Core-ML).<br>
- Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
# Note: This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
# Elldreth's OG 4060 mix:
Source(s): [CivitAI](https://civitai.com/models/1259/elldreths-og-4060-mix)
This mixed model is a combination of my all-time favorites: a genuinely simple mix of a very popular anime model and Zeipher's powerful f222.
What's it good at?
- Realistic portraits
- Stylized characters
- Landscapes
- Fantasy
- Sci-Fi
- Anime
- Horror
It's an all-around, easy-to-prompt, general-purpose semi-realistic to realistic model that cranks out some really nice images. No trigger words required. All models were scanned prior to mixing and are totally safe.
|
221fe9205994302666cbfe23e372b66e
|
agnesluhtaru/whisper-medium-et
|
agnesluhtaru
|
whisper
| 15 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'whisper-event']
| true | true | true | 966 | false |
# whisper-medium-et
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the following datasets: Common Voice 11, VoxPopuli and FLEURS.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Estonian data from Common Voice 11, VoxPopuli and FLEURS corpora as both training and validation sets. Tested on Common Voice 11 test set.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+rocm5.1.1
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
cf85b0ea6a7f9344b59537625e9ccebd
|
jkhan447/language-detection-Bert-base-uncased
|
jkhan447
|
bert
| 28 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,028 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-Bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2231
- Accuracy: 0.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
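In the absence of further documentation, a minimal inference sketch (assuming the standard 🤗 Transformers text-classification pipeline; the example sentence is illustrative):
```python
from transformers import pipeline
clf = pipeline("text-classification", model="jkhan447/language-detection-Bert-base-uncased")
print(clf("Bonjour tout le monde"))  # predicted language label (as configured in the model) with score
```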
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
a36659a1bbf45ebe2dd559cef3a02dc6
|
lmqg/mt5-small-itquad-qg-ae
|
lmqg
|
mt5
| 40 | 129 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['it']
|
['lmqg/qg_itquad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation', 'answer extraction']
| true | true | true | 7,340 | false |
# Model Card of `lmqg/mt5-small-itquad-qg-ae`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation and answer extraction jointly on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** it
- **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="it", model="lmqg/mt5-small-itquad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-qg-ae")
# question generation
question = pipe("generate question: <hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
# answer extraction
answer = pipe("extract answers: <hl> Il 6 ottobre 1973 , la Siria e l' Egitto, con il sostegno di altre nazioni arabe, lanciarono un attacco a sorpresa su Israele, su Yom Kippur. <hl> Questo rinnovo delle ostilità nel conflitto arabo-israeliano ha liberato la pressione economica sottostante sui prezzi del petrolio. All' epoca, l' Iran era il secondo esportatore mondiale di petrolio e un vicino alleato degli Stati Uniti. Settimane più tardi, lo scià d' Iran ha detto in un' intervista: Naturalmente[il prezzo del petrolio] sta andando a salire Certamente! E come! Avete[Paesi occidentali] aumentato il prezzo del grano che ci vendete del 300 per cento, e lo stesso per zucchero e cemento.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 80.61 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1 | 22.53 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2 | 14.75 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3 | 10.19 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4 | 7.25 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR | 17.5 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore | 56.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L | 21.84 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 81.81 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedF1Score (MoverScore) | 56.02 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (BERTScore) | 81.17 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedPrecision (MoverScore) | 55.76 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (BERTScore) | 82.51 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| QAAlignedRecall (MoverScore) | 56.32 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_itquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 57.85 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| AnswerF1Score | 72.09 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| BERTScore | 90.24 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_1 | 39.33 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_2 | 33.64 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_3 | 29.59 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| Bleu_4 | 26.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| METEOR | 42.68 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| MoverScore | 81.17 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
| ROUGE_L | 45.15 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_itquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 13
- batch: 16
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-itquad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
58688b89cf9ad1c15e17e3b93a1c305f
|