repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2t_id_vp-100k_s842
|
jonatasgrosman
|
wav2vec2
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['id']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'id']
| false | true | true | 475 | false |
# exp_w2v2t_id_vp-100k_s842
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
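The card does not include a usage snippet; here is a minimal sketch (not from the original card) that runs the model through the `transformers` ASR pipeline. The audio file name is a placeholder and must point to a 16 kHz recording:
```python
from transformers import pipeline

# Minimal ASR sketch; "sample_id.wav" is a hypothetical local file sampled at 16 kHz.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_id_vp-100k_s842")
print(asr("sample_id.wav")["text"])
```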
|
a8c90ba73f52e898dd703a5480ecf56f
|
vuiseng9/roberta-l-squadv1.1
|
vuiseng9
|
roberta
| 15 | 13 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,067 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run05-roberta-large-squadv1.1-sl384-ds128-e2-tbs16
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
# Train
```bash
python run_qa.py \
--model_name_or_path roberta-large \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 500 \
--learning_rate 3e-5 \
--fp16 \
--num_train_epochs 2 \
--per_device_eval_batch_size 64 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 1000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
MODEL=vuiseng9/roberta-l-squadv1.1
OUTDIR=eval-$(basename $MODEL)
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path $MODEL \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
```bash
eval_exact_match = 88.4674
eval_f1 = 94.3001
eval_samples = 10790
```
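For completeness, a minimal inference sketch (not part of the original card) using the `question-answering` pipeline; the question and context are made-up examples:
```python
from transformers import pipeline

# Hypothetical example inputs; any SQuAD-style question/context pair works.
qa = pipeline("question-answering", model="vuiseng9/roberta-l-squadv1.1")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-large on the SQuAD v1.1 dataset.",
)
print(result["answer"], result["score"])
```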
|
737aa01dfd4a8c4993c2bf21a01a682f
|
jonatasgrosman/exp_w2v2t_ru_xlsr-53_s303
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ru']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ru']
| false | true | true | 461 | false |
# exp_w2v2t_ru_xlsr-53_s303
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
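Since the model was trained with HuggingSound, a minimal transcription sketch with that library may be the most direct route; this snippet is not from the original card and the audio path is a placeholder (16 kHz input expected):
```python
from huggingsound import SpeechRecognitionModel

# "sample_ru.wav" is a hypothetical local recording sampled at 16 kHz.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_xlsr-53_s303")
transcriptions = model.transcribe(["sample_ru.wav"])
print(transcriptions[0]["transcription"])
```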
|
c131f3deda8ec6583b70c268e1d765c8
|
Helsinki-NLP/opus-mt-es-ru
|
Helsinki-NLP
|
marian
| 10 | 1,062 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 850 | false |
### opus-mt-es-ru
* source languages: es
* target languages: ru
* OPUS readme: [es-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ru/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.es.ru | 20.9 | 0.489 |
| newstest2013.es.ru | 23.4 | 0.504 |
| Tatoeba.es.ru | 47.0 | 0.657 |
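The card lists benchmarks but no usage snippet; a minimal sketch (not from the original card) with the `translation` pipeline:
```python
from transformers import pipeline

# Spanish-to-Russian translation; the input sentence is an arbitrary example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-ru")
print(translator("La vida es bella.")[0]["translation_text"])
```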
|
2634f7d46333dc4e9ce52e7d65db7265
|
sanjin7/distilbert-base-uncased_proba
|
sanjin7
|
distilbert
| 6 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 923 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_proba
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.14.0.dev20221202
- Datasets 2.7.1
- Tokenizers 0.13.2
|
2b6b03a012aef3c4da75943a107c2289
|
symons/finetuning-sentiment-model-3000-samples
|
symons
|
distilbert
| 16 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['rotten_tomatoes_movie_review']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,079 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes_movie_review dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8692
- Accuracy: 0.8433
- F1: 0.8407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
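As a quick illustration (not part of the original card), a minimal sketch that scores a review with the `text-classification` pipeline; the example sentence is made up, and the labels may appear as generic `LABEL_0`/`LABEL_1` depending on the uploaded config:
```python
from transformers import pipeline

# Hypothetical movie-review input; label names depend on the model's config.
classifier = pipeline("text-classification", model="symons/finetuning-sentiment-model-3000-samples")
print(classifier("A genuinely funny and heartfelt movie."))
```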
|
ca04d26adbc4b9717dcfe8ab7e69ed88
|
TahaRazzaq/wav2vec2-base-urdu-demo-colab
|
TahaRazzaq
|
wav2vec2
| 22 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,032 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-urdu-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
def33ebf38b994b9291b04e34309666a
|
Siyong/MC
|
Siyong
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,799 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-base-All
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0545
- Wer: 0.8861
- Cer: 0.5014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| No log | 3.33 | 500 | 4.0654 | 1.0 | 0.9823 |
| No log | 6.67 | 1000 | 3.4532 | 1.0 | 0.9823 |
| No log | 10.0 | 1500 | 3.0707 | 0.9992 | 0.9781 |
| No log | 13.33 | 2000 | 2.7335 | 1.0017 | 0.9027 |
| No log | 16.67 | 2500 | 2.5896 | 1.0690 | 0.7302 |
| No log | 20.0 | 3000 | 2.3315 | 1.0690 | 0.6677 |
| No log | 23.33 | 3500 | 2.2217 | 1.0150 | 0.5966 |
| No log | 26.67 | 4000 | 2.3802 | 1.0549 | 0.5948 |
| No log | 30.0 | 4500 | 2.2208 | 0.9975 | 0.5681 |
| 2.4224 | 33.33 | 5000 | 2.2687 | 0.9800 | 0.5537 |
| 2.4224 | 36.67 | 5500 | 2.3169 | 0.9476 | 0.5493 |
| 2.4224 | 40.0 | 6000 | 2.5196 | 0.9900 | 0.5509 |
| 2.4224 | 43.33 | 6500 | 2.4816 | 0.9501 | 0.5272 |
| 2.4224 | 46.67 | 7000 | 2.4894 | 0.9485 | 0.5276 |
| 2.4224 | 50.0 | 7500 | 2.4555 | 0.9418 | 0.5305 |
| 2.4224 | 53.33 | 8000 | 2.7326 | 0.9559 | 0.5255 |
| 2.4224 | 56.67 | 8500 | 2.5514 | 0.9227 | 0.5209 |
| 2.4224 | 60.0 | 9000 | 2.9135 | 0.9717 | 0.5455 |
| 2.4224 | 63.33 | 9500 | 3.0465 | 0.8346 | 0.5002 |
| 0.8569 | 66.67 | 10000 | 2.8177 | 0.9302 | 0.5216 |
| 0.8569 | 70.0 | 10500 | 2.9908 | 0.9310 | 0.5128 |
| 0.8569 | 73.33 | 11000 | 3.1752 | 0.9235 | 0.5284 |
| 0.8569 | 76.67 | 11500 | 2.7412 | 0.8886 | 0.5 |
| 0.8569 | 80.0 | 12000 | 2.7362 | 0.9127 | 0.5040 |
| 0.8569 | 83.33 | 12500 | 2.9636 | 0.9152 | 0.5093 |
| 0.8569 | 86.67 | 13000 | 3.0139 | 0.9011 | 0.5097 |
| 0.8569 | 90.0 | 13500 | 2.8325 | 0.8853 | 0.5032 |
| 0.8569 | 93.33 | 14000 | 3.0383 | 0.8845 | 0.5056 |
| 0.8569 | 96.67 | 14500 | 2.7931 | 0.8795 | 0.4965 |
| 0.3881 | 100.0 | 15000 | 2.8972 | 0.8928 | 0.5012 |
| 0.3881 | 103.33 | 15500 | 2.7780 | 0.8736 | 0.4947 |
| 0.3881 | 106.67 | 16000 | 3.1081 | 0.9036 | 0.5109 |
| 0.3881 | 110.0 | 16500 | 3.0078 | 0.8928 | 0.5032 |
| 0.3881 | 113.33 | 17000 | 3.0245 | 0.8886 | 0.5009 |
| 0.3881 | 116.67 | 17500 | 3.0739 | 0.8928 | 0.5065 |
| 0.3881 | 120.0 | 18000 | 3.0545 | 0.8861 | 0.5014 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
f36693a2957bdf3c18be84742446fc3f
|
peterhsu/marian-finetuned-kde4-en-to-zh_TW-accelerate
|
peterhsu
|
marian
| 9 | 5 |
transformers
| 0 |
translation
| true | false | false |
apache-2.0
| null |
['kde4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| true | true | true | 930 | false |
# marian-finetuned-kde4-en-to-zh_TW-accelerate
## Model description
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Bleu: 40.70
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
98702876c67ba8fc1bb5e5501f6f7678
|
HanSSH/mt5-small-finetuned-amazon-en-es
|
HanSSH
|
mt5
| 17 | 1 |
transformers
| 0 |
text2text-generation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,484 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HanSSH/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2684
- Validation Loss: 3.2288
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.00056, 'decay_steps': 4836, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.8144 | 3.5283 | 0 |
| 3.8758 | 3.2971 | 1 |
| 3.4741 | 3.2452 | 2 |
| 3.2684 | 3.2288 | 3 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
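The card does not show inference code; a minimal sketch (not from the original card) using the `summarization` pipeline, assuming the model was trained to summarize Amazon reviews as its name suggests; the review text and generation lengths are invented:
```python
from transformers import pipeline

# Hypothetical review text; max_length/min_length are arbitrary choices.
summarizer = pipeline("summarization", model="HanSSH/mt5-small-finetuned-amazon-en-es")
review = ("I bought this coffee grinder a month ago and it still works perfectly. "
          "It is quiet, easy to clean, and the grind size is consistent.")
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```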
|
1dec1da82f003552ea7f9d2264a9e6a4
|
Fulccrum/distilbert-base-uncased-finetuned-sst2
|
Fulccrum
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,482 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3739
- Accuracy: 0.9128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1885 | 1.0 | 4210 | 0.3092 | 0.9083 |
| 0.1311 | 2.0 | 8420 | 0.3809 | 0.9071 |
| 0.1036 | 3.0 | 12630 | 0.3739 | 0.9128 |
| 0.0629 | 4.0 | 16840 | 0.4623 | 0.9083 |
| 0.036 | 5.0 | 21050 | 0.5198 | 0.9048 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
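As a rough illustration (not part of the original card), the hyperparameters listed above map approximately onto the following `TrainingArguments`; the output directory and evaluation strategy are assumptions:
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; the Adam betas/epsilon shown in the card are the defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sst2",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: the results table has one row per epoch
)
```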
|
a519a67868991c9874db057fb9c9abaa
|
TransQuest/siamesetransquest-da-ro_en-wiki
|
TransQuest
|
xlm-roberta
| 12 | 12 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
|
['ro-en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['Quality Estimation', 'siamesetransquest', 'da']
| false | true | true | 5,243 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task at [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available on [Hugging Face](https://huggingface.co/TransQuest).
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
import torch
from transquest.algo.sentence_level.siamesetransquest.run_model import SiameseTransQuestModel
model = SiameseTransQuestModel("TransQuest/siamesetransquest-da-ro_en-wiki")
predictions = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]])
print(predictions)
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted to [ACL 2021](https://2021.aclweb.org/).
```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
f4d0d525b09f559bd0c46c7e5fb941c7
|
pritamdeka/PubMedBert-abstract-cord19-v2
|
pritamdeka
|
bert
| 13 | 7 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null |
['pritamdeka/cord-19-abstract']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,842 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBert-abstract-cord19-v2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [pritamdeka/cord-19-abstract](https://huggingface.co/datasets/pritamdeka/cord-19-abstract) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2371
- Accuracy: 0.7247
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.27 | 0.53 | 5000 | 1.2425 | 0.7236 |
| 1.2634 | 1.06 | 10000 | 1.3123 | 0.7141 |
| 1.3041 | 1.59 | 15000 | 1.3583 | 0.7072 |
| 1.3829 | 2.12 | 20000 | 1.3590 | 0.7121 |
| 1.3069 | 2.65 | 25000 | 1.3506 | 0.7154 |
| 1.2921 | 3.18 | 30000 | 1.3448 | 0.7160 |
| 1.2731 | 3.7 | 35000 | 1.3375 | 0.7178 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
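A minimal fill-mask sketch (not from the original card); the sentence is an invented biomedical example and the model uses BERT-style `[MASK]` tokens:
```python
from transformers import pipeline

# Hypothetical masked sentence; prints the top predicted tokens with their scores.
fill = pipeline("fill-mask", model="pritamdeka/PubMedBert-abstract-cord19-v2")
for pred in fill("The patient was treated with a broad-spectrum [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```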
|
972bdde3c53ff80f8a096f1ce9919934
|
HPL/roberta-large-unlabeled-gab-reddit-semeval2023-task10-57000sample
|
HPL
|
roberta
| 11 | 1 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,340 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-unlabeled-gab-reddit-semeval2023-task10-57000sample
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1999 | 1.0 | 3563 | 2.0576 |
| 2.0587 | 2.0 | 7126 | 1.9371 |
| 1.9591 | 3.0 | 10689 | 1.8823 |
| 1.8652 | 4.0 | 14252 | 1.8874 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.10.3
|
80c3147fec31aa87ca98fde1fdb610ec
|
google/tapas-large-finetuned-wtq
|
google
|
tapas
| 8 | 2,913 |
transformers
| 18 |
table-question-answering
| true | true | false |
apache-2.0
|
['en']
|
['wikitablequestions']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tapas', 'table-question-answering']
| false | true | true | 7,108 | false |
# TAPAS large model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained with MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_large` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
**LARGE** | **noreset** | **0.5062** | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
**LARGE** | **reset** | **0.5097** | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and an aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
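In the spirit of that documentation, a minimal sketch (not part of the original card) using the `table-question-answering` pipeline; the table and question are made up, and older `transformers` releases may additionally require `torch-scatter` for TAPAS:
```python
from transformers import pipeline

# Toy table (all cell values must be strings) and an invented question.
tqa = pipeline("table-question-answering", model="google/tapas-large-finetuned-wtq")
table = {
    "City": ["Paris", "London", "Berlin"],
    "Population (millions)": ["2.1", "8.9", "3.6"],
}
print(tqa(table=table, query="Which city has the largest population?"))
```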
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
a8355557fc6795e0b5c11791007438ff
|
gokuls/mobilebert_add_GLUE_Experiment_mnli
|
gokuls
|
mobilebert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,840 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_mnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0985
- Accuracy: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0988 | 1.0 | 3068 | 1.0988 | 0.3182 |
| 1.0987 | 2.0 | 6136 | 1.0986 | 0.3184 |
| 1.0987 | 3.0 | 9204 | 1.0989 | 0.3274 |
| 1.0987 | 4.0 | 12272 | 1.0987 | 0.3182 |
| 1.0987 | 5.0 | 15340 | 1.0984 | 0.3545 |
| 1.0986 | 6.0 | 18408 | 1.0987 | 0.3274 |
| 1.0986 | 7.0 | 21476 | 1.0993 | 0.3274 |
| 1.0986 | 8.0 | 24544 | 1.0985 | 0.3545 |
| 1.0986 | 9.0 | 27612 | 1.0985 | 0.3545 |
| 1.0986 | 10.0 | 30680 | 1.0987 | 0.3182 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
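For reference, a minimal sentence-pair inference sketch (not from the original card); the premise/hypothesis pair is invented and, given the ~35% accuracy reported above, the prediction should be treated as illustrative only:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokuls/mobilebert_add_GLUE_Experiment_mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MNLI takes a (premise, hypothesis) pair; these sentences are arbitrary examples.
inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```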
|
bfe46f2adc252e888eb687c973b04f39
|
stanfordnlp/corenlp-english-extra
|
stanfordnlp
| null | 3 | 0 | null | 0 | null | false | false | false |
gpl-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['corenlp']
| false | true | true | 666 | false |
# Core NLP model for english-extra
CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.
Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).
This card and repo were automatically prepared with `hugging_corenlp.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-01-21 01:36:25.611
|
40b71a23e28aa21b0dfafab4afd2fd6c
|
spooncats/lacroix-can-plus-van-gogh
|
spooncats
| null | 19 | 8 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 740 | false |
### lacroix_can_plus_van_gogh Dreambooth model trained by spooncats with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:

|
3536b16cf5332ec18e7a8918522a616a
|
lmqg/mt5-base-jaquad-ae
|
lmqg
|
mt5
| 13 | 72 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['ja']
|
['lmqg/qg_jaquad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['answer extraction']
| true | true | true | 4,385 | false |
# Model Card of `lmqg/mt5-base-jaquad-ae`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="lmqg/mt5-base-jaquad-ae")
# model prediction
answers = model.generate_a("フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-jaquad-ae")
output = pipe("『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。<hl>前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる3万5000部が刷られた。<hl>他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。")
```
## Evaluation
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-jaquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 28.33 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| AnswerF1Score | 28.33 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| BERTScore | 77.33 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 33.75 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 30.74 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 28.29 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 26.48 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 25.61 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 64.96 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 35.58 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['answer']
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 9
- batch: 8
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-jaquad-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
0e6b148a4952ed50068c4302f1978f37
|
daekeun-ml/koelectra-small-v3-nsmc
|
daekeun-ml
|
electra
| 9 | 13 |
transformers
| 1 |
text-classification
| true | false | false |
mit
|
['ko']
|
['nsmc']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification']
| false | true | true | 4,575 | false |
# Sentiment Binary Classification (fine-tuning with KoELECTRA-Small-v3 model and Naver Sentiment Movie Corpus dataset)
## Usage (Amazon SageMaker inference applicable)
It uses the interface of the SageMaker Inference Toolkit as is, so it can be easily deployed to a SageMaker endpoint.
### inference_nsmc.py
```python
import json
import sys
import logging
import torch
from torch import nn
from transformers import ElectraConfig
from transformers import ElectraModel, AutoTokenizer, ElectraTokenizer, ElectraForSequenceClassification
logging.basicConfig(
level=logging.INFO,
format='[{%(filename)s:%(lineno)d} %(levelname)s - %(message)s',
handlers=[
logging.FileHandler(filename='tmp.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
max_seq_length = 128
classes = ['Neg', 'Pos']
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/koelectra-small-v3-nsmc")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def model_fn(model_path=None):
####
# If you have your own trained model
# Huggingface pre-trained model: 'monologg/koelectra-small-v3-discriminator'
####
#config = ElectraConfig.from_json_file(f'{model_path}/config.json')
#model = ElectraForSequenceClassification.from_pretrained(f'{model_path}/model.pth', config=config)
# Download model from the Huggingface hub
model = ElectraForSequenceClassification.from_pretrained('daekeun-ml/koelectra-small-v3-nsmc')
model.to(device)
return model
def input_fn(input_data, content_type="application/jsonlines"):
data_str = input_data.decode("utf-8")
jsonlines = data_str.split("\n")
transformed_inputs = []
for jsonline in jsonlines:
text = json.loads(jsonline)["text"][0]
logger.info("input text: {}".format(text))
encode_plus_token = tokenizer.encode_plus(
text,
max_length=max_seq_length,
add_special_tokens=True,
return_token_type_ids=False,
padding="max_length",
return_attention_mask=True,
return_tensors="pt",
truncation=True,
)
transformed_inputs.append(encode_plus_token)
return transformed_inputs
def predict_fn(transformed_inputs, model):
predicted_classes = []
for data in transformed_inputs:
data = data.to(device)
output = model(**data)
softmax_fn = nn.Softmax(dim=1)
softmax_output = softmax_fn(output[0])
_, prediction = torch.max(softmax_output, dim=1)
predicted_class_idx = prediction.item()
predicted_class = classes[predicted_class_idx]
score = softmax_output[0][predicted_class_idx]
logger.info("predicted_class: {}".format(predicted_class))
prediction_dict = {}
prediction_dict["predicted_label"] = predicted_class
prediction_dict['score'] = score.cpu().detach().numpy().tolist()
jsonline = json.dumps(prediction_dict)
logger.info("jsonline: {}".format(jsonline))
predicted_classes.append(jsonline)
predicted_classes_jsonlines = "\n".join(predicted_classes)
return predicted_classes_jsonlines
def output_fn(outputs, accept="application/jsonlines"):
return outputs, accept
```
### test.py
```python
>>> from inference_nsmc import model_fn, input_fn, predict_fn, output_fn
>>> with open('samples/nsmc.txt', mode='rb') as file:
>>> model_input_data = file.read()
>>> model = model_fn()
>>> transformed_inputs = input_fn(model_input_data)
>>> predicted_classes_jsonlines = predict_fn(transformed_inputs, model)
>>> model_outputs = output_fn(predicted_classes_jsonlines)
>>> print(model_outputs[0])
[{inference_nsmc.py:47} INFO - input text: 이 영화는 최고의 영화입니다
[{inference_nsmc.py:47} INFO - input text: 최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다
[{inference_nsmc.py:77} INFO - predicted_class: Pos
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Pos", "score": 0.9619030952453613}
[{inference_nsmc.py:77} INFO - predicted_class: Neg
[{inference_nsmc.py:84} INFO - jsonline: {"predicted_label": "Neg", "score": 0.9994170665740967}
{"predicted_label": "Pos", "score": 0.9619030952453613}
{"predicted_label": "Neg", "score": 0.9994170665740967}
```
### Sample data (samples/nsmc.txt)
```
{"text": ["이 영화는 최고의 영화입니다"]}
{"text": ["최악이에요. 배우의 연기력도 좋지 않고 내용도 너무 허접합니다"]}
```
## References
- KoELECTRA: https://github.com/monologg/KoELECTRA
- Naver Sentiment Movie Corpus Dataset: https://github.com/e9t/nsmc
|
a3ae86628b7c67f47fa1eaac4f8ba1c1
|
facebook/regnet-y-320-seer-in1k
|
facebook
|
regnet
| 6 | 9 |
transformers
| 0 |
image-classification
| true | true | false |
apache-2.0
| null |
['imagenet-1k']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['vision', 'image-classification']
| false | true | true | 1,911 | false |
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision](https://arxiv.org/abs/2202.08360) and first released in [this repository](https://github.com/facebookresearch/vissl/tree/main/projects/SEER).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors trained [RegNet](https://huggingface.co/?models=regnet) models in a self-supervised fashion on a billion uncurated Instagram images. This model was later fine-tuned on ImageNet.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320-seer-in1k")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320-seer-in1k")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
7677d25c81a13f82a1316bc9715cc037
|
tsmatz/roberta_qa_japanese
|
tsmatz
|
roberta
| 10 | 296 |
transformers
| 1 |
question-answering
| true | false | false |
mit
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering', 'generated_from_trainer', 'bert', 'jaquad']
| true | true | true | 4,054 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_qa_japanese
(Japanese caption : 日本語の (抽出型) 質問応答のモデル)
This model is a fine-tuned version of [rinna/japanese-roberta-base](https://huggingface.co/rinna/japanese-roberta-base) (pre-trained RoBERTa model provided by rinna Co., Ltd.) trained for extractive question answering.
The model is fine-tuned on the [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD) dataset provided by Skelter Labs, in which the data was collected from Japanese Wikipedia articles and annotated by human annotators.
## Intended uses
When running with a dedicated pipeline :
```python
from transformers import pipeline
model_name = "tsmatz/roberta_qa_japanese"
qa_pipeline = pipeline(
"question-answering",
model=model_name,
tokenizer=model_name)
result = qa_pipeline(
question = "決勝トーナメントで日本に勝ったのはどこでしたか。",
context = "日本は予選リーグで強豪のドイツとスペインに勝って決勝トーナメントに進んだが、クロアチアと対戦して敗れた。",
align_to_words = False,
)
print(result)
```
When manually running through forward pass :
```python
import torch
import numpy as np
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model_name = "tsmatz/roberta_qa_japanese"
model = (AutoModelForQuestionAnswering
.from_pretrained(model_name))
tokenizer = AutoTokenizer.from_pretrained(model_name)
def inference_answer(question, context):
question = question
context = context
test_feature = tokenizer(
question,
context,
max_length=318,
)
with torch.no_grad():
outputs = model(torch.tensor([test_feature["input_ids"]]))
start_logits = outputs.start_logits.cpu().numpy()
end_logits = outputs.end_logits.cpu().numpy()
answer_ids = test_feature["input_ids"][np.argmax(start_logits):np.argmax(end_logits)+1]
return "".join(tokenizer.batch_decode(answer_ids))
question = "決勝トーナメントで日本に勝ったのはどこでしたか。"
context = "日本は予選リーグで強豪のドイツとスペインに勝って決勝トーナメントに進んだが、クロアチアと対戦して敗れた。"
answer_pred = inference_answer(question, context)
print(answer_pred)
```
## Training procedure
You can download the source code for fine-tuning from [here](https://github.com/tsmatz/huggingface-finetune-japanese/blob/master/03-question-answering.ipynb).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1293 | 0.13 | 150 | 1.0311 |
| 1.1965 | 0.26 | 300 | 0.6723 |
| 1.022 | 0.39 | 450 | 0.4838 |
| 0.9594 | 0.53 | 600 | 0.5174 |
| 0.9187 | 0.66 | 750 | 0.4671 |
| 0.8229 | 0.79 | 900 | 0.4650 |
| 0.71 | 0.92 | 1050 | 0.2648 |
| 0.5436 | 1.05 | 1200 | 0.2665 |
| 0.5045 | 1.19 | 1350 | 0.2686 |
| 0.5025 | 1.32 | 1500 | 0.2082 |
| 0.5213 | 1.45 | 1650 | 0.1715 |
| 0.4648 | 1.58 | 1800 | 0.1563 |
| 0.4698 | 1.71 | 1950 | 0.1488 |
| 0.4823 | 1.84 | 2100 | 0.1050 |
| 0.4482 | 1.97 | 2250 | 0.0821 |
| 0.2755 | 2.11 | 2400 | 0.0898 |
| 0.2834 | 2.24 | 2550 | 0.0964 |
| 0.2525 | 2.37 | 2700 | 0.0533 |
| 0.2606 | 2.5 | 2850 | 0.0561 |
| 0.2467 | 2.63 | 3000 | 0.0601 |
| 0.2799 | 2.77 | 3150 | 0.0562 |
| 0.2497 | 2.9 | 3300 | 0.0516 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
c733f7d509525c45fcbd1a152dc68e6f
|
krishnayogik/distilbert-base-uncased-finetuned-emotion
|
krishnayogik
|
distilbert
| 12 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8359 | 1.0 | 250 | 0.3316 | 0.901 | 0.8967 |
| 0.2584 | 2.0 | 500 | 0.2258 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
e4db0a508c5f7ee6c7b3c0f6a561a095
|
furyhawk/t5-base-finetuned-bbc
|
furyhawk
|
t5
| 20 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,203 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 334 | 0.1500 | 24.5024 | 21.4979 | 24.0227 | 24.0303 | 19.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
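No usage snippet is included in the card; a minimal sketch (not from the original card) via the `text2text-generation` pipeline, assuming the model was trained for BBC-news summarization (as the ROUGE metrics suggest); the `summarize:` prefix follows the usual T5 convention and the article text is invented:
```python
from transformers import pipeline

# Hypothetical article; "summarize:" is the standard T5 task prefix.
generator = pipeline("text2text-generation", model="furyhawk/t5-base-finetuned-bbc")
article = ("The company reported record quarterly profits on Tuesday, driven by strong "
           "overseas sales, and announced plans to hire 2,000 new staff next year.")
print(generator("summarize: " + article, max_length=32)[0]["generated_text"])
```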
|
d8ec256b0ccb9a199dd6e2fa87c8367f
|
thu-coai/CDial-GPT2_LCCC-base
|
thu-coai
| null | 5 | 82 |
transformers
| 1 |
conversational
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational']
| false | true | true | 1,082 | false |
## Chinese pre-trained dialogue model (CDial-GPT)
This project provides a large-scale Chinese GPT model pre-trained on the dataset [LCCC](https://huggingface.co/datasets/silver/lccc).
We present a series of Chinese GPT models that are first pre-trained on a Chinese novel dataset and then post-trained on our LCCC dataset.
Similar to [TransferTransfo](https://arxiv.org/abs/1901.08149), we concatenate all dialogue histories into one context sentence, and use this sentence to predict the response. The input of our model consists of word embedding, speaker embedding, and positional embedding of each word.
Paper: [A Large-Scale Chinese Short-Text Conversation Dataset](https://arxiv.org/pdf/2008.03946.pdf)
### How to use
```python
from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model = GPT2LMHeadModel.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
```
For more details, please refer to our [repo](https://github.com/thu-coai/CDial-GPT) on GitHub.
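As a rough illustration of the usage pattern described above (concatenating the dialogue history and predicting the response), here is a minimal generation sketch. It omits the speaker embeddings used during pre-training, so treat it as an assumption-laden approximation; the exact input construction lives in the project repo.
```python
import torch
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model = GPT2LMHeadModel.from_pretrained("thu-coai/CDial-GPT2_LCCC-base")
model.eval()

# Concatenate the dialogue history into one context sentence (simplified:
# no speaker embeddings here, unlike the original training setup).
history = ["你好", "你好,很高兴认识你"]
input_ids = tokenizer.encode("".join(history), return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[-1] + 30,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.pad_token_id,
    )

# Decode only the newly generated tokens as the response.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```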
|
497d441bff43f6601f10c667f2d93073
|
PriaPillai/distilbert-base-uncased-finetuned-query
|
PriaPillai
|
distilbert
| 30 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,410 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-query
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3668
- Accuracy: 0.8936
- F1: 0.8924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6511 | 1.0 | 30 | 0.5878 | 0.7234 | 0.6985 |
| 0.499 | 2.0 | 60 | 0.4520 | 0.8723 | 0.8683 |
| 0.3169 | 3.0 | 90 | 0.3668 | 0.8936 | 0.8924 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
e68d338fb7c05ec4de38e213094749df
|
HYM/test_ner-finetuned-ner
|
HYM
|
distilbert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,540 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_ner-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9242
- Recall: 0.9349
- F1: 0.9295
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2385 | 1.0 | 878 | 0.0708 | 0.9140 | 0.9216 | 0.9178 | 0.9808 |
| 0.055 | 2.0 | 1756 | 0.0626 | 0.9209 | 0.9340 | 0.9274 | 0.9828 |
| 0.0309 | 3.0 | 2634 | 0.0623 | 0.9242 | 0.9349 | 0.9295 | 0.9834 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bbf8bd081460175bb5d21c52c15b6253
|
stanfordnlp/stanza-hyw
|
stanfordnlp
| null | 9 | 1 |
stanza
| 0 |
token-classification
| false | false | false |
apache-2.0
|
['hyw']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stanza', 'token-classification']
| false | true | true | 590 | false |
# Stanza model for Western_Armenian (hyw)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
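As a quick illustration, the standard Stanza Python API can load this Western Armenian package directly; the example sentence below is arbitrary, and the processors actually run depend on what is packaged for `hyw`.
```python
# Minimal sketch with the standard Stanza API: download the hyw models and
# run the default pipeline on a short example sentence.
import stanza

stanza.download("hyw")
nlp = stanza.Pipeline(lang="hyw")
doc = nlp("Բարեւ աշխարհ։")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)
```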
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-09-25 01:32:11.573
|
2d8d946d462cd9a9f1443ff3cb6880e5
|
pyf98/tedlium2_transducer_e_branchformer
|
pyf98
| null | 21 | 0 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['tedlium2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 11,168 | false |
## ESPnet2 ASR model
### `pyf98/tedlium2_transducer_e_branchformer`
This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/).
References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 478ba004e114e7862b05fb01112de7f7e1da3996
pip install -e .
cd egs2/tedlium2/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_transducer_e_branchformer
```
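For inference from Python, here is a hedged sketch using the `espnet2` inference helper; the decoding options and the 16 kHz mono input are assumptions, not the exact settings behind the reported results.
```python
# Hedged sketch: load this checkpoint via the espnet2 ASR inference helper and
# decode a 16 kHz mono waveform read with soundfile.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "pyf98/tedlium2_transducer_e_branchformer",
    beam_size=10,
)

speech, rate = sf.read("sample.wav")  # assumed to be 16 kHz, single channel
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```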
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Feb 9 01:29:33 CST 2023`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.13.1`
- Git hash: `478ba004e114e7862b05fb01112de7f7e1da3996`
- Commit date: `Tue Feb 7 00:50:49 2023 +0000`
## asr_train_asr_transducer_e_branchformer_e12_raw_en_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transducer_asr_model_valid.loss.ave/dev|466|14671|93.4|4.3|2.3|1.0|7.6|71.7|
|decode_asr_transducer_asr_model_valid.loss.ave/test|1155|27500|93.6|4.0|2.4|1.0|7.4|63.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transducer_asr_model_valid.loss.ave/dev|466|78259|97.1|0.9|2.0|0.9|3.8|71.7|
|decode_asr_transducer_asr_model_valid.loss.ave/test|1155|145066|97.1|0.9|2.1|0.9|3.9|63.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transducer_asr_model_valid.loss.ave/dev|466|28296|94.7|3.1|2.3|0.8|6.2|71.7|
|decode_asr_transducer_asr_model_valid.loss.ave/test|1155|52113|95.1|2.6|2.2|0.9|5.8|63.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transducer_e_branchformer_e12.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transducer_e_branchformer_e12_raw_en_bpe500_sp
ngpu: 1
seed: 2022
num_workers: 6
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45753
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 5
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁the
- t
- ▁a
- ▁and
- ▁to
- d
- e
- ▁of
- ''''
- n
- ing
- ▁in
- ▁i
- ▁that
- i
- a
- l
- p
- m
- y
- o
- ▁it
- ▁we
- c
- u
- ▁you
- ed
- ▁
- r
- ▁is
- re
- ▁this
- ar
- g
- ▁so
- al
- b
- ▁s
- or
- ▁f
- ▁c
- in
- k
- f
- ▁for
- ic
- er
- le
- ▁be
- ▁do
- ▁re
- ve
- ▁e
- ▁w
- ▁was
- es
- ▁they
- ly
- h
- ▁on
- v
- ▁are
- ri
- ▁have
- an
- ▁what
- ▁with
- ▁t
- w
- ur
- it
- ent
- ▁can
- ▁he
- ▁but
- ra
- ce
- ▁me
- ▁b
- ▁ma
- ▁p
- ll
- ▁st
- ▁one
- 'on'
- ▁about
- th
- ▁de
- en
- ▁all
- ▁not
- il
- ▁g
- ch
- at
- ▁there
- ▁mo
- ter
- ation
- tion
- ▁at
- ▁my
- ro
- ▁as
- te
- ▁le
- ▁con
- ▁like
- ▁people
- ▁or
- ▁an
- el
- ▁if
- ▁from
- ver
- ▁su
- ▁co
- ate
- ▁these
- ol
- ci
- ▁now
- ▁see
- ▁out
- ▁our
- ion
- ▁know
- ect
- ▁just
- as
- ▁ex
- ▁ch
- ▁d
- ▁when
- ▁very
- ▁think
- ▁who
- ▁because
- ▁go
- ▁up
- ▁us
- ▁pa
- ▁no
- ies
- ▁di
- ▁ho
- om
- ive
- ▁get
- id
- ▁o
- ▁hi
- un
- ▁how
- ▁by
- ir
- et
- ck
- ity
- ▁po
- ul
- ▁which
- ▁mi
- ▁some
- z
- ▁sp
- ▁un
- ▁going
- ▁pro
- ist
- ▁se
- ▁look
- ▁time
- ment
- de
- ▁more
- ▁had
- ng
- ▁would
- ge
- la
- ▁here
- ▁really
- x
- ▁your
- ▁them
- us
- me
- ▁en
- ▁two
- ▁k
- ▁li
- ▁world
- ne
- ow
- ▁way
- ▁want
- ▁work
- ▁don
- ▁lo
- ▁fa
- ▁were
- ▁their
- age
- vi
- ▁ha
- ac
- der
- est
- ▁bo
- am
- ▁other
- able
- ▁actually
- ▁sh
- ▁make
- ▁ba
- ▁la
- ine
- ▁into
- ▁where
- ▁could
- ▁comp
- ting
- ▁has
- ▁will
- ▁ne
- j
- ical
- ally
- ▁vi
- ▁things
- ▁te
- igh
- ▁say
- ▁years
- ers
- ▁ra
- ther
- ▁than
- ru
- ▁ro
- op
- ▁did
- ▁any
- ▁new
- ound
- ig
- ▁well
- mo
- ▁she
- ▁na
- ▁been
- he
- ▁thousand
- ▁car
- ▁take
- ▁right
- ▁then
- ▁need
- ▁start
- ▁hundred
- ▁something
- ▁over
- ▁com
- ia
- ▁kind
- um
- if
- ▁those
- ▁first
- ▁pre
- ta
- ▁said
- ize
- end
- ▁even
- ▁thing
- one
- ▁back
- ite
- ▁every
- ▁little
- ry
- ▁life
- ▁much
- ke
- ▁also
- ▁most
- ant
- per
- ▁three
- ▁come
- ▁lot
- ance
- ▁got
- ▁talk
- ▁per
- ▁inter
- ▁sa
- ▁use
- ▁mu
- ▁part
- ish
- ence
- ▁happen
- ▁bi
- ▁mean
- ough
- ▁qu
- ▁bu
- ▁day
- ▁ga
- ▁only
- ▁many
- ▁different
- ▁dr
- ▁th
- ▁show
- ful
- ▁down
- ated
- ▁good
- ▁tra
- ▁around
- ▁idea
- ▁human
- ous
- ▁put
- ▁through
- ▁five
- ▁why
- ▁change
- ▁real
- ff
- ible
- ▁fact
- ▁same
- ▁jo
- ▁live
- ▁year
- ▁problem
- ▁ph
- ▁four
- ▁give
- ▁big
- ▁tell
- ▁great
- ▁try
- ▁va
- ▁ru
- ▁system
- ▁six
- ▁plan
- ▁place
- ▁build
- ▁called
- ▁again
- ▁point
- ▁twenty
- ▁percent
- ▁nine
- ▁find
- ▁app
- ▁after
- ▁long
- ▁eight
- ▁imp
- ▁gene
- ▁design
- ▁today
- ▁should
- ▁made
- ious
- ▁came
- ▁learn
- ▁last
- ▁own
- way
- ▁turn
- ▁seven
- ▁high
- ▁question
- ▁person
- ▁brain
- ▁important
- ▁another
- ▁thought
- ▁trans
- ▁create
- ness
- ▁hu
- ▁power
- ▁act
- land
- ▁play
- ▁sort
- ▁old
- ▁before
- ▁course
- ▁understand
- ▁feel
- ▁might
- ▁each
- ▁million
- ▁better
- ▁together
- ▁ago
- ▁example
- ▁help
- ▁story
- ▁next
- ▁hand
- ▁school
- ▁water
- ▁develop
- ▁technology
- que
- ▁second
- ▁grow
- ▁still
- ▁cell
- ▁believe
- ▁number
- ▁small
- ▁between
- qui
- ▁data
- ▁become
- ▁america
- ▁maybe
- ▁space
- ▁project
- ▁organ
- ▁vo
- ▁children
- ▁book
- graph
- ▁open
- ▁fifty
- ▁picture
- ▁health
- ▁thirty
- ▁africa
- ▁reason
- ▁large
- ▁hard
- ▁computer
- ▁always
- ▁sense
- ▁money
- ▁women
- ▁everything
- ▁information
- ▁country
- ▁teach
- ▁energy
- ▁experience
- ▁food
- ▁process
- qua
- ▁interesting
- ▁future
- ▁science
- q
- '0'
- '5'
- '6'
- '9'
- '3'
- '8'
- '4'
- N
- A
- '7'
- S
- G
- F
- R
- L
- U
- E
- T
- H
- _
- B
- D
- J
- M
- ă
- ō
- ť
- '2'
- '-'
- '1'
- C
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf:
joint_space_size: 320
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
report_cer: false
report_wer: false
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transducer
decoder_conf:
rnn_type: lstm
num_layers: 1
hidden_size: 256
dropout: 0.1
dropout_embed: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
a55efdd3e53469de7ae0654fdea3978c
|
sayakpaul/glpn-nyu-finetuned-diode-230119-100058
|
sayakpaul
|
glpn
| 7 | 0 |
transformers
| 0 |
depth-estimation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'depth-estimation', 'generated_from_trainer']
| true | true | true | 11,010 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-230119-100058
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4305
- Mae: 0.4203
- Rmse: 0.6123
- Abs Rel: 0.4280
- Log Mae: 0.1694
- Log Rmse: 0.2214
- Delta1: 0.3813
- Delta2: 0.6446
- Delta3: 0.8152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 75
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 1.2807 | 1.0 | 72 | 0.9866 | 0.8312 | 1.0131 | 0.7179 | 0.5655 | 0.5924 | 0.0087 | 0.0200 | 0.0552 |
| 0.7396 | 2.0 | 144 | 0.4976 | 0.4741 | 0.6670 | 0.5279 | 0.1989 | 0.2567 | 0.3070 | 0.5470 | 0.7943 |
| 0.5018 | 3.0 | 216 | 0.4811 | 0.4630 | 0.6367 | 0.5198 | 0.1929 | 0.2446 | 0.3211 | 0.5440 | 0.7506 |
| 0.482 | 4.0 | 288 | 0.4726 | 0.4556 | 0.6337 | 0.4951 | 0.1893 | 0.2410 | 0.3306 | 0.5636 | 0.7663 |
| 0.4874 | 5.0 | 360 | 0.4813 | 0.4662 | 0.6355 | 0.5265 | 0.1941 | 0.2446 | 0.3179 | 0.5385 | 0.7278 |
| 0.4648 | 6.0 | 432 | 0.4681 | 0.4512 | 0.6309 | 0.4783 | 0.1869 | 0.2383 | 0.3430 | 0.5757 | 0.7527 |
| 0.4346 | 7.0 | 504 | 0.4637 | 0.4499 | 0.6292 | 0.4710 | 0.1859 | 0.2357 | 0.3453 | 0.5671 | 0.7644 |
| 0.4018 | 8.0 | 576 | 0.4790 | 0.4638 | 0.6349 | 0.5161 | 0.1928 | 0.2436 | 0.3255 | 0.5408 | 0.7338 |
| 0.4092 | 9.0 | 648 | 0.4559 | 0.4449 | 0.6267 | 0.4540 | 0.1827 | 0.2319 | 0.3541 | 0.5814 | 0.7692 |
| 0.3891 | 10.0 | 720 | 0.4619 | 0.4433 | 0.6259 | 0.4748 | 0.1823 | 0.2351 | 0.3579 | 0.5870 | 0.7742 |
| 0.3707 | 11.0 | 792 | 0.4624 | 0.4500 | 0.6269 | 0.4828 | 0.1851 | 0.2350 | 0.3421 | 0.5672 | 0.7638 |
| 0.4129 | 12.0 | 864 | 0.4648 | 0.4468 | 0.6265 | 0.4836 | 0.1836 | 0.2358 | 0.3533 | 0.5786 | 0.7625 |
| 0.4108 | 13.0 | 936 | 0.4474 | 0.4312 | 0.6187 | 0.4501 | 0.1752 | 0.2280 | 0.3801 | 0.6088 | 0.7887 |
| 0.3948 | 14.0 | 1008 | 0.4619 | 0.4498 | 0.6263 | 0.4853 | 0.1844 | 0.2344 | 0.3401 | 0.5721 | 0.7645 |
| 0.4009 | 15.0 | 1080 | 0.4619 | 0.4440 | 0.6244 | 0.4889 | 0.1820 | 0.2351 | 0.3563 | 0.5841 | 0.7751 |
| 0.3657 | 16.0 | 1152 | 0.4636 | 0.4491 | 0.6260 | 0.4936 | 0.1846 | 0.2360 | 0.3422 | 0.5734 | 0.7644 |
| 0.3605 | 17.0 | 1224 | 0.4353 | 0.4255 | 0.6153 | 0.4248 | 0.1715 | 0.2218 | 0.3844 | 0.6207 | 0.8008 |
| 0.3937 | 18.0 | 1296 | 0.4756 | 0.4609 | 0.6310 | 0.5281 | 0.1909 | 0.2423 | 0.3220 | 0.5461 | 0.7538 |
| 0.3453 | 19.0 | 1368 | 0.4698 | 0.4517 | 0.6270 | 0.5145 | 0.1863 | 0.2392 | 0.3360 | 0.5702 | 0.7689 |
| 0.3883 | 20.0 | 1440 | 0.4349 | 0.4240 | 0.6145 | 0.4311 | 0.1712 | 0.2230 | 0.3841 | 0.6321 | 0.8030 |
| 0.3482 | 21.0 | 1512 | 0.4339 | 0.4209 | 0.6146 | 0.4223 | 0.1694 | 0.2223 | 0.3967 | 0.6337 | 0.8036 |
| 0.3374 | 22.0 | 1584 | 0.4400 | 0.4289 | 0.6167 | 0.4431 | 0.1737 | 0.2254 | 0.3743 | 0.6191 | 0.7971 |
| 0.3516 | 23.0 | 1656 | 0.4395 | 0.4280 | 0.6171 | 0.4426 | 0.1737 | 0.2259 | 0.3710 | 0.6241 | 0.7998 |
| 0.3901 | 24.0 | 1728 | 0.4444 | 0.4324 | 0.6184 | 0.4562 | 0.1758 | 0.2280 | 0.3665 | 0.6118 | 0.7991 |
| 0.3587 | 25.0 | 1800 | 0.4326 | 0.4200 | 0.6129 | 0.4281 | 0.1690 | 0.2222 | 0.3920 | 0.6403 | 0.8073 |
| 0.3425 | 26.0 | 1872 | 0.4371 | 0.4231 | 0.6152 | 0.4341 | 0.1709 | 0.2242 | 0.3852 | 0.6372 | 0.7974 |
| 0.3252 | 27.0 | 1944 | 0.4381 | 0.4225 | 0.6140 | 0.4399 | 0.1705 | 0.2245 | 0.3851 | 0.6396 | 0.8065 |
| 0.3586 | 28.0 | 2016 | 0.4441 | 0.4304 | 0.6162 | 0.4488 | 0.1746 | 0.2258 | 0.3674 | 0.6179 | 0.7929 |
| 0.3389 | 29.0 | 2088 | 0.4240 | 0.4112 | 0.6100 | 0.4017 | 0.1640 | 0.2173 | 0.4152 | 0.6599 | 0.8128 |
| 0.3418 | 30.0 | 2160 | 0.4312 | 0.4195 | 0.6126 | 0.4211 | 0.1687 | 0.2206 | 0.3899 | 0.6435 | 0.8123 |
| 0.3454 | 31.0 | 2232 | 0.4301 | 0.4176 | 0.6126 | 0.4167 | 0.1674 | 0.2203 | 0.3974 | 0.6479 | 0.8089 |
| 0.3499 | 32.0 | 2304 | 0.4262 | 0.4154 | 0.6115 | 0.4081 | 0.1661 | 0.2184 | 0.3997 | 0.6578 | 0.8083 |
| 0.3649 | 33.0 | 2376 | 0.4429 | 0.4313 | 0.6171 | 0.4507 | 0.1753 | 0.2263 | 0.3641 | 0.6134 | 0.7982 |
| 0.3341 | 34.0 | 2448 | 0.4292 | 0.4207 | 0.6127 | 0.4161 | 0.1689 | 0.2192 | 0.3874 | 0.6415 | 0.8007 |
| 0.3323 | 35.0 | 2520 | 0.4402 | 0.4266 | 0.6148 | 0.4434 | 0.1728 | 0.2247 | 0.3754 | 0.6254 | 0.7983 |
| 0.3374 | 36.0 | 2592 | 0.4336 | 0.4233 | 0.6139 | 0.4277 | 0.1706 | 0.2219 | 0.3810 | 0.6362 | 0.8008 |
| 0.334 | 37.0 | 2664 | 0.4310 | 0.4230 | 0.6138 | 0.4240 | 0.1703 | 0.2209 | 0.3826 | 0.6345 | 0.8034 |
| 0.3471 | 38.0 | 2736 | 0.4372 | 0.4250 | 0.6144 | 0.4397 | 0.1720 | 0.2240 | 0.3780 | 0.6303 | 0.8046 |
| 0.3283 | 39.0 | 2808 | 0.4421 | 0.4301 | 0.6168 | 0.4497 | 0.1743 | 0.2259 | 0.3654 | 0.6209 | 0.7993 |
| 0.3418 | 40.0 | 2880 | 0.4340 | 0.4224 | 0.6137 | 0.4334 | 0.1703 | 0.2228 | 0.3857 | 0.6351 | 0.8054 |
| 0.3455 | 41.0 | 2952 | 0.4294 | 0.4174 | 0.6118 | 0.4212 | 0.1675 | 0.2203 | 0.3959 | 0.6469 | 0.8109 |
| 0.3229 | 42.0 | 3024 | 0.4291 | 0.4165 | 0.6121 | 0.4199 | 0.1671 | 0.2207 | 0.4035 | 0.6464 | 0.8103 |
| 0.352 | 43.0 | 3096 | 0.4393 | 0.4266 | 0.6154 | 0.4462 | 0.1729 | 0.2253 | 0.3744 | 0.6287 | 0.8049 |
| 0.3163 | 44.0 | 3168 | 0.4250 | 0.4113 | 0.6098 | 0.4112 | 0.1647 | 0.2187 | 0.4041 | 0.6620 | 0.8201 |
| 0.3284 | 45.0 | 3240 | 0.4358 | 0.4245 | 0.6138 | 0.4379 | 0.1716 | 0.2233 | 0.3745 | 0.6306 | 0.8106 |
| 0.3359 | 46.0 | 3312 | 0.4321 | 0.4217 | 0.6124 | 0.4283 | 0.1699 | 0.2210 | 0.3770 | 0.6412 | 0.8129 |
| 0.3406 | 47.0 | 3384 | 0.4238 | 0.4127 | 0.6104 | 0.4084 | 0.1653 | 0.2183 | 0.3982 | 0.6617 | 0.8177 |
| 0.3207 | 48.0 | 3456 | 0.4375 | 0.4275 | 0.6147 | 0.4435 | 0.1733 | 0.2243 | 0.3658 | 0.6262 | 0.8071 |
| 0.3338 | 49.0 | 3528 | 0.4331 | 0.4223 | 0.6142 | 0.4310 | 0.1705 | 0.2228 | 0.3846 | 0.6374 | 0.8071 |
| 0.3203 | 50.0 | 3600 | 0.4308 | 0.4212 | 0.6136 | 0.4253 | 0.1695 | 0.2213 | 0.3878 | 0.6407 | 0.8054 |
| 0.3238 | 51.0 | 3672 | 0.4379 | 0.4267 | 0.6148 | 0.4416 | 0.1727 | 0.2241 | 0.3723 | 0.6244 | 0.8036 |
| 0.3209 | 52.0 | 3744 | 0.4289 | 0.4187 | 0.6121 | 0.4178 | 0.1681 | 0.2198 | 0.3920 | 0.6461 | 0.8096 |
| 0.3198 | 53.0 | 3816 | 0.4376 | 0.4264 | 0.6145 | 0.4402 | 0.1724 | 0.2237 | 0.3708 | 0.6279 | 0.8066 |
| 0.3137 | 54.0 | 3888 | 0.4294 | 0.4180 | 0.6115 | 0.4242 | 0.1681 | 0.2208 | 0.3888 | 0.6494 | 0.8152 |
| 0.3238 | 55.0 | 3960 | 0.4416 | 0.4294 | 0.6158 | 0.4521 | 0.1743 | 0.2261 | 0.3645 | 0.6205 | 0.8069 |
| 0.3173 | 56.0 | 4032 | 0.4257 | 0.4142 | 0.6116 | 0.4145 | 0.1661 | 0.2198 | 0.4016 | 0.6586 | 0.8136 |
| 0.3173 | 57.0 | 4104 | 0.4303 | 0.4193 | 0.6123 | 0.4246 | 0.1687 | 0.2210 | 0.3879 | 0.6451 | 0.8118 |
| 0.3297 | 58.0 | 4176 | 0.4302 | 0.4219 | 0.6132 | 0.4259 | 0.1700 | 0.2211 | 0.3792 | 0.6394 | 0.8122 |
| 0.3261 | 59.0 | 4248 | 0.4319 | 0.4220 | 0.6131 | 0.4312 | 0.1702 | 0.2221 | 0.3781 | 0.6407 | 0.8142 |
| 0.3082 | 60.0 | 4320 | 0.4340 | 0.4234 | 0.6136 | 0.4346 | 0.1710 | 0.2228 | 0.3754 | 0.6373 | 0.8106 |
| 0.31 | 61.0 | 4392 | 0.4225 | 0.4120 | 0.6104 | 0.4073 | 0.1646 | 0.2181 | 0.4054 | 0.6626 | 0.8168 |
| 0.3065 | 62.0 | 4464 | 0.4313 | 0.4197 | 0.6125 | 0.4280 | 0.1690 | 0.2216 | 0.3854 | 0.6472 | 0.8127 |
| 0.3046 | 63.0 | 4536 | 0.4316 | 0.4202 | 0.6127 | 0.4268 | 0.1691 | 0.2213 | 0.3849 | 0.6448 | 0.8131 |
| 0.303 | 64.0 | 4608 | 0.4352 | 0.4241 | 0.6137 | 0.4373 | 0.1712 | 0.2231 | 0.3760 | 0.6364 | 0.8097 |
| 0.3094 | 65.0 | 4680 | 0.4318 | 0.4205 | 0.6128 | 0.4304 | 0.1695 | 0.2220 | 0.3828 | 0.6438 | 0.8140 |
| 0.3035 | 66.0 | 4752 | 0.4351 | 0.4233 | 0.6136 | 0.4386 | 0.1709 | 0.2235 | 0.3781 | 0.6388 | 0.8099 |
| 0.327 | 67.0 | 4824 | 0.4307 | 0.4203 | 0.6131 | 0.4280 | 0.1693 | 0.2216 | 0.3828 | 0.6463 | 0.8143 |
| 0.3175 | 68.0 | 4896 | 0.4325 | 0.4219 | 0.6137 | 0.4314 | 0.1701 | 0.2222 | 0.3809 | 0.6406 | 0.8135 |
| 0.3188 | 69.0 | 4968 | 0.4299 | 0.4203 | 0.6126 | 0.4271 | 0.1694 | 0.2214 | 0.3827 | 0.6440 | 0.8141 |
| 0.3158 | 70.0 | 5040 | 0.4304 | 0.4203 | 0.6126 | 0.4274 | 0.1694 | 0.2215 | 0.3832 | 0.6443 | 0.8133 |
| 0.3298 | 71.0 | 5112 | 0.4315 | 0.4219 | 0.6135 | 0.4292 | 0.1700 | 0.2218 | 0.3792 | 0.6423 | 0.8136 |
| 0.3246 | 72.0 | 5184 | 0.4323 | 0.4219 | 0.6129 | 0.4322 | 0.1703 | 0.2223 | 0.3769 | 0.6418 | 0.8133 |
| 0.3116 | 73.0 | 5256 | 0.4301 | 0.4198 | 0.6124 | 0.4264 | 0.1691 | 0.2213 | 0.3833 | 0.6459 | 0.8141 |
| 0.3192 | 74.0 | 5328 | 0.4301 | 0.4200 | 0.6125 | 0.4266 | 0.1691 | 0.2213 | 0.3819 | 0.6464 | 0.8156 |
| 0.3172 | 75.0 | 5400 | 0.4305 | 0.4203 | 0.6123 | 0.4280 | 0.1694 | 0.2214 | 0.3813 | 0.6446 | 0.8152 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
9af60b4520910c053b8e850e9c8e2682
|
sd-concepts-library/dragonborn
|
sd-concepts-library
| null | 12 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,354 | false |
### Dragonborn on Stable Diffusion
This is the `<dragonborn>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
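Alternatively, with a recent `diffusers` version you can load the embedding directly; the base checkpoint below is an assumption, so pick the SD 1.x base the embedding was trained against.
```python
# Hedged sketch: load the <dragonborn> textual-inversion embedding into a
# Stable Diffusion pipeline and use the token in a prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/dragonborn")

image = pipe("a portrait of a <dragonborn> warrior, digital art").images[0]
image.save("dragonborn.png")
```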
Here is the new concept you will be able to use as an `object`:







|
9891734d10d8866b4b7d10f1e302ff4b
|
sriAryan18/tf_bert_uncased_emotion_detection
|
sriAryan18
|
bert
| 4 | 5 |
transformers
| 1 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 2,202 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf_bert_uncased_emotion_detection
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0659
- Train Accuracy: 0.9661
- Validation Loss: 0.1150
- Validation Accuracy: 0.9370
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 6000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3703 | 0.8683 | 0.1511 | 0.9315 | 0 |
| 0.1208 | 0.9414 | 0.1145 | 0.9380 | 1 |
| 0.0820 | 0.9561 | 0.1150 | 0.9370 | 2 |
| 0.0656 | 0.9681 | 0.1150 | 0.9370 | 3 |
| 0.0643 | 0.9671 | 0.1150 | 0.9370 | 4 |
| 0.0652 | 0.9697 | 0.1150 | 0.9370 | 5 |
| 0.0646 | 0.9689 | 0.1150 | 0.9370 | 6 |
| 0.0651 | 0.9678 | 0.1150 | 0.9370 | 7 |
| 0.0651 | 0.9691 | 0.1150 | 0.9370 | 8 |
| 0.0659 | 0.9661 | 0.1150 | 0.9370 | 9 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
5d1544d2458333f194d60107ead1bd88
|
shirshakach/function-arg-swap-model-148k-files-365k-samples
|
shirshakach
|
distilbert
| 15 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,101 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# function-arg-swap-model-148k-files-365k-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4783
- Accuracy: 0.7679
- Precision: 0.7641
- Recall: 0.7812
- F1 score: 0.7725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
6f0b3c8a8d594586cc8adb9997830a13
|
spacy/xx_sent_ud_sm
|
spacy
| null | 17 | 79 |
spacy
| 0 | null | false | false | false |
cc-by-sa-3.0
|
['multilingual']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy']
| false | true | true | 1,509 | false |
### Details: https://spacy.io/models/xx#xx_sent_ud_sm
Multi-language pipeline optimized for CPU. Components: senter.
| Feature | Description |
| --- | --- |
| **Name** | `xx_sent_ud_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `senter` |
| **Components** | `senter` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.8 (UD_Afrikaans-AfriBooms, UD_Croatian-SET, UD_Czech-CAC, UD_Czech-CLTT, UD_Danish-DDT, UD_Dutch-Alpino, UD_Dutch-LassySmall, UD_English-EWT, UD_Finnish-FTB, UD_Finnish-TDT, UD_French-GSD, UD_French-Spoken, UD_German-GSD, UD_Indonesian-GSD, UD_Irish-IDT, UD_Italian-TWITTIRO, UD_Korean-GSD, UD_Korean-Kaist, UD_Latvian-LVTB, UD_Lithuanian-ALKSNIS, UD_Lithuanian-HSE, UD_Marathi-UFAL, UD_Norwegian-Bokmaal, UD_Norwegian-Nynorsk, UD_Norwegian-NynorskLIA, UD_Persian-Seraji, UD_Portuguese-Bosque, UD_Portuguese-GSD, UD_Romanian-Nonstandard, UD_Romanian-RRT, UD_Russian-GSD, UD_Russian-Taiga, UD_Serbian-SET, UD_Slovak-SNK, UD_Spanish-GSD, UD_Swedish-Talbanken, UD_Telugu-MTG, UD_Vietnamese-VTB)](https://universaldependencies.org/) (Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell; et al.) |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 98.59 |
| `TOKEN_P` | 95.31 |
| `TOKEN_R` | 95.72 |
| `TOKEN_F` | 95.52 |
| `SENTS_P` | 90.66 |
| `SENTS_R` | 81.58 |
| `SENTS_F` | 85.88 |
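A minimal usage sketch with the standard spaCy API (the example text is arbitrary):
```python
# Minimal sketch: sentence segmentation with the senter component.
# Install the pipeline first, e.g. `python -m spacy download xx_sent_ud_sm`.
import spacy

nlp = spacy.load("xx_sent_ud_sm")
doc = nlp("This is a sentence. Here is another one. Und hier noch ein Satz.")
for sent in doc.sents:
    print(sent.text)
```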
|
6bc789d787d98f103b24b997a4f7efc8
|
seonghyeonye/flipped_11B
|
seonghyeonye
|
t5
| 12 | 4 |
transformers
| 6 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['bigscience/P3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,693 | false |
**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning)
# Model Description
FLIPPED uses a unique meta-learning method to show zero-shot task generalization on classification natural language prompts, outperforming GPT-3 and T0-11B on many tasks with a 4x smaller scale.
It is a series of encoder-decoder models trained on numerous classification datasets. We show the input and its corresponding output of each instance in each dataset to FLIPPED, and train it to generate a plausible instruction. We add an unlikelihood loss so that the model does **not** generate the instruction when given the same input but a wrong output. To obtain FLIPPED, we fine-tune a T5 model at a given scale on a multitask mixture covering many different classification NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your input-output NLP query in an "input: {input}\noutput: {output}" form, and the model will predict the instruction. For example, you can try
*"input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\noutput: Positive"*
as an input, and the model will hopefully generate *"Title: Review:"*.
# How to use
A full explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](seonghyeonye/flipped_11B) checkpoint as it leads (on average) to the best performance on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion|
|[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion|
Here is how to download the model in PyTorch:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_11B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_11B")
```
If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`.
We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can inference with our method.
**Note: the model was trained with bfloat16 activations. As such, we highly discourage running inference with fp16.**
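Below is a minimal inference sketch following the "input: ...\noutput: ..." format described above; the decoding settings are assumptions, not the settings used in the paper.
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_11B")
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_11B")

# Build the query in the "input: {input}\noutput: {output}" form.
query = (
    "input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\n"
    "output: Positive"
)
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=32)

# The model should generate an instruction such as "Title: Review:".
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```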
# Training procedure
FLIPPED models are based on [T5](https://huggingface.co/google/t5-v1_1-xxl), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4).
At a high level, the input text along with the output label is fed to the encoder and the instruction text is produced by the decoder. The model is fine-tuned to autoregressively generate the target. We also feed the input text along with a wrong output, adding an unlikelihood loss so that the model does not produce the proper instruction in that case. Here are our training details.
Training details:
- Fine-tuning steps: 5'000
- Input sequence length: 384
- Target sequence length: 64
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 5e-5
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we randomly subsampled any dataset with over 500'000 examples so that it has at most 500'000 examples. Also, we randomly choose which instruction to generate at each training step, so ideally each instruction appears *num_examples/num_templates* times during training.)
# Training data
We trained different variants of FLIPPED with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|FLIPPED-11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED-11B|
We only choose prompt examples that have output labels, which can be found on the dataset page.
# Evaluation data
We evaluate our models on the following datasets:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI(R1, R2, R3), CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
|QA|PIQA, ARC-Challenge, OpenbookQA|
We also evaluate FLIPPED on a subset of [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Label generalization
We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969).
|Task category|(Datasets, Template name)|
|-|-|
|Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)|
|Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) |
The template name we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates).
# BibTeX entry and citation info
```bibtex
@article{ye2022guess,
title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners},
author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
journal={arXiv preprint arXiv:2210.02969},
year={2022}
}
```
|
47ee4b27bf1ce3b44bc936574449cd9e
|
shed-e/MLM
|
shed-e
|
distilbert
| 9 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6954 | 1.0 | 157 | 2.5243 |
| 2.563 | 2.0 | 314 | 2.4738 |
| 2.5258 | 3.0 | 471 | 2.4369 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
03cfccdefd4b5e4292e13a2924f726f5
|
McGill-NLP/bart-qg-nq-checkpoint
|
McGill-NLP
|
bart
| 7 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,687 | false |
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation** by treating long answer as input, and question as output.
## Details of BART
The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract:
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Tranformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
## Details of the downstream task (QG) - Dataset 📚 🧐
Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| NaturalQuestions | train | 97650 |
| NaturalQuestions | valid | 10850 |
## Model fine-tuning 🏋️
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py)
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, BartTokenizer

# Load the tokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')

# Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")
```
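Continuing from the loading snippet above, here is a hedged sketch of generating a question from a long-answer passage; the passage and beam-search settings are illustrative assumptions.
```python
# Generate a question from a passage (treated as the "long answer").
passage = (
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. "
    "It was constructed from 1887 to 1889 as the centrepiece of the 1889 World's Fair."
)
inputs = tokenizer(passage, return_tensors="pt", truncation=True, max_length=512)
question_ids = model.generate(**inputs, num_beams=4, max_length=32, early_stopping=True)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```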
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
6a46da7ac00d29ceafda71253ac10da2
|
madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1
|
madlag
|
bert
| 83 | 42 |
transformers
| 0 |
question-answering
| true | true | false |
mit
|
['en']
|
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering', 'bert', 'bert-base']
| false | true | true | 2,700 | false |
## BERT-base uncased model fine-tuned on SQuAD v1
This model is block sparse: the **linear** layers contains **7.5%** of the original weights.
The model contains **28.2%** of the original weights **overall**.
The training used a modified version of Victor Sanh's [Movement Pruning](https://arxiv.org/abs/2005.07683) method.
That means that with the [block-sparse](https://github.com/huggingface/pytorch_block_sparse) runtime it ran **1.92x** faster than a dense network during evaluation, at the price of some impact on accuracy (see below).
This model was fine-tuned from the HuggingFace [BERT](https://www.aclweb.org/anthology/N19-1423/) base uncased checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the equivalent model [csarron/bert-base-uncased-squad-v1](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
This model is case-insensitive: it does not make a difference between english and English.
## Pruning details
A side-effect of the block pruning is that some of the attention heads are completely removed: 106 heads were removed out of a total of 144 (73.6%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.

## Density plot
<script src="/madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1/raw/main/model_card/density.js" id="9301e950-59b1-497b-a2c5-25c24e07b3a0"></script>
## Details
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `335M` (original BERT: `438M`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))|
| ------ | --------- | --------- |
| **EM** | **71.88** | **80.8** |
| **F1** | **81.36** | **88.5** |
## Example Usage
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1",
tokenizer="madlag/bert-base-uncased-squad1.1-block-sparse-0.07-v1"
)
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print(predictions)
```
|
970ddfba19b9e4816816b9ae508344e6
|
cahya/wav2vec2-large-xlsr-turkish-artificial
|
cahya
|
wav2vec2
| 9 | 7 |
transformers
| 1 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['tr']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,445 | false |
# Wav2Vec2-Large-XLSR-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluating the model.
# We run inference on batches and decode the predicted ids into strings
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 66.98 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
c0c3fb6f38007e10e70dc5d24311729c
|
gagan3012/pickuplines
|
gagan3012
|
gpt2
| 27 | 4 |
transformers
| 1 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 966 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pickuplines
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
7ad9f8240f03974886f7bb5e98a6bbb2
|
romainlhardy/t5-small-booksum
|
romainlhardy
|
t5
| 10 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,210 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-booksum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3266 | 1.0 | 29228 | 3.1859 |
| 3.2947 | 2.0 | 58456 | 3.1700 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.12.1
|
36a66ee4c6cc5c67bc9d273d009bdb86
|
Joeythemonster/test
|
Joeythemonster
| null | 18 | 14 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 614 | false |
### test_ Dreambooth model trained by Joeythemonster with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
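For a quick local test, a hedged `diffusers` sketch (the prompt is an assumption; substitute the instance token used during training):
```python
# Hedged sketch: run this DreamBooth checkpoint locally with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Joeythemonster/test", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of test_ person").images[0]
image.save("sample.png")
```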
Sample pictures of this concept:
|
2d0b00971c7b0e45210e557d78e1ed48
|
sd-dreambooth-library/Origtron
|
sd-dreambooth-library
| null | 22 | 15 |
diffusers
| 4 |
text-to-image
| false | false | false |
mit
| null | null | null | 4 | 0 | 4 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 1,012 | false |
Model trained in the [Shivam Shrirao](https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb?authuser=2#scrollTo=jXgi8HM4c-DA) Google Colab DreamBooth notebook. It was made with various screen captures I took from videos of TRON (1982): the original trailer and two long movie clips.
Download the **origtron.ckpt** file to _stable-diffusion-webui\models\Stable-diffusion_. Once it's downloaded, just use the prompt **origtron** and you'll get some great results.
The file size is 2.3 GB.
### Images I created




|
65604e0754b9c07879b445e9ab559e7b
|
ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new
|
ajaiswal1008
|
wav2vec2
| 13 | 9 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,104 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
7f55875d58d921665ccdb20401ead24e
|
RichVip/Cute_RichStyle_1.5
|
RichVip
| null | 5 | 0 | null | 3 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BABY', 'BABIES', 'LITTLE', 'SD2.1', 'DIGITAL ART', 'CUTE', 'MIDJOURNEY', 'DOLLS', 'CHARACTER', 'CARTOON']
| false | true | true | 3,610 | false |
# Cute RichStyle - 512x512
Model trained in SD 1.5 with photos generated with Midjourney, created to generate people, animals/creatures...
You can also make objects... landscapes, etc, but maybe you need more tries:
- 30 steps - 7cfg
- euler a,ddim, dpm++sde...
- you can use different resolutions, you can generate interesting things
Characters rendered with the model:
.jpg)
.jpg)
**TOKEN**: cbzbb, cbzbb style, cbzbb style of _____ . You can include the token (it is not required), but it is better to include it. Many times the token between parentheses () works better.
possible positives: cute, little, baby, beautiful, fantasy art, deviantart, trending artstation, digital art, detailed, cute, realistic, humanoid, character, tiny, film still of "____" , cinematic shot, "__" environment, beautiful landscape of _____, cinematic portrait of ______, cute character as a "_"....
most important negatives (not mandatory but they help a lot): pencil draw, bad photo, bad draw
other possible negatives: cartoon, woman, man, person, people, character, super hero, iron man, baby, anime...
((When you generate the photo, there are times when it tries to create a person/character, that's why the negative character prompts etc...))
- landscape prompts better between ( ) or more parentheses, although it is not always necessary
- you can use other styles, removing the "cbzbb" token and adding pencil draw, lego style.. watercolor etc etc, it doesn't make the exact photo style with which I trained it but they look great too!!
- Most of the photos are daytime, to create nights it once worked with:
- positive: (dark), (black sky) (dark sky) etc etc
- negative: (blue day), (day light), (day) (sun) etc etc
- To increase quality: send the photo that you like the most to img2img (30-steps), 0.60-80, generate 4 photos, choose one or repeat (with less donoising to make it look more like the original, or more to make it change more ), resend via img2img (you can raise the ratio/aspect of the image a bit), lower the denoising to 0.40-0.50, generate 2/4 images, choose the one you like the most and have more detail, send to img2img uploading the photo scale (same ratio/aspect,) and at 0.15-0.30 50 steps, generate 1 photo, if you want you can continue rescaling it for more detail and more resolution
- Change person/character in the image: if you like the photo but want to change the character, send a photo to img2img, change the name of the character or person or animal and between 0.7-1 denoising
**Prompt examples:**
cbzbb style of a pennywise
michael jackson, cbzbb, detailed, fantasy,super cute, trending on artstation
cbzbb style of angry baby groot
cute panda reading a book, cbzbb style
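A rough 🧨 diffusers sketch, purely illustrative: the checkpoint filename below is an assumption, so use whatever file this repository actually ships (or simply load it in your favourite web UI):
```python
# Rough sketch only; requires a recent diffusers release with from_single_file support.
import torch
from diffusers import StableDiffusionPipeline

# Assumed filename -- replace with the checkpoint actually provided in this repo.
pipe = StableDiffusionPipeline.from_single_file(
    "Cute_RichStyle_1.5.safetensors", torch_dtype=torch.float16
).to("cuda")

prompt = "(cbzbb style) cute baby wizard, digital art, detailed, trending artstation"
negative = "pencil draw, bad photo, bad draw"
image = pipe(
    prompt, negative_prompt=negative, num_inference_steps=30, guidance_scale=7
).images[0]
image.save("cbzbb_sample.png")
```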
## ENJOY !!!!
The images you create are absolutely yours! But if you can share them with me on Twitter, Instagram, Reddit or anywhere else, I'd LOVE to SEE what you can do with the model!
- **Twitter:** @RichViip
- **Instagram**: richviip
- **Reddit:** Richviip
Thank you for the support and great help of ALL the people on Patricio's Discord, who were there at every moment of the model's creation, giving their opinions on more than 15 different versions of the model and making my head hurt less!
Social media of Patricio, follow him!!
- **Youtube:** patricio-fernandez
- **Twitter:** patriciofernanf
|
d2afeb57af71415aefea341df092acee
|
marccgrau/whisper-small-allSNR-v4
|
marccgrau
|
whisper
| 13 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['marccgrau/sbbdata_allSNR']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sbb-asr', 'generated_from_trainer']
| true | true | true | 1,659 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small German SBB all SNR - v4
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the SBB Dataset 05.01.2023 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0287
- Wer: 0.0222
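An illustrative inference sketch (not part of the original card; the audio file name is a placeholder and a recent 🤗 Transformers release is assumed):
```python
# Illustrative only -- forcing German decoding requires a recent transformers version.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="marccgrau/whisper-small-allSNR-v4",
    chunk_length_s=30,
)
result = asr(
    "announcement_de.wav",
    generate_kwargs={"language": "german", "task": "transcribe"},
)
print(result["text"])
```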
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.6894 | 0.71 | 100 | 0.4702 | 0.4661 |
| 0.1896 | 1.42 | 200 | 0.0322 | 0.0241 |
| 0.0297 | 2.13 | 300 | 0.0349 | 0.0228 |
| 0.0181 | 2.84 | 400 | 0.0250 | 0.0209 |
| 0.0154 | 3.55 | 500 | 0.0298 | 0.0209 |
| 0.0112 | 4.26 | 600 | 0.0327 | 0.0222 |
| 0.009 | 4.96 | 700 | 0.0287 | 0.0222 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
4cc12b3b46ebbac1511b5342a94b429f
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mrpc_96
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,100 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_mrpc_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5290
- Accuracy: 0.3162
- F1: 0.0
- Combined Score: 0.1581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:--------------:|
| 0.5507 | 1.0 | 15 | 0.5375 | 0.3162 | 0.0 | 0.1581 |
| 0.5355 | 2.0 | 30 | 0.5312 | 0.3162 | 0.0 | 0.1581 |
| 0.531 | 3.0 | 45 | 0.5296 | 0.3162 | 0.0 | 0.1581 |
| 0.5292 | 4.0 | 60 | 0.5290 | 0.3162 | 0.0 | 0.1581 |
| 0.5278 | 5.0 | 75 | 0.5290 | 0.3162 | 0.0 | 0.1581 |
| 0.5292 | 6.0 | 90 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5279 | 7.0 | 105 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5288 | 8.0 | 120 | 0.5291 | 0.3162 | 0.0 | 0.1581 |
| 0.5282 | 9.0 | 135 | 0.5291 | 0.3162 | 0.0 | 0.1581 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
8849dd36fb94fcef2cba73133e73f4ea
|
troesy/distilbert-base-cased-3epoch-LaTTrue-updatedAlligning
|
troesy
|
distilbert
| 14 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,291 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-3epoch-LaTTrue-updatedAlligning
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.1690 |
| No log | 2.0 | 348 | 0.1739 |
| 0.1311 | 3.0 | 522 | 0.1790 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
291d364c11241277f130251c42df6977
|
tbasic5/distilbert-base-uncased-finetuned-emotion
|
tbasic5
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8521 | 1.0 | 250 | 0.3164 | 0.907 | 0.9038 |
| 0.2549 | 2.0 | 500 | 0.2222 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
e21d272d865dbb8f01a95492c8a3628a
|
anas-awadalla/splinter-large-few-shot-k-32-finetuned-squad-seed-4
|
anas-awadalla
|
splinter
| 16 | 1 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,006 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# splinter-large-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [tau/splinter-large-qass](https://huggingface.co/tau/splinter-large-qass) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
9ca33b6488e8e4bd28e604b39abcd305
|
henryscheible/eval_masked_v4_sst2
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,010 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_masked_v4_sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3821
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
b627ff755b1026949f1f67fc2923ab04
|
ParhamAbdarzade/finetuning-sentiment-model-20000-samples-imdb-v2
|
ParhamAbdarzade
|
distilbert
| 12 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,416 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-20000-samples-imdb-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3694
- Accuracy: 0.924
- F1: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2795 | 1.0 | 2500 | 0.2224 | 0.9275 | 0.9263 |
| 0.1877 | 2.0 | 5000 | 0.3141 | 0.9275 | 0.9274 |
| 0.1045 | 3.0 | 7500 | 0.3694 | 0.924 | 0.9242 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f99e56d4192a7720714d2712fb11c42c
|
Lvxue/distilled-mt5-small-0.03-1
|
Lvxue
|
mt5
| 14 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en', 'ro']
|
['wmt16']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,037 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-0.03-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8063
- Bleu: 7.1839
- Gen Len: 45.5733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
219f6b75037f0d94daf14143cf265880
|
LeBenchmark/wav2vec2-FR-7K-large
|
LeBenchmark
|
wav2vec2
| 6 | 1,255 |
transformers
| 5 |
feature-extraction
| true | false | false |
apache-2.0
|
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['wav2vec2']
| false | true | true | 4,537 | false |
# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [Task Agnostic and Task Specific Self-Supervised Learning from Speech with LeBenchmark](https://openreview.net/pdf?id=TSvj5dmuSd)
## Model and data descriptions
We release several different models that can be found under our HuggingFace organization. Two different wav2vec2 architectures *Base* and *Large* are coupled with our small (1K), medium (3K), and large (7K) corpora. A larger one should come later. In short:
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
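For quick feature extraction directly with 🤗 Transformers, a minimal sketch could look like the one below (illustrative only; it assumes a local French audio file and that the repository provides a preprocessor config, otherwise instantiate the feature extractor manually):
```python
# Minimal sketch: extract frame-level wav2vec2 features with a frozen encoder.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "LeBenchmark/wav2vec2-FR-7K-large"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

waveform, sr = torchaudio.load("exemple_fr.wav")            # (channels, time)
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)

inputs = feature_extractor(
    waveform.numpy(), sampling_rate=16_000, return_tensors="pt"
)
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state       # (1, frames, 1024)
print(hidden_states.shape)
```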
## Referencing LeBenchmark
```
@article{Evain2021LeBenchmarkAR,
title={LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech},
author={Sol{\`e}ne Evain and Ha Nguyen and Hang Le and Marcely Zanon Boito and Salima Mdhaffar and Sina Alisamir and Ziyi Tong and N. Tomashenko and Marco Dinarelli and Titouan Parcollet and A. Allauzen and Y. Est{\`e}ve and B. Lecouteux and F. Portet and S. Rossato and F. Ringeval and D. Schwab and L. Besacier},
journal={ArXiv},
year={2021},
volume={abs/2104.11462}
}
```
|
7fd45fcd3092c0481715e0e6ec90de70
|
BSC-LT/roberta-large-bne-capitel-pos
|
BSC-LT
|
roberta
| 9 | 0 |
transformers
| 3 |
token-classification
| true | false | false |
apache-2.0
|
['es']
|
['bne', 'capitel']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['national library of spain', 'spanish', 'bne', 'capitel', 'pos']
| false | true | true | 1,667 | false |
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-capitel-pos
# Spanish RoBERTa-large trained on BNE finetuned for CAPITEL Part of Speech (POS) dataset
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
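An illustrative usage sketch with the 🤗 `pipeline` API (not part of the original card; given the notice above, you may prefer to point it at the PlanTL-GOB-ES copy):
```python
# Illustrative sketch; the example sentence is arbitrary.
from transformers import pipeline

pos_tagger = pipeline(
    "token-classification",
    model="BSC-LT/roberta-large-bne-capitel-pos",
    aggregation_strategy="simple",
)
print(pos_tagger("El Museo del Prado está en Madrid."))
```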
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 2).
## Evaluation and results
F1 Score: 0.9851 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
97e7812a12193f01658541ad6340aaa2
|
sudo-s/exper_batch_8_e8
|
sudo-s
|
vit
| 14 | 11 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'generated_from_trainer']
| true | true | true | 7,648 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper_batch_8_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4608
- Accuracy: 0.9052
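An illustrative inference sketch (not part of the original card; the image path is a placeholder):
```python
# Illustrative sketch only.
from transformers import pipeline

classifier = pipeline("image-classification", model="sudo-s/exper_batch_8_e8")
print(classifier("specimen.jpg", top_k=3))  # path to a local image (placeholder)
```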
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.2202 | 0.08 | 100 | 4.1245 | 0.1237 |
| 3.467 | 0.16 | 200 | 3.5622 | 0.2143 |
| 3.3469 | 0.23 | 300 | 3.1688 | 0.2675 |
| 2.8086 | 0.31 | 400 | 2.8965 | 0.3034 |
| 2.6291 | 0.39 | 500 | 2.5858 | 0.4025 |
| 2.2382 | 0.47 | 600 | 2.2908 | 0.4133 |
| 1.9259 | 0.55 | 700 | 2.2007 | 0.4676 |
| 1.8088 | 0.63 | 800 | 2.0419 | 0.4742 |
| 1.9462 | 0.7 | 900 | 1.6793 | 0.5578 |
| 1.5392 | 0.78 | 1000 | 1.5460 | 0.6079 |
| 1.561 | 0.86 | 1100 | 1.5793 | 0.5690 |
| 1.2135 | 0.94 | 1200 | 1.4663 | 0.5929 |
| 1.0725 | 1.02 | 1300 | 1.2974 | 0.6534 |
| 0.8696 | 1.1 | 1400 | 1.2406 | 0.6569 |
| 0.8758 | 1.17 | 1500 | 1.2127 | 0.6623 |
| 1.1737 | 1.25 | 1600 | 1.2243 | 0.6550 |
| 0.8242 | 1.33 | 1700 | 1.1371 | 0.6735 |
| 1.0141 | 1.41 | 1800 | 1.0536 | 0.7024 |
| 0.9855 | 1.49 | 1900 | 0.9885 | 0.7205 |
| 0.805 | 1.57 | 2000 | 0.9048 | 0.7479 |
| 0.7207 | 1.64 | 2100 | 0.8842 | 0.7490 |
| 0.7101 | 1.72 | 2200 | 0.8954 | 0.7436 |
| 0.5946 | 1.8 | 2300 | 0.9174 | 0.7386 |
| 0.6937 | 1.88 | 2400 | 0.7818 | 0.7760 |
| 0.5593 | 1.96 | 2500 | 0.7449 | 0.7934 |
| 0.4139 | 2.04 | 2600 | 0.7787 | 0.7830 |
| 0.2929 | 2.11 | 2700 | 0.7122 | 0.7945 |
| 0.4159 | 2.19 | 2800 | 0.7446 | 0.7907 |
| 0.4079 | 2.27 | 2900 | 0.7354 | 0.7938 |
| 0.516 | 2.35 | 3000 | 0.7499 | 0.8007 |
| 0.2728 | 2.43 | 3100 | 0.6851 | 0.8061 |
| 0.4159 | 2.51 | 3200 | 0.7258 | 0.7999 |
| 0.3396 | 2.58 | 3300 | 0.7455 | 0.7972 |
| 0.1918 | 2.66 | 3400 | 0.6793 | 0.8119 |
| 0.1228 | 2.74 | 3500 | 0.6696 | 0.8134 |
| 0.2671 | 2.82 | 3600 | 0.6306 | 0.8285 |
| 0.4986 | 2.9 | 3700 | 0.6111 | 0.8296 |
| 0.3699 | 2.98 | 3800 | 0.5600 | 0.8508 |
| 0.0444 | 3.05 | 3900 | 0.6021 | 0.8331 |
| 0.1489 | 3.13 | 4000 | 0.5599 | 0.8516 |
| 0.15 | 3.21 | 4100 | 0.6377 | 0.8365 |
| 0.2535 | 3.29 | 4200 | 0.5752 | 0.8543 |
| 0.2679 | 3.37 | 4300 | 0.5677 | 0.8608 |
| 0.0989 | 3.45 | 4400 | 0.6325 | 0.8396 |
| 0.0825 | 3.52 | 4500 | 0.5979 | 0.8524 |
| 0.0427 | 3.6 | 4600 | 0.5903 | 0.8516 |
| 0.1806 | 3.68 | 4700 | 0.5323 | 0.8628 |
| 0.2672 | 3.76 | 4800 | 0.5688 | 0.8604 |
| 0.2674 | 3.84 | 4900 | 0.5369 | 0.8635 |
| 0.2185 | 3.92 | 5000 | 0.4743 | 0.8820 |
| 0.2195 | 3.99 | 5100 | 0.5340 | 0.8709 |
| 0.0049 | 4.07 | 5200 | 0.5883 | 0.8608 |
| 0.0204 | 4.15 | 5300 | 0.6102 | 0.8539 |
| 0.0652 | 4.23 | 5400 | 0.5659 | 0.8670 |
| 0.028 | 4.31 | 5500 | 0.4916 | 0.8840 |
| 0.0423 | 4.39 | 5600 | 0.5706 | 0.8736 |
| 0.0087 | 4.46 | 5700 | 0.5653 | 0.8697 |
| 0.0964 | 4.54 | 5800 | 0.5423 | 0.8755 |
| 0.0841 | 4.62 | 5900 | 0.5160 | 0.8743 |
| 0.0945 | 4.7 | 6000 | 0.5532 | 0.8697 |
| 0.0311 | 4.78 | 6100 | 0.4947 | 0.8867 |
| 0.0423 | 4.86 | 6200 | 0.5063 | 0.8843 |
| 0.1348 | 4.93 | 6300 | 0.5619 | 0.8743 |
| 0.049 | 5.01 | 6400 | 0.5800 | 0.8732 |
| 0.0053 | 5.09 | 6500 | 0.5499 | 0.8770 |
| 0.0234 | 5.17 | 6600 | 0.5102 | 0.8874 |
| 0.0192 | 5.25 | 6700 | 0.5447 | 0.8836 |
| 0.0029 | 5.32 | 6800 | 0.4787 | 0.8936 |
| 0.0249 | 5.4 | 6900 | 0.5232 | 0.8870 |
| 0.0671 | 5.48 | 7000 | 0.4766 | 0.8975 |
| 0.0056 | 5.56 | 7100 | 0.5136 | 0.8894 |
| 0.003 | 5.64 | 7200 | 0.5085 | 0.8882 |
| 0.0015 | 5.72 | 7300 | 0.4832 | 0.8971 |
| 0.0014 | 5.79 | 7400 | 0.4648 | 0.8998 |
| 0.0065 | 5.87 | 7500 | 0.4739 | 0.8978 |
| 0.0011 | 5.95 | 7600 | 0.5349 | 0.8867 |
| 0.0021 | 6.03 | 7700 | 0.5460 | 0.8847 |
| 0.0012 | 6.11 | 7800 | 0.5309 | 0.8890 |
| 0.0011 | 6.19 | 7900 | 0.4852 | 0.8998 |
| 0.0093 | 6.26 | 8000 | 0.4751 | 0.8998 |
| 0.003 | 6.34 | 8100 | 0.4934 | 0.8963 |
| 0.0027 | 6.42 | 8200 | 0.4882 | 0.9029 |
| 0.0009 | 6.5 | 8300 | 0.4806 | 0.9021 |
| 0.0009 | 6.58 | 8400 | 0.4974 | 0.9029 |
| 0.0009 | 6.66 | 8500 | 0.4748 | 0.9075 |
| 0.0008 | 6.73 | 8600 | 0.4723 | 0.9094 |
| 0.001 | 6.81 | 8700 | 0.4692 | 0.9098 |
| 0.0007 | 6.89 | 8800 | 0.4726 | 0.9075 |
| 0.0011 | 6.97 | 8900 | 0.4686 | 0.9067 |
| 0.0006 | 7.05 | 9000 | 0.4653 | 0.9056 |
| 0.0006 | 7.13 | 9100 | 0.4755 | 0.9029 |
| 0.0007 | 7.2 | 9200 | 0.4633 | 0.9036 |
| 0.0067 | 7.28 | 9300 | 0.4611 | 0.9036 |
| 0.0007 | 7.36 | 9400 | 0.4608 | 0.9052 |
| 0.0007 | 7.44 | 9500 | 0.4623 | 0.9044 |
| 0.0005 | 7.52 | 9600 | 0.4621 | 0.9056 |
| 0.0005 | 7.6 | 9700 | 0.4615 | 0.9056 |
| 0.0005 | 7.67 | 9800 | 0.4612 | 0.9059 |
| 0.0005 | 7.75 | 9900 | 0.4626 | 0.9075 |
| 0.0004 | 7.83 | 10000 | 0.4626 | 0.9075 |
| 0.0005 | 7.91 | 10100 | 0.4626 | 0.9075 |
| 0.0006 | 7.99 | 10200 | 0.4626 | 0.9079 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.5.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
2c2f1c2c63062d9d0154cd08c5f9efe4
|
hfl/minirbt-h256
|
hfl
|
bert
| 6 | 276 |
transformers
| 4 |
fill-mask
| true | true | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert']
| false | true | true | 913 | false |
# Please use 'Bert' related functions to load this model!
## Chinese small pre-trained model MiniRBT
To further promote research and development in Chinese information processing, we release MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation toolkit TextBrewer, combined with Whole Word Masking and knowledge distillation.
This repository is developed based on: https://github.com/iflytek/MiniRBT
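For example, a minimal masked-LM sketch using the BERT classes (illustrative only; the sentence is arbitrary):
```python
# Load MiniRBT with the BERT classes, as noted above.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("hfl/minirbt-h256")
model = BertForMaskedLM.from_pretrained("hfl/minirbt-h256")

inputs = tokenizer("哈尔滨是[MASK]龙江的省会。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```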
You may also be interested in:
- Chinese LERT: https://github.com/ymcui/LERT
- Chinese PERT: https://github.com/ymcui/PERT
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/iflytek/HFL-Anthology
|
286393dbedba4303ef33c8bf6c4bba70
|
Payoto/t5-small-finetuned-xsum
|
Payoto
|
t5
| 7 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,279 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6962 | 1.0 | 3188 | 2.5273 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
a95e6447551e00682cdc03a4be54da95
|
junjuice0/VOXO
|
junjuice0
| null | 18 | 302 |
diffusers
| 23 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,050 | false |

# VOXO
Merged model by junjuice0.
This model was originally created just for me, so I wasn't chasing quality; please don't expect too much.
I may release a fine-tuned version of this model in the future, but only God knows whether I'll be willing to do it by then.
[JOIN US(日本語)](https://discord.gg/ai-art)
# VOXO-Vtuber (VOXO-v0-vtuber.safetensors)
This model can generate VTubers from Hololive and Nijisanji.
Some VTubers may come out better than others.
It is recommended to give the name a weight of about 1.2 (e.g. (ange katrina:1.2))
# RECOMMENDED
It is recommended to use TIs such as bad-images or bad-prompt for negative prompts. Also, quality prompts (e.g. masterpiece, high quality) are not required.
Using highres. fix may change the image considerably; use it according to your preference.
# HOW TO USE
Usage is the same as for other diffusion models, and it may be easier to read other people's explanations than mine here.
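Still, a minimal 🧨 diffusers sketch, assuming the repository ships diffusers-format weights (the prompt is purely illustrative; the standalone checkpoints can also be loaded directly in a web UI):
```python
# Minimal sketch; prompt and negative prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("junjuice0/VOXO", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a cute anime girl with silver hair, upper body, detailed"
negative = "lowres, bad anatomy"  # or your preferred negative TIs (e.g. bad-prompt)
image = pipe(prompt, negative_prompt=negative).images[0]
image.save("voxo_sample.png")
```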
|
33c1689e3a2d3cf66ef7831b1200fde3
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
|
anas-awadalla
|
bert
| 12 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,056 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
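An illustrative extractive-QA sketch (not part of the original card; question and context are arbitrary):
```python
# Illustrative sketch only.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42",
)
result = qa(
    question="How tall is the Eiffel Tower?",
    context="The Eiffel Tower, completed in 1889, is 330 metres tall.",
)
print(result["answer"], result["score"])
```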
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
- exact_match: 12.93282876064333
- f1: 21.98821604201723
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
31a1a3a2925df56d750f58f328af0d02
|
gokuls/bert-base-emotion-intent
|
gokuls
|
bert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,492 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-emotion-intent
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1952
- Accuracy: 0.9385
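An illustrative usage sketch (not part of the original card):
```python
# Illustrative sketch only.
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/bert-base-emotion-intent")
print(classifier("I can't wait to see you this weekend!"))
```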
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4058 | 1.0 | 1000 | 0.2421 | 0.9265 |
| 0.1541 | 2.0 | 2000 | 0.1952 | 0.9385 |
| 0.1279 | 3.0 | 3000 | 0.1807 | 0.9345 |
| 0.1069 | 4.0 | 4000 | 0.2292 | 0.9365 |
| 0.081 | 5.0 | 5000 | 0.3315 | 0.936 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
928d4f7420b6c8beac2a5814b68d02bb
|
henryscheible/stsb_bert-base-uncased_144_v2
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,064 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stsb_bert-base-uncased_144_v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
- Pearson: 0.8900
- Spearmanr: 0.8864
- Combined Score: 0.8882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
097ef03c85f3564399f035680816c609
|
nitrosocke/Arcane-Diffusion
|
nitrosocke
| null | 25 | 25,268 |
diffusers
| 584 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 15 | 4 | 6 | 5 | 14 | 10 | 4 |
['stable-diffusion', 'text-to-image']
| false | true | true | 3,318 | false |
# Arcane Diffusion
This is the fine-tuned Stable Diffusion model trained on images from the TV Show Arcane.
Use the tokens **_arcane style_** in your prompts for the effect.
**If you enjoy my work, please consider supporting me**
[](https://patreon.com/user?u=79196446)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
#!pip install diffusers transformers scipy torch
from diffusers import StableDiffusionPipeline
import torch
model_id = "nitrosocke/Arcane-Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "arcane style, a magical princess with golden hair"
image = pipe(prompt).images[0]
image.save("./magical_princess.png")
```
# Gradio & Colab
We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run fine-tuned Stable Diffusion models:
[](https://huggingface.co/spaces/anzorq/finetuned_diffusion)
[](https://colab.research.google.com/drive/1j5YvfMZoGdDGdj3O3xRU1m4ujKYsElZO?usp=sharing)

### Sample images from v3:


### Sample images from the model:

### Sample images used for training:

**Version 3** (arcane-diffusion-v3): This version uses the new _train-text-encoder_ setting and improves the quality and editability of the model immensely. Trained on 95 images from the show in 8000 steps.
**Version 2** (arcane-diffusion-v2): This version uses the diffusers-based DreamBooth training, and the prior-preservation loss is way more effective. The diffusers weights were then converted with a script to a ckpt file in order to work with AUTOMATIC1111's repo.
Training was done with 5k steps for a direct comparison to v1 and results show that it needs more steps for a more prominent result. Version 3 will be tested with 11k steps.
**Version 1** (arcane-diffusion-5k): This model was trained using _Unfrozen Model Textual Inversion_ utilizing the _Training with prior-preservation loss_ methods. There is still a slight shift towards the style, while not using the arcane token.
|
f3b6288b00a4072858b7a121efd4ade9
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc
|
deepdoctection
| null | 5 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null |
['Pubtabnet']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Tensorflow']
| false | true | true | 2,951 | false |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check: [Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect rows and columns of tables. As row and column bounding boxes are not a priori part of the annotations, they are
calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## How this model was trained.
To recreate the training run on the **deep**doctection framework, run:
```python
import os
from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn
pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM":"row_col"})
pubtabnet.dataflow.categories.filter_categories(categories=["ROW","COLUMN"])
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/rows/conf_frcnn_rows.yaml")
path_weights = ""
dataset_train = pubtabnet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"]
build_train_config=["max_datapoints=500000","rows_and_cols=True"]
dataset_val = pubtabnet
build_val_config = ["max_datapoints=2000","rows_and_cols=True"]
coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]])
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
## How to fine-tune this model
To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
df8f1332bfc042531344035c4496f52c
|
theojolliffe/bart-paraphrase-v0.75-e1
|
theojolliffe
|
bart
| 12 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,456 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v0.75-e1
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1865
- Rouge1: 71.3427
- Rouge2: 66.0011
- Rougel: 69.8855
- Rougelsum: 69.9796
- Gen Len: 19.6036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1373 | 1.0 | 2660 | 0.1865 | 71.3427 | 66.0011 | 69.8855 | 69.9796 | 19.6036 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
2a1cd90d97ca771277c71db42a402f43
|
Oesnim/chaper01_2
|
Oesnim
| null | 2 | 0 | null | 0 | null | false | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 604 | false |
text="""Dear Amazon, last week I ordered an Optimus Prime action figure from your online store in Germany. Unfortunately, when I opened the package, I discovered to my horror that I had been sent an action figure of Megatron instead! As a lifelong enemy of the Deceptions, I hope yoou can understand my dilemma. To resolve the issue, I demand an exchange of Megatron for the Optimus Prime figure I ordered. Enclosed are copies of my records concerning this purchase. I expect to hear from you soon. Sincerely, Bumblebee."""
from transformers import pipeline
classifier = pipeline("text-classification")
|
46e9203fed693746aacb3e07f0fa6d87
|
Unbabel/wmt22-comet-da
|
Unbabel
| null | 5 | 0 | null | 0 |
translation
| false | false | false |
apache-2.0
|
['multilingual', 'af', 'am', 'ar', 'as', 'az', 'be', 'bg', 'bn', 'br', 'bs', 'ca', 'cs', 'cy', 'da', 'de', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy', 'ga', 'gd', 'gl', 'gu', 'ha', 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'jv', 'ka', 'kk', 'km', 'kn', 'ko', 'ku', 'ky', 'la', 'lo', 'lt', 'lv', 'mg', 'mk', 'ml', 'mn', 'mr', 'ms', 'my', 'ne', 'nl', 'no', 'om', 'or', 'pa', 'pl', 'ps', 'pt', 'ro', 'ru', 'sa', 'sd', 'si', 'sk', 'sl', 'so', 'sq', 'sr', 'su', 'sv', 'sw', 'ta', 'te', 'th', 'tl', 'tr', 'ug', 'uk', 'ur', 'uz', 'vi', 'xh', 'yi', 'zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,791 | false |
This is a [COMET](https://github.com/Unbabel/COMET) evaluation model: It receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference.
# Paper
[COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52) (Rei et al., WMT 2022)
# License
Apache-2.0
# Usage (unbabel-comet)
Using this model requires unbabel-comet to be installed:
```bash
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
```
Then you can use it through comet CLI:
```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model Unbabel/wmt22-comet-da
```
Or using Python:
```python
from comet import download_model, load_from_checkpoint
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)
data = [
{
"src": "Dem Feuer konnte Einhalt geboten werden",
"mt": "The fire could be stopped",
"ref": "They were able to control the fire."
},
{
"src": "Schulen und Kindergärten wurden eröffnet.",
"mt": "Schools and kindergartens were open",
"ref": "Schools and kindergartens opened"
}
]
model_output = model.predict(data, batch_size=8, gpus=1)
print (model_output)
```
# Intended uses
Our model is intended to be used for **MT evaluation**.
Given a triplet of (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of XLM-R, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
|
a1f517c7d3dfee8997ef4548e12a2af9
|
DrishtiSharma/whisper-large-v2-assamese-700-steps
|
DrishtiSharma
|
whisper
| 15 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Assamese - Drishti Sharma
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2452
- Wer: 21.4582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0109 | 4.32 | 700 | 0.2452 | 21.4582 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
c4a0e3042cc62e9d84e515b3811edd62
|
Geotrend/bert-base-en-el-cased
|
Geotrend
|
bert
| 8 | 4 |
transformers
| 0 |
fill-mask
| true | true | true |
apache-2.0
|
['multilingual']
|
['wikipedia']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,292 | false |
# bert-base-en-el-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-el-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-el-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
3b8387cb8a02b24ff8488881e9b75bb1
|
armandnlp/distilbert-base-uncased-finetuned-emotion
|
armandnlp
|
distilbert
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 3 | 1 | 2 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9275
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8643 | 1.0 | 250 | 0.3324 | 0.9065 | 0.9025 |
| 0.2589 | 2.0 | 500 | 0.2237 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
0ababfcc65072d5754c0d6703a358af0
|
AigizK/bashkir-whisper-small
|
AigizK
|
whisper
| 17 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ba']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard']
| true | true | true | 2,192 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Bashkir
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ba dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2589
- Wer: 15.0723
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 30000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1637 | 1.01 | 2000 | 0.2555 | 26.4682 |
| 0.1375 | 2.01 | 4000 | 0.2223 | 21.5394 |
| 0.0851 | 3.02 | 6000 | 0.2086 | 19.6725 |
| 0.0573 | 4.02 | 8000 | 0.2178 | 18.4280 |
| 0.036 | 5.03 | 10000 | 0.2312 | 17.8248 |
| 0.0238 | 6.04 | 12000 | 0.2621 | 17.4096 |
| 0.0733 | 7.04 | 14000 | 0.2120 | 16.5656 |
| 0.0111 | 8.05 | 16000 | 0.2682 | 16.2291 |
| 0.0155 | 9.05 | 18000 | 0.2677 | 15.9242 |
| 0.0041 | 10.06 | 20000 | 0.3178 | 15.9534 |
| 0.0023 | 12.01 | 22000 | 0.3218 | 16.0536 |
| 0.0621 | 13.01 | 24000 | 0.2313 | 15.6169 |
| 0.0022 | 14.02 | 26000 | 0.2887 | 15.1083 |
| 0.0199 | 15.02 | 28000 | 0.2553 | 15.1848 |
| 0.0083 | 16.03 | 30000 | 0.2589 | 15.0723 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
8196e986536d270a92bdf26f2605da1b
|
alphahg/kobart-base-v2-finetuned-paper
|
alphahg
|
bart
| 9 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null |
['aihub_paper_summarization']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,658 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart-base-v2-finetuned-paper
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the aihub_paper_summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2966
- Rouge1: 6.2883
- Rouge2: 1.7038
- Rougel: 6.2556
- Rougelsum: 6.2618
- Gen Len: 20.0
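An illustrative summarization sketch (not part of the original card; the input sentence is arbitrary):
```python
# Illustrative sketch only.
from transformers import pipeline

summarizer = pipeline("summarization", model="alphahg/kobart-base-v2-finetuned-paper")
text = "본 논문에서는 한국어 논문 요약을 위한 사전학습 모델의 미세조정 방법을 제안한다."
print(summarizer(text, max_length=20)[0]["summary_text"])
```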
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2215 | 1.0 | 8831 | 1.3293 | 6.2425 | 1.7317 | 6.2246 | 6.2247 | 20.0 |
| 1.122 | 2.0 | 17662 | 1.3056 | 6.2298 | 1.7005 | 6.2042 | 6.2109 | 20.0 |
| 1.0914 | 3.0 | 26493 | 1.2966 | 6.2883 | 1.7038 | 6.2556 | 6.2618 | 20.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
d3f6c445a5026590d473f4ac581cbdd6
|
yanaiela/roberta-base-epoch_33
|
yanaiela
|
roberta
| 9 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_33']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 33
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_33.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline

# Load the epoch-33 checkpoint described by this card
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_33', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
16b3c32902ad44b66297ebf781ae4216
|
henryscheible/eval_v3_mrpc
|
henryscheible
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,136 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eval_v3_mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6564
- eval_accuracy: 0.6649
- eval_f1: 0.7987
- eval_combined_score: 0.7318
- eval_runtime: 5.045
- eval_samples_per_second: 341.921
- eval_steps_per_second: 42.815
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
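Since the checkpoint was fine-tuned on GLUE MRPC, it can score sentence pairs for paraphrase equivalence. A hedged sketch; the label mapping (index 1 = "equivalent") follows the usual MRPC convention and is an assumption here:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "henryscheible/eval_v3_mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("The cat sat on the mat.", "A cat was sitting on the mat.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs[0, 1].item())  # assumed probability that the two sentences are equivalent
```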
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
e211e08689ba1202d6e058902648fa89
|
nestoralvaro/mt5-small-finetuned-google_small_for_summarization_TF
|
nestoralvaro
|
mt5
| 8 | 1 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,676 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nestoralvaro/mt5-small-finetuned-google_small_for_summarization_TF
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3123
- Validation Loss: 2.1399
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
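Because this is a Keras/TensorFlow fine-tune, a usage sketch would load the TF weights directly; the input text is a placeholder and is not from the original card:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

name = "nestoralvaro/mt5-small-finetuned-google_small_for_summarization_TF"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("Text to summarize goes here.", return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=48)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```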
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 266360, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2631 | 2.3702 | 0 |
| 2.6166 | 2.2422 | 1 |
| 2.4974 | 2.2074 | 2 |
| 2.4288 | 2.1843 | 3 |
| 2.3837 | 2.1613 | 4 |
| 2.3503 | 2.1521 | 5 |
| 2.3263 | 2.1407 | 6 |
| 2.3123 | 2.1399 | 7 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
6265d0c051614fde9367c8937150f2c9
|
glasses/resnet50
|
glasses
| null | 4 | 24 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 1,588 | false |
# resnet50
Implementation of ResNet proposed in [Deep Residual Learning for Image
Recognition](https://arxiv.org/abs/1512.03385)
``` python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()
# Variants (d) proposed in "Bag of Tricks for Image Classification with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()
# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```
Examples:
``` python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000 )
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change the shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features; this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
#[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
|
899632756736775a1c4a8a2868c6ba03
|
Kushala/wav2vec2-base-timit-demo-google-colab
|
Kushala
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5195
- Wer: 0.3386
## Model description
More information needed
## Intended uses & limitations
More information needed
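A decoding sketch following the usual Wav2Vec2 CTC recipe (not part of the original card); the audio path is a placeholder and the clip is assumed to be 16 kHz mono:
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

name = "Kushala/wav2vec2-base-timit-demo-google-colab"
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

speech, _ = torchaudio.load("sample_16khz.wav")  # placeholder path, expected to be 16 kHz mono
inputs = processor(speech.squeeze(0), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```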
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5345 | 1.0 | 500 | 2.1466 | 1.0010 |
| 0.949 | 2.01 | 1000 | 0.5687 | 0.5492 |
| 0.445 | 3.01 | 1500 | 0.4562 | 0.4717 |
| 0.2998 | 4.02 | 2000 | 0.4154 | 0.4401 |
| 0.2242 | 5.02 | 2500 | 0.3887 | 0.4034 |
| 0.1834 | 6.02 | 3000 | 0.4262 | 0.3905 |
| 0.1573 | 7.03 | 3500 | 0.4200 | 0.3927 |
| 0.1431 | 8.03 | 4000 | 0.4194 | 0.3869 |
| 0.1205 | 9.04 | 4500 | 0.4600 | 0.3912 |
| 0.1082 | 10.04 | 5000 | 0.4613 | 0.3776 |
| 0.0984 | 11.04 | 5500 | 0.4926 | 0.3860 |
| 0.0872 | 12.05 | 6000 | 0.4869 | 0.3780 |
| 0.0826 | 13.05 | 6500 | 0.5033 | 0.3690 |
| 0.0717 | 14.06 | 7000 | 0.4827 | 0.3791 |
| 0.0658 | 15.06 | 7500 | 0.4816 | 0.3650 |
| 0.0579 | 16.06 | 8000 | 0.5433 | 0.3689 |
| 0.056 | 17.07 | 8500 | 0.5513 | 0.3672 |
| 0.0579 | 18.07 | 9000 | 0.4813 | 0.3632 |
| 0.0461 | 19.08 | 9500 | 0.4846 | 0.3501 |
| 0.0431 | 20.08 | 10000 | 0.5449 | 0.3637 |
| 0.043 | 21.08 | 10500 | 0.4906 | 0.3538 |
| 0.0334 | 22.09 | 11000 | 0.5081 | 0.3477 |
| 0.0322 | 23.09 | 11500 | 0.5184 | 0.3439 |
| 0.0316 | 24.1 | 12000 | 0.5412 | 0.3450 |
| 0.0262 | 25.1 | 12500 | 0.5113 | 0.3425 |
| 0.0267 | 26.1 | 13000 | 0.4888 | 0.3414 |
| 0.0258 | 27.11 | 13500 | 0.5071 | 0.3371 |
| 0.0226 | 28.11 | 14000 | 0.5311 | 0.3380 |
| 0.0233 | 29.12 | 14500 | 0.5195 | 0.3386 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
67bd55a3b96510b4f5fd13686a88e09e
|
paola-md/distilroberta-recipes
|
paola-md
|
roberta
| 6 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,701 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr2e05-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2784
- Rmse: 0.5277
- Mse: 0.2784
- Mae: 0.4161
## Model description
More information needed
## Intended uses & limitations
More information needed
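The RMSE/MSE/MAE metrics below suggest a single-output regression head. A hedged sketch, assuming the checkpoint was saved with `num_labels=1` and that the raw logit is the predicted score; the example recipe title is illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "paola-md/distilroberta-recipes"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Quick weeknight garlic butter pasta", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # interpreted as the predicted score
print(score)
```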
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2774 | 1.0 | 623 | 0.2749 | 0.5243 | 0.2749 | 0.4184 |
| 0.2741 | 2.0 | 1246 | 0.2741 | 0.5235 | 0.2741 | 0.4173 |
| 0.2724 | 3.0 | 1869 | 0.2855 | 0.5343 | 0.2855 | 0.4428 |
| 0.2713 | 4.0 | 2492 | 0.2758 | 0.5252 | 0.2758 | 0.4013 |
| 0.2695 | 5.0 | 3115 | 0.2777 | 0.5270 | 0.2777 | 0.4245 |
| 0.2674 | 6.0 | 3738 | 0.2784 | 0.5277 | 0.2784 | 0.4161 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
d34066b33b2121d0049b943818be7b54
|
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
|
AlexKay
|
xlm-roberta
| 8 | 960 |
transformers
| 15 |
question-answering
| true | false | false |
apache-2.0
|
['en', 'ru', 'multilingual']
| null | null | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
[]
| false | true | true | 416 | false |
# XLM-RoBERTa large model whole word masking finetuned on SQuAD
Pretrained model using a masked language modeling (MLM) objective.
Fine-tuned on English and Russian QA datasets.
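A usage sketch with the `question-answering` pipeline (not part of the original card); the question/context pair is illustrative only:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")
result = qa(question="Где живут пингвины?", context="Пингвины живут в Антарктиде и на юге Южной Америки.")
print(result["answer"], result["score"])
```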
## Used QA Datasets
SQuAD + SberQuAD
The [SberQuAD original paper](https://arxiv.org/pdf/1912.09723.pdf) is available here and is recommended reading.
## Evaluation results
The results obtained are the following (SberQuAD):
```
f1 = 84.3
exact_match = 65.3
```
|
f9fb8a63819149d6d67987762d471d7c
|
edmundhui/mental_health_trainer
|
edmundhui
|
bert
| 12 | 38 |
transformers
| 2 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['generated_from_trainer']
| true | true | true | 1,005 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mental_health_trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [reddit_mental_health_posts](https://huggingface.co/datasets/solomonk/reddit_mental_health_posts)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
b8683b6aeabc47edb8f243f772426e08
|
JonatanGk/roberta-base-bne-finetuned-sqac
|
JonatanGk
|
roberta
| 13 | 8 |
transformers
| 1 |
question-answering
| true | false | false |
apache-2.0
| null |
['sqac']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,284 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9924 | 1.0 | 1196 | 0.8670 |
| 0.474 | 2.0 | 2392 | 0.8923 |
| 0.1637 | 3.0 | 3588 | 1.2066 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
93cedac1a13bb61841d6bd658a767a88
|
Devarshi/Brain_Tumor_Class_swin
|
Devarshi
|
swin
| 11 | 17 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,674 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Brain_Tumor_Class_swin
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0220
- Accuracy: 0.9936
- F1: 0.9936
- Recall: 0.9936
- Precision: 0.9936
## Model description
More information needed
## Intended uses & limitations
More information needed
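A hedged inference sketch (not part of the auto-generated card); the image path is a placeholder, and the returned labels are whatever classes the `imagefolder` dataset defined:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Devarshi/Brain_Tumor_Class_swin")
print(classifier("mri_scan.jpg"))  # list of {label, score} dicts for the placeholder image
```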
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1248 | 1.0 | 220 | 0.0610 | 0.9767 | 0.9767 | 0.9767 | 0.9767 |
| 0.0887 | 2.0 | 440 | 0.0300 | 0.9920 | 0.9920 | 0.9920 | 0.9920 |
| 0.0449 | 3.0 | 660 | 0.0220 | 0.9936 | 0.9936 | 0.9936 | 0.9936 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
926c89940717ad6e35f604d56c2f2654
|
Salesforce/blip-image-captioning-large
|
Salesforce
|
blip
| 9 | 7,776 |
transformers
| 15 |
image-to-text
| true | false | false |
bsd-3-clause
| null | null | null | 2 | 1 | 1 | 0 | 3 | 1 | 2 |
['image-captioning']
| false | true | true | 5,407 | false |
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone).
|  |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|
## TL;DR
Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:
*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*
## Usage
You can use this model for conditional and un-conditional image captioning
### Using the Pytorch model
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog
# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>
## BibTex and citation info
```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
doi = {10.48550/ARXIV.2201.12086},
url = {https://arxiv.org/abs/2201.12086},
author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
920b716dd70fdc899586fd3ef4f499b0
|
muhtasham/tiny-mlm-glue-stsb
|
muhtasham
|
bert
| 12 | 2 |
transformers
| 1 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,648 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-stsb
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7548 | 0.7 | 500 | 3.9253 |
| 4.4535 | 1.39 | 1000 | 3.9069 |
| 4.364 | 2.09 | 1500 | 3.8392 |
| 4.1534 | 2.78 | 2000 | 3.7830 |
| 4.2317 | 3.48 | 2500 | 3.7450 |
| 4.1233 | 4.17 | 3000 | 3.7755 |
| 4.0383 | 4.87 | 3500 | 3.7060 |
| 4.0459 | 5.56 | 4000 | 3.8708 |
| 3.9321 | 6.26 | 4500 | 3.8573 |
| 4.0206 | 6.95 | 5000 | 3.7830 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
034b7610927b65615ea15a95552b27c2
|
zp2222/ddpm-butterflies-128
|
zp2222
| null | 12 | 2 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,228 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
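In the absence of an official snippet, one possible way to sample from this checkpoint with 🤗 Diffusers is sketched below (an assumption, not the authors' code):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("zp2222/ddpm-butterflies-128")
image = pipeline(batch_size=1).images[0]  # a single generated 128x128 butterfly image
image.save("butterfly.png")
```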
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/zp2222/ddpm-butterflies-128/tensorboard?#scalars)
|
84fe01746b37603ea7665231ae2a8734
|
Helsinki-NLP/opus-mt-srn-fr
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-srn-fr
* source languages: srn
* target languages: fr
* OPUS readme: [srn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.fr | 28.9 | 0.462 |
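A minimal usage sketch with the standard `transformers` Marian classes (not part of the original OPUS-MT card); the source sentence is illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-srn-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Mi e taki Sranantongo."], return_tensors="pt")  # illustrative Sranan Tongo input
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```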
|
12b69b3f8923126faac9b0a9f21d6643
|
testimonial/wav2vec2-base-timit-demo-colab
|
testimonial
|
wav2vec2
| 12 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,641 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4688
- Wer: 0.3417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4156 | 4.0 | 500 | 1.2721 | 0.8882 |
| 0.6145 | 8.0 | 1000 | 0.4712 | 0.4510 |
| 0.229 | 12.0 | 1500 | 0.4459 | 0.3847 |
| 0.1312 | 16.0 | 2000 | 0.4739 | 0.3786 |
| 0.0897 | 20.0 | 2500 | 0.4483 | 0.3562 |
| 0.0608 | 24.0 | 3000 | 0.4450 | 0.3502 |
| 0.0456 | 28.0 | 3500 | 0.4688 | 0.3417 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
b7fdeb8a2af19dec9d60e379a6d4073c
|
sweaterr/xlm-roberta-base-finetuned-panx-de
|
sweaterr
|
xlm-roberta
| 12 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
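A tagging sketch with the `token-classification` pipeline (not part of the auto-generated card); the German sentence is illustrative and the entity labels follow the PAN-X/WikiANN scheme:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sweaterr/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Berlin."))
```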
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
d5d78cd542c857a21d44691a07b303ee
|
Intel/distilbart-cnn-12-6-int8-dynamic
|
Intel
|
bart
| 9 | 17 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['int8', 'Intel® Neural Compressor', 'neural-compressor', 'PostTrainingDynamic']
| false | true | true | 1,614 | false |
# INT8 DistilBart finetuned on CNN DailyMail
### Post-training dynamic quantization
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6).
The linear modules below (21/133) are kept in fp32 so that the relative accuracy loss stays under 1%:
**'model.decoder.layers.2.fc2'**, **'model.encoder.layers.11.fc2'**, **'model.decoder.layers.1.fc2'**, **'model.decoder.layers.0.fc2'**, **'model.decoder.layers.4.fc1'**, **'model.decoder.layers.3.fc2'**, **'model.encoder.layers.8.fc2'**, **'model.decoder.layers.3.fc1'**, **'model.encoder.layers.11.fc1'**, **'model.encoder.layers.0.fc2'**, **'model.encoder.layers.3.fc1'**, **'model.encoder.layers.10.fc2'**, **'model.decoder.layers.5.fc1'**, **'model.encoder.layers.1.fc2'**, **'model.encoder.layers.3.fc2'**, **'lm_head'**, **'model.encoder.layers.7.fc2'**, **'model.decoder.layers.0.fc1'**, **'model.encoder.layers.4.fc1'**, **'model.encoder.layers.10.fc1'**, **'model.encoder.layers.6.fc1'**
### Evaluation result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-rougeLsum)** | 41.4707 | 41.8117 |
| **Model size** |722M|1249M|
### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSeq2SeqLM
int8_model = IncQuantizedModelForSeq2SeqLM.from_pretrained(
'Intel/distilbart-cnn-12-6-int8-dynamic',
)
```
|
6fbc7a2ce7b25ba38a2855412e3c6792
|
harmonai/unlocked-250k
|
harmonai
| null | 6 | 389 |
diffusers
| 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio-generation']
| false | true | true | 1,313 | false |
[Dance Diffusion](https://github.com/Harmonai-org/sample-generator) is now available in 🧨 Diffusers.
## FP32
```python
# !pip install diffusers[torch] accelerate scipy
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write
model_id = "harmonai/unlocked-250k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")
audios = pipe(audio_length_in_s=4.0).audios
# To save locally
for i, audio in enumerate(audios):
write(f"test_{i}.wav", pipe.unet.sample_rate, audio.transpose())
# To display in Google Colab
import IPython.display as ipd
for audio in audios:
display(ipd.Audio(audio, rate=pipe.unet.sample_rate))
```
## FP16
Faster at a small loss of quality
```python
# !pip install diffusers[torch] accelerate scipy
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write
import torch
model_id = "harmonai/unlocked-250k"
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
audios = pipe(audio_length_in_s=4.0).audios
# To save locally
for i, audio in enumerate(audios):
    write(f"{i}.wav", pipe.unet.sample_rate, audio.transpose())
# To display in Google Colab
import IPython.display as ipd
for audio in audios:
display(ipd.Audio(audio, rate=pipe.unet.sample_rate))
```
|
672038403d447492b596796a72a83cd0
|
theodotus/stt_uk_squeezeformer_ctc_ml
|
theodotus
| null | 3 | 58 |
nemo
| 1 |
automatic-speech-recognition
| false | false | false |
bsd-3-clause
|
['uk']
|
['mozilla-foundation/common_voice_10_0', 'Yehor/voa-uk-transcriptions']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition']
| true | true | true | 404 | false |
# Squeezeformer-CTC ML (uk-UA)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
|
ef2dd7b8c25c1879555bb002c63f4395
|
tftransformers/albert-xlarge-v1
|
tftransformers
| null | 6 | 3 | null | 0 | null | false | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 6,478 | false |
# ALBERT XLarge v1
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers.
This is the first version (v1) of the xlarge model. Version 2 differs from version 1 due to different dropout rates, additional training data, and longer training, and has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
In tf_transformers
```python
from tf_transformers.models import AlbertModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('albert-xlarge-v1')
model = AlbertModel.from_pretrained("albert-xlarge-v1")
text = "Replace me by any text you'd like."
inputs_tf = {}
inputs = tokenizer(text, return_tensors='tf')
inputs_tf["input_ids"] = inputs["input_ids"]
inputs_tf["input_type_ids"] = inputs["token_type_ids"]
inputs_tf["input_mask"] = inputs["attention_mask"]
outputs_tf = model(inputs_tf)
```
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 |
|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 |
|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
76ddf7106319e47e049981978c97161a
|
LiYuan/amazon-review-sentiment-analysis
|
LiYuan
|
bert
| 36 | 59,896 |
transformers
| 3 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,340 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment?text=I+like+you.+I+love+you) on an [Amazon US Customer Reviews Dataset](https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset). The code for the fine-tuning process can be found
[here](https://github.com/vanderbilt-data-science/bigdata/blob/main/06-fine-tune-BERT-on-our-dataset.ipynb). This model is uncased: it does
not make a difference between english and English.
It achieves the following results on the evaluation set:
- Loss: 0.5202942490577698
- Accuracy: 0.8
## Model description
This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.
We replaced its classification head and fine-tuned the model on 17,280 rows of our training set while validating it on a 4,320-row dev set. Finally, we evaluated model performance on a held-out test set of 2,400 rows.
## Intended uses & limitations
Bert-base is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification, or question answering. This fine-tuned version of BERT-base is used to predict review rating star given the review.
The limitations are this trained model is focusing on reviews and products on Amazon. If you apply this model to other domains, it may perform poorly.
## How to use
You can use this model directly by downloading the trained weights and configurations like the below code snippet:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-review-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-review-sentiment-analysis")
```
## Training and evaluation data
Download all the raw [dataset](https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset) from the Kaggle website.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.555400 | 1.0 | 1080 | 0.520294 | 0.800000 |
| 0.424300 | 2.0 | 1080 | 0.549649 | 0.798380 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
b2a68e085fa7f02919c93024e05f2bbf
|
muhtasham/small-mlm-wikitext-target-rotten_tomatoes
|
muhtasham
|
bert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,583 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-wikitext-target-rotten_tomatoes
This model is a fine-tuned version of [muhtasham/small-mlm-wikitext](https://huggingface.co/muhtasham/small-mlm-wikitext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3909
- Accuracy: 0.8021
- F1: 0.8017
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4528 | 1.87 | 500 | 0.4296 | 0.8030 | 0.8028 |
| 0.2265 | 3.75 | 1000 | 0.5558 | 0.8096 | 0.8096 |
| 0.1111 | 5.62 | 1500 | 0.9042 | 0.8039 | 0.8039 |
| 0.0584 | 7.49 | 2000 | 1.1252 | 0.8058 | 0.8058 |
| 0.0405 | 9.36 | 2500 | 1.3909 | 0.8021 | 0.8017 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
a6bf3a7d5c92137953f5c32066a8b16b
|
stevhliu/my_awesome_asr_mind_model
|
stevhliu
|
wav2vec2
| 90 | 76 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,406 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_asr_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8626
- Wer: 1.0299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7266 | 499.8 | 1000 | 5.8888 | 0.9403 |
| 0.166 | 999.8 | 2000 | 6.8626 | 1.0299 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
d0d407698c96d5e076f79826ea839c49
|
conan1024hao/cjkbert-small
|
conan1024hao
|
bert
| 10 | 4 |
transformers
| 2 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ja', 'zh', 'ko']
|
['wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 984 | false |
### Model description
- This model was trained on **ZH, JA, KO**'s Wikipedia (5 epochs).
### How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("conan1024hao/cjkbert-small")
model = AutoModelForMaskedLM.from_pretrained("conan1024hao/cjkbert-small")
```
- Before you fine-tune downstream tasks, you don't need any text segmentation.
- (Though you may obtain better results if you apply morphological analysis to the data before fine-tuning.)
### Morphological analysis tools
- ZH: For Chinese, we use [LTP](https://github.com/HIT-SCIR/ltp).
- JA: For Japanese, we use [Juman++](https://github.com/ku-nlp/jumanpp).
- KO: For Korean, we use [KoNLPy](https://github.com/konlpy/konlpy)(Kkma class).
### Tokenization
- We use character-based tokenization with **whole-word-masking** strategy.
### Model size
- vocab_size: 15015
- num_hidden_layers: 4
- hidden_size: 512
- num_attention_heads: 8
- param_num: 25M
|
a28ce9d1fb047daf9e9f3cdd6650ed74
|
mskolesnikov/ddpm-butterflies-128
|
mskolesnikov
| null | 13 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,234 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mskolesnikov/ddpm-butterflies-128/tensorboard?#scalars)
|
ac748d615561a94dc689a0407fea6e75
|
rdruce/ddpm-flowers-128-2
|
rdruce
| null | 12 | 1 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/cats']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,198 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-flowers-128-2
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/cats` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-flowers-128-2/tensorboard?#scalars)
|
f2ee5ad9d4b500e3e0ac6a2efb5c4063
|
Khanh/xlm-roberta-base-finetuned-squad
|
Khanh
|
xlm-roberta
| 13 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,205 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7665 | 1.0 | 2295 | 0.5231 |
| 0.5236 | 2.0 | 4590 | 0.5539 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
2d8c8eae7e5dee7462750ca722c27967
|
KoichiYasuoka/roberta-base-english-upos
|
KoichiYasuoka
|
roberta
| 10 | 2,169 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
|
['en']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['english', 'token-classification', 'pos', 'dependency-parsing']
| false | true | true | 859 | false |
# roberta-base-english-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-base](https://huggingface.co/roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-english-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
4749a8ec01695b83e5b8dfe48b27b571
|