repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
oskarandrsson/mt-uk-sv-finetuned
|
oskarandrsson
|
marian
| 11 | 19 |
transformers
| 0 |
translation
| true | false | false |
apache-2.0
|
['uk', 'sv']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'translation']
| true | true | true | 1,171 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-uk-sv-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-uk-sv](https://huggingface.co/Helsinki-NLP/opus-mt-uk-sv) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4210
- eval_bleu: 40.6634
- eval_runtime: 966.5303
- eval_samples_per_second: 18.744
- eval_steps_per_second: 4.687
- epoch: 6.0
- step: 40764
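The card does not include a usage snippet; a minimal inference sketch (our assumption, relying on the standard `transformers` translation pipeline for Marian checkpoints):
```python
from transformers import pipeline

# Ukrainian -> Swedish translation; the model id is this repository.
translator = pipeline("translation", model="oskarandrsson/mt-uk-sv-finetuned")
print(translator("Доброго ранку, як справи?")[0]["translation_text"])
```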
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
163e9e496198d44a5e1edf8af147c856
|
sayakpaul/glpn-nyu-finetuned-diode-221116-110652
|
sayakpaul
|
glpn
| 7 | 1 |
transformers
| 0 |
depth-estimation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'depth-estimation', 'generated_from_trainer']
| true | true | true | 2,737 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-221116-110652
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4018
- Mae: 0.3272
- Rmse: 0.4546
- Abs Rel: 0.3934
- Log Mae: 0.1380
- Log Rmse: 0.1907
- Delta1: 0.4598
- Delta2: 0.7659
- Delta3: 0.9082
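The card provides no usage snippet; a minimal sketch (our assumption, using the standard `transformers` depth-estimation pipeline for GLPN models; the image URL is just an example):
```python
import requests
from PIL import Image
from transformers import pipeline

# Any RGB image works; this is a sample COCO image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

depth_estimator = pipeline("depth-estimation", model="sayakpaul/glpn-nyu-finetuned-diode-221116-110652")
result = depth_estimator(image)
result["depth"].save("depth.png")  # PIL image of the predicted depth map
```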
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 1.3984 | 1.0 | 72 | 1.1606 | 3.2154 | 3.2710 | 4.6927 | 0.6627 | 0.7082 | 0.0 | 0.0053 | 0.0893 |
| 0.8305 | 2.0 | 144 | 0.5445 | 0.6035 | 0.8404 | 0.8013 | 0.2102 | 0.2726 | 0.2747 | 0.5358 | 0.7609 |
| 0.4601 | 3.0 | 216 | 0.4484 | 0.4041 | 0.5376 | 0.5417 | 0.1617 | 0.2188 | 0.3771 | 0.6932 | 0.8692 |
| 0.4211 | 4.0 | 288 | 0.4251 | 0.3634 | 0.4914 | 0.4800 | 0.1499 | 0.2069 | 0.4136 | 0.7270 | 0.8931 |
| 0.4162 | 5.0 | 360 | 0.4170 | 0.3537 | 0.4833 | 0.4483 | 0.1455 | 0.2005 | 0.4303 | 0.7444 | 0.8992 |
| 0.3776 | 6.0 | 432 | 0.4115 | 0.3491 | 0.4692 | 0.4558 | 0.1449 | 0.1999 | 0.4281 | 0.7471 | 0.9018 |
| 0.3729 | 7.0 | 504 | 0.4058 | 0.3337 | 0.4590 | 0.4135 | 0.1396 | 0.1935 | 0.4517 | 0.7652 | 0.9072 |
| 0.3235 | 8.0 | 576 | 0.4035 | 0.3304 | 0.4602 | 0.4043 | 0.1383 | 0.1929 | 0.4613 | 0.7679 | 0.9073 |
| 0.3382 | 9.0 | 648 | 0.3990 | 0.3254 | 0.4546 | 0.3937 | 0.1365 | 0.1900 | 0.4671 | 0.7717 | 0.9102 |
| 0.3265 | 10.0 | 720 | 0.4018 | 0.3272 | 0.4546 | 0.3934 | 0.1380 | 0.1907 | 0.4598 | 0.7659 | 0.9082 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Tokenizers 0.13.2
|
89a55d0c3aab1c17acfe84c0f8818285
|
gokuls/distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli
|
gokuls
|
distilbert
| 17 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,859 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli
This model is a fine-tuned version of [gokuls/distilbert_add_pre-training-complete](https://huggingface.co/gokuls/distilbert_add_pre-training-complete) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3579
- Accuracy: 0.6522
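For reference, a minimal sketch of scoring a QNLI-style question/sentence pair (our assumption, not part of the original card; the label names are whatever this checkpoint's config defines):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "What is the capital of France?"
sentence = "Paris is the capital and largest city of France."

inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label mapping comes from the checkpoint's config
```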
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4059 | 1.0 | 410 | 0.4016 | 0.5585 |
| 0.3907 | 2.0 | 820 | 0.3735 | 0.6094 |
| 0.3715 | 3.0 | 1230 | 0.3602 | 0.6480 |
| 0.352 | 4.0 | 1640 | 0.3579 | 0.6522 |
| 0.3314 | 5.0 | 2050 | 0.3626 | 0.6670 |
| 0.309 | 6.0 | 2460 | 0.3650 | 0.6776 |
| 0.2865 | 7.0 | 2870 | 0.3799 | 0.6776 |
| 0.2679 | 8.0 | 3280 | 0.3817 | 0.6903 |
| 0.2525 | 9.0 | 3690 | 0.3942 | 0.6822 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
622a8c8599de7e9698fb0935712226d7
|
Zhaohui/finetuning-misinfo-model-700-Zhaohui-1_misinfo
|
Zhaohui
|
distilbert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,063 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-misinfo-model-700-Zhaohui-1_misinfo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5343
- Accuracy: 0.8571
- F1: 0.8571
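No usage example is given; a minimal sketch (our assumption, using the generic text-classification pipeline; the input sentence is only an illustration and the label names depend on the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Zhaohui/finetuning-misinfo-model-700-Zhaohui-1_misinfo")
print(classifier("The new vaccine contains a microchip that tracks your location."))
```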
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
2d0305e6347a7c0796af99e8a3c4e4f6
|
vasista22/whisper-kannada-tiny
|
vasista22
|
whisper
| 12 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['kn']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event']
| true | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Kannada Tiny
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Kannada data available from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
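A minimal transcription sketch (our assumption, using the standard `transformers` ASR pipeline; `audio.wav` is a placeholder for a 16 kHz Kannada recording):
```python
from transformers import pipeline

# "audio.wav" stands in for any 16 kHz Kannada audio file.
transcriber = pipeline("automatic-speech-recognition", model="vasista22/whisper-kannada-tiny")
print(transcriber("audio.wav")["text"])
```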
## Training and evaluation data
Training Data: MILE ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, Google/Fleurs Train+Dev set.
Evaluation Data: Google/Fleurs Test set, MILE Test set, OpenSLR.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 88
- eval_batch_size: 88
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 15008 (terminated upon convergence. Initially set to 51570 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at Speech Lab, IITM.
The compute resources for this work were funded by the "Bhashini: National Language Translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
8427c3083f00cb9306407779b3a688cb
|
Helsinki-NLP/opus-mt-fse-fi
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-fse-fi
* source languages: fse
* target languages: fi
* OPUS readme: [fse-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fse-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.eval.txt)
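As with other OPUS-MT Marian checkpoints, the model can be loaded through `transformers`; a minimal sketch (our assumption, not part of the original card):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-fse-fi"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

src_text = ["Hyvää päivää."]  # placeholder; replace with source-language (fse) sentences
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```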
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fse.fi | 90.2 | 0.943 |
|
660644bdfeefc4e0c3ac5bc28bf8a9cb
|
tosin/dialogpt_mwoz_idioms
|
tosin
|
gpt2
| 9 | 3 |
transformers
| 0 |
conversational
| true | false | false |
cc-by-4.0
|
['en']
|
['multi_woz_v22 and PIE-English idioms corpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational', 'transformers']
| false | true | true | 2,300 | false |
## DialoGPT_MWOZ_Idioms
This is a fine-tuned model of DialoGPT (medium)-MultiWOZ on the PIE-English idioms corpus. It is intended to be used as an idiom-aware conversational system.
The dataset it's trained on is limited in scope, as it covers only 10 classes of idioms (metaphor, simile, euphemism, parallelism, personification, oxymoron, paradox, hyperbole, irony and literal).
The perplexities achieved on the test and validation sets are 160.9 and 160.46, respectively. More information about the original pre-trained model can be found [here](https://huggingface.co/tosin/dialogpt_mwoz).
* Generation example from an interactive environment:
|Role | Response |
|---------|------------|
|User | remind her to take it off the hook. |
|Bot | the bookie took it on the nod |
|User | A lengthy break is in order ; so make sure you keep your nose clean |
|Bot | they are under a cloud because of fears about US President's wife Hillary Clinton |
|User | Some families breed like rabbits and send us all their male heirs |
|Bot | he was out for the count |
Please find the information about preprocessing, training and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT).
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_mwoz_idioms")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_mwoz_idioms")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
    print("DialoGPT_MWOZ_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
a6bf7cbd1df03ea0c772d94bb707b830
|
jonatasgrosman/exp_w2v2t_sv-se_vp-it_s817
|
jonatasgrosman
|
wav2vec2
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv-SE']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'sv-SE']
| false | true | true | 475 | false |
# exp_w2v2t_sv-se_vp-it_s817
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
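A minimal transcription sketch (our assumption) using that same [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool:
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_sv-se_vp-it_s817")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths to 16 kHz recordings
transcriptions = model.transcribe(audio_paths)
```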
|
062c36861549d27668a59ac9ff7ca776
|
henryjiang/distilbert-base-uncased-finetuned-emotion
|
henryjiang
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8412 | 1.0 | 250 | 0.3215 | 0.904 | 0.9010 |
| 0.2535 | 2.0 | 500 | 0.2223 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
b22436f7da3fbd5e41a5f218e45a449a
|
josetapia/hygpt2-cml
|
josetapia
|
gpt2
| 29 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 978 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hygpt2-cml
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
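No usage example is provided; a minimal generation sketch (our assumption, using the standard text-generation pipeline; the prompt is only a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="josetapia/hygpt2-cml")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```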
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.11.6
|
596740ff975b17d062dcf9f4cb64fa6c
|
Helsinki-NLP/opus-mt-sem-sem
|
Helsinki-NLP
|
marian
| 11 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['mt', 'ar', 'he', 'ti', 'am', 'sem']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,578 | false |
### sem-sem
* source group: Semitic languages
* target group: Semitic languages
* OPUS readme: [sem-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md)
* model: transformer
* source language(s): apc ara arq arz heb mlt
* target language(s): apc ara arq arz heb mlt
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.eval.txt)
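Because the target side is multilingual, the `>>id<<` token noted above must be prepended to the source sentence; a minimal sketch (our assumption), translating Arabic to Hebrew:
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-sem-sem"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

src_text = [">>heb<< مرحبا، كيف حالك؟"]  # target language selected via the >>id<< token
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```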
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.2 | 0.200 |
| Tatoeba-test.ara-heb.ara.heb | 34.0 | 0.542 |
| Tatoeba-test.ara-mlt.ara.mlt | 16.6 | 0.513 |
| Tatoeba-test.heb-ara.heb.ara | 18.8 | 0.477 |
| Tatoeba-test.mlt-ara.mlt.ara | 20.7 | 0.388 |
| Tatoeba-test.multi.multi | 27.1 | 0.507 |
### System Info:
- hf_name: sem-sem
- source_languages: sem
- target_languages: sem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']
- src_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt
- src_alpha3: sem
- tgt_alpha3: sem
- short_pair: sem-sem
- chrF2_score: 0.507
- bleu: 27.1
- brevity_penalty: 0.972
- ref_len: 13472.0
- src_name: Semitic languages
- tgt_name: Semitic languages
- train_date: 2020-07-27
- src_alpha2: sem
- tgt_alpha2: sem
- prefer_old: False
- long_pair: sem-sem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
c9a37e0ab4fa857c319bbeed3508aee3
|
annahaz/roberta-large-mnli-misogyny-sexism-4tweets-3e-05-0.05
|
annahaz
|
roberta
| 11 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,899 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-mnli-misogyny-sexism-4tweets-3e-05-0.05
This model is a fine-tuned version of [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9069
- Accuracy: 0.6914
- F1: 0.7061
- Precision: 0.6293
- Recall: 0.8043
- Mae: 0.3086
- Tn: 320
- Fp: 218
- Fn: 90
- Tp: 370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|:---:|:---:|:---:|:---:|
| 0.5248 | 1.0 | 1346 | 0.7245 | 0.6513 | 0.6234 | 0.6207 | 0.6261 | 0.3487 | 362 | 176 | 172 | 288 |
| 0.4553 | 2.0 | 2692 | 0.6894 | 0.6693 | 0.7043 | 0.5991 | 0.8543 | 0.3307 | 275 | 263 | 67 | 393 |
| 0.3753 | 3.0 | 4038 | 0.6966 | 0.7234 | 0.7326 | 0.6608 | 0.8217 | 0.2766 | 344 | 194 | 82 | 378 |
| 0.2986 | 4.0 | 5384 | 0.9069 | 0.6914 | 0.7061 | 0.6293 | 0.8043 | 0.3086 | 320 | 218 | 90 | 370 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
4c2fd564125fbd37a8e6448c11ec6372
|
google/t5-efficient-small-nl36
|
google
|
t5
| 12 | 32 |
transformers
| 3 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,257 | false |
# T5-Efficient-SMALL-NL36 (Deep-Narrow version)
T5-Efficient-SMALL-NL36 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Model architecture details
This model checkpoint - **t5-efficient-small-nl36** - is of model type **Small** with the following variations:
- **nl** is **36**
It has **280.87** million parameters and thus requires *ca.* **1123.47 MB** of memory in full precision (*fp32*)
or **561.74 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
where the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint specifies no *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
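For orientation, a minimal sketch (our addition, not part of the original card) of loading the checkpoint with the standard T5 classes before fine-tuning:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# The tokenizer is the standard T5 SentencePiece vocabulary shipped with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-nl36")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-nl36")
print(f"{model.num_parameters() / 1e6:.2f}M parameters")  # roughly the 280.87M stated above
```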
You can follow one of the following examples to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
3fc50de70a2ca161c3393e012cadca62
|
tkubotake/xlm-roberta-base-finetuned-panx-en
|
tkubotake
|
xlm-roberta
| 9 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,375 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/tkubotake/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5430
- F1: 0.7580
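A minimal sketch for running the NER model (our assumption, using the token-classification pipeline; entity types follow the PAN-X/WikiANN tag set):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="tkubotake/xlm-roberta-base-finetuned-panx-en",
               aggregation_strategy="simple")
print(ner("Jeff Dean works at Google in Mountain View."))
```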
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1318 | 1.0 | 50 | 0.4145 | 0.7557 |
| 0.0589 | 2.0 | 100 | 0.5016 | 0.7524 |
| 0.0314 | 3.0 | 150 | 0.5430 | 0.7580 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
5255d1d2d19580e965c3d35d1e080141
|
geevegeorge/customdbmodelv7
|
geevegeorge
| null | 8 | 3 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['geevegeorge/customdbv7']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,210 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# customdbmodelv7
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `geevegeorge/customdbv7` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
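Until the official snippet above is filled in, a minimal sketch (assuming the repository hosts a standard unconditional `DiffusionPipeline`, as produced by the Diffusers training example):
```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("geevegeorge/customdbmodelv7")
image = pipeline().images[0]  # unconditional sample
image.save("sample.png")
```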
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 8
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/customdbmodelv7/tensorboard?#scalars)
|
b458654ef066d0441e09770b9ed641bf
|
ishaankul67/2008_Sichuan_earthquake-clustered
|
ishaankul67
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,880 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/2008_Sichuan_earthquake-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4882
- Train End Logits Accuracy: 0.8924
- Train Start Logits Accuracy: 0.7882
- Validation Loss: 0.2788
- Validation End Logits Accuracy: 0.8947
- Validation Start Logits Accuracy: 0.8947
- Epoch: 0
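Since this is a TensorFlow checkpoint, a minimal extractive-QA sketch (our assumption, not part of the original card) loads the TF weights explicitly:
```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

model_id = "ishaankul67/2008_Sichuan_earthquake-clustered"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="When did the earthquake strike?",
         context="The 2008 Sichuan earthquake struck on May 12, 2008, in Sichuan province, China."))
```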
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4882 | 0.8924 | 0.7882 | 0.2788 | 0.8947 | 0.8947 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
a6174c4800ee0a8bcbc021f3fa50cb71
|
hrdipto/wav2vec2-base-timit-demo-colab
|
hrdipto
|
wav2vec2
| 12 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,641 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4241
- Wer: 0.3381
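No inference snippet is included; a minimal CTC decoding sketch (our assumption; `sample.wav` is a placeholder for a 16 kHz English recording):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "hrdipto/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```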
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7749 | 4.0 | 500 | 2.0639 | 1.0018 |
| 0.9252 | 8.0 | 1000 | 0.4853 | 0.4821 |
| 0.3076 | 12.0 | 1500 | 0.4507 | 0.4044 |
| 0.1732 | 16.0 | 2000 | 0.4315 | 0.3688 |
| 0.1269 | 20.0 | 2500 | 0.4481 | 0.3559 |
| 0.1087 | 24.0 | 3000 | 0.4354 | 0.3464 |
| 0.0832 | 28.0 | 3500 | 0.4241 | 0.3381 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
b6f85e1e1b2faad70c5a106927e1d6b9
|
microsoft/conditional-detr-resnet-50
|
microsoft
|
conditional_detr
| 5 | 1,007 |
transformers
| 1 |
object-detection
| true | false | false |
apache-2.0
| null |
['coco']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['object-detection', 'vision']
| false | true | true | 4,237 | false |
# Conditional DETR model with ResNet-50 backbone
Conditional DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Meng et al. and first released in [this repository](https://github.com/Atten4Vis/ConditionalDETR).
## Model description
The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101.

## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=microsoft/conditional-detr) to look for all available Conditional DETR models.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.7
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
box = [round(i, 2) for i in box.tolist()]
print(
f"Detected {model.config.id2label[label.item()]} with confidence "
f"{round(score.item(), 3)} at location {box}"
)
```
This should output:
```
Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45]
Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0]
Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95]
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The Conditional DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### BibTeX entry and citation info
```bibtex
@inproceedings{MengCFZLYS021,
author = {Depu Meng and
Xiaokang Chen and
Zejia Fan and
Gang Zeng and
Houqiang Li and
Yuhui Yuan and
Lei Sun and
Jingdong Wang},
title = {Conditional {DETR} for Fast Training Convergence},
booktitle = {2021 {IEEE/CVF} International Conference on Computer Vision, {ICCV}
2021, Montreal, QC, Canada, October 10-17, 2021},
}
```
|
45f3301dbac25d6bb39c97dd7c33cd14
|
wyu1/FiD-NQ
|
wyu1
|
t5
| 6 | 0 |
transformers
| 1 | null | true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 747 | false |
# FiD model trained on NQ
-- This is the model checkpoint of FiD [2], based on T5-large (770M parameters) and trained on the Natural Questions (NQ) dataset [1].
-- Hyperparameters: 8 x 40GB A100 GPUs; batch size 8; AdamW; LR 3e-5; 50000 steps
References:
[1] Natural Questions: A Benchmark for Question Answering Research. TACL 2019.
[2] Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. EACL 2021.
## Model performance
We evaluate it on the NQ dataset; the EM score is 51.3 (0.1 lower than the original performance reported in the paper).
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
b3b8e6e356edf3d566328022e047557b
|
OpenMatch/t5-ance
|
OpenMatch
|
t5
| 7 | 2 |
transformers
| 0 |
feature-extraction
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 597 | false |
# T5-ANCE
T5-ANCE generally follows the training procedure described in [this page](https://openmatch.readthedocs.io/en/latest/dr-msmarco-passage.html), but uses a much larger batch size.
Dataset used for training:
- MS MARCO Passage
Evaluation result:
|Dataset|Metric|Result|
|---|---|---|
|MS MARCO Passage (dev) | MRR@10 | 0.3570|
Important hyper-parameters:
|Name|Value|
|---|---|
|Global batch size|256|
|Learning rate|5e-6|
|Maximum length of query|32|
|Maximum length of document|128|
|Template for query|`<text>`|
|Template for document|`Title: <title> Text: <text>`|
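For illustration only, a sketch of building inputs with the templates above and encoding them with the T5 encoder (our assumption; the reference implementation is the OpenMatch toolkit, and the mean pooling used here is not confirmed by this card):
```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/t5-ance")
encoder = T5EncoderModel.from_pretrained("OpenMatch/t5-ance")

query = "what is dense retrieval"  # query template: <text>
doc = "Title: Dense retrieval Text: Dense retrieval maps queries and documents into a shared vector space."  # document template

def embed(text, max_length):
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state
    return hidden.mean(dim=1)  # assumed pooling; see note above

print(torch.cosine_similarity(embed(query, 32), embed(doc, 128)).item())
```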
### Paper
\-
|
38561e01fe2690d63f701d179786a2cd
|
shreeshaaithal/DialoGPT-small-Michael-Scott
|
shreeshaaithal
|
gpt2
| 9 | 4 |
transformers
| 0 |
conversational
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational']
| false | true | true | 1,643 | false |
# DialoGPT Trained on WhatsApp chats
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on WhatsApp chats. You can also train this model on [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
Feel free to ask questions on the [Discord server](https://discord.gg/Gqhje8Z7DX).
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

tokenizer = AutoTokenizer.from_pretrained("harrydonni/DialoGPT-small-Michael-Scott")
model = AutoModelWithLMHead.from_pretrained("harrydonni/DialoGPT-small-Michael-Scott")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print the last output tokens from the bot
    print("Michael: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
This was done by Shreesha. Thank you!
|
1fc68a1eb8946e0738b753a1206c67da
|
it5/mt5-small-repubblica-to-ilgiornale
|
it5
|
mt5
| 11 | 4 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['it']
|
['gsarti/change_it']
|
{'emissions': '17g', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['italian', 'sequence-to-sequence', 'newspaper', 'ilgiornale', 'repubblica', 'style-transfer']
| true | true | true | 3,264 | false |
# mT5 Small for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
r2g = pipeline("text2text-generation", model='it5/mt5-small-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-repubblica-to-ilgiornale")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
8ebe2663939cc869a7c7a1838b2cff9e
|
Lvxue/distilled-mt5-small-010099
|
Lvxue
|
mt5
| 14 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en', 'ro']
|
['wmt16']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,037 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9787
- Bleu: 5.9209
- Gen Len: 50.1856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
4d6ead1af09698b62bee7ec0279bbbdd
|
aXhyra/presentation_hate_1234567
|
aXhyra
|
distilbert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,403 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_hate_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8438
- F1: 0.7680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6027 | 1.0 | 282 | 0.5186 | 0.7209 |
| 0.3537 | 2.0 | 564 | 0.4989 | 0.7619 |
| 0.0969 | 3.0 | 846 | 0.6405 | 0.7697 |
| 0.0514 | 4.0 | 1128 | 0.8438 | 0.7680 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
6474939a362aaf6d43fd44d86802e47b
|
sd-concepts-library/cortana
|
sd-concepts-library
| null | 12 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,306 | false |
### cortana on Stable Diffusion
This is the `<cortana>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
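Alternatively, a minimal sketch (our assumption) loading the concept with Diffusers' textual-inversion support; the base checkpoint below is a placeholder and any compatible Stable Diffusion 1.x model should work:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cortana")  # adds the <cortana> token
image = pipe("a portrait of <cortana>, digital art").images[0]
image.save("cortana.png")
```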
Here is the new concept you will be able to use as an `object`:







|
cda8285610d93c8d879c929285372443
|
gchhablani/wav2vec2-large-xlsr-eo
|
gchhablani
|
wav2vec2
| 10 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['eo']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,414 | false |
# Wav2Vec2-Large-XLSR-53-Esperanto
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)
test_dataset = load_dataset("common_voice", "eo", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace('—',' ').replace('–',' ')
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference on the preprocessed dataset
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"],chunk_size=5000)))
```
**Test Result**: 10.13 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://github.com/gchhablani/wav2vec2-week/blob/main/fine-tune-xlsr-wav2vec2-on-esperanto-asr-with-transformers-final.ipynb).
|
e8503a345651d80c47d7e06114314d20
|
Helsinki-NLP/opus-mt-de-hil
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-de-hil
* source languages: de
* target languages: hil
* OPUS readme: [de-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-hil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.hil | 33.9 | 0.563 |
|
5073fce856762ba8521956a93424dd0b
|
espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best
|
espnet
| null | 24 | 2 |
espnet
| 0 | null | false | false | false |
cc-by-4.0
|
['noinfo']
|
['mini_librispeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'diarization']
| false | true | true | 5,382 | false |
## ESPnet2 DIAR model
### `espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best`
This model was trained by YushiUeda using mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 650472b45a67612eaac09c7fbd61dc25f8ff2405
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Tue Jan 4 16:43:34 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `0b2a6786b6f627f47defaee22911b3c2dc04af2a`
- Commit date: `Thu Dec 23 12:22:49 2021 -0500`
## diar_train_diar_raw
### DER
dev_clean_2_ns2_beta2_500
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.0|32.28|
|result_th0.3_med1_collar0.0|32.64|
|result_th0.4_med11_collar0.0|30.43|
|result_th0.4_med1_collar0.0|31.15|
|result_th0.5_med11_collar0.0|29.45|
|result_th0.5_med1_collar0.0|30.53|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.6_med1_collar0.0|30.95|
|result_th0.7_med11_collar0.0|30.92|
|result_th0.7_med1_collar0.0|32.69|
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 33757
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
attractor: null
attractor_conf: {}
required:
- output_dir
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
1b8fcad247fa5591f9e0586e668e8d44
|
k4black/edos-2023-baseline-bert-base-uncased-label_vector
|
k4black
|
bert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,706 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-bert-base-uncased-label_vector
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1312
- F1: 0.4311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1453 | 0.59 | 100 | 1.9401 | 0.1077 |
| 1.8818 | 1.18 | 200 | 1.7312 | 0.1350 |
| 1.7149 | 1.78 | 300 | 1.5556 | 0.2047 |
| 1.5769 | 2.37 | 400 | 1.4030 | 0.2815 |
| 1.4909 | 2.96 | 500 | 1.3020 | 0.3217 |
| 1.3472 | 3.55 | 600 | 1.2238 | 0.3872 |
| 1.2856 | 4.14 | 700 | 1.1584 | 0.4162 |
| 1.2455 | 4.73 | 800 | 1.1312 | 0.4311 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
28620131ac8aa21dac8ca7369237143d
|
Evgeneus/distilbert-base-uncased-finetuned-ner
|
Evgeneus
|
distilbert
| 15 | 4 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,372 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0845
- Precision: 0.8754
- Recall: 0.9058
- F1: 0.8904
- Accuracy: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2529 | 1.0 | 878 | 0.0845 | 0.8754 | 0.9058 | 0.8904 | 0.9763 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
a43473c2c3ba5110311d451e6536cf46
|
yhavinga/ul2-small-dutch
|
yhavinga
|
t5
| 18 | 37 |
transformers
| 0 |
text2text-generation
| true | false | true |
apache-2.0
|
['nl']
|
['yhavinga/mc4_nl_cleaned', 'yhavinga/nedd_wiki_news']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['dutch', 't5', 't5x', 'ul2', 'seq2seq']
| false | true | true | 10,354 | false |
# ul2-small-dutch for Dutch
Pretrained T5 model on Dutch using a UL2 (Mixture-of-Denoisers) objective.
The T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on
a specific downstream task to be useful in practice.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
`ul2-small-dutch` T5 is a transformers model pretrained on a very large corpus of
Dutch data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way
(which is why it can use lots of publicly available data) with an automatic process to generate
inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off during pre-training. Dropout should be re-enabled during fine-tuning
- Pre-trained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training
paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where
the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers
that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of
three denoising tasks:
1. R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective;
2. X-denoising (or extreme span corruption); and
3. S-denoising (or sequential PrefixLM).
During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training
denoising task. During the pre-training, a paradigm token is inserted to the input
(`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand.
Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream
fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task,
like text classification, unlike Google's original T5 model.
**Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision
so fine-tune them with full fp32 precision. Fine-tuning with Flax in bf16 - `model.to_bf16()` - is possible
if you set the mask correctly to exclude layernorm and embedding layers. Also note that the T5x pre-training
and fine-tuning configs set `z_loss` to 1e-4, which is used to keep the loss scale from underflowing.
You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, most likely you can get better results if you insert a prefix token
of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts.
For general language understanding fine-tuning tasks, you could use the `[NLU]` token.
For GPT-style causal language generation, you could use the `[S2S]` token.
The token `[NLG]` of the X-denoising pretrain task is somewhat of a mix between language understanding and causal language
generation, so the `[NLG]` token could potentially be used for language generation fine-tuning as well.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-small-dutch", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-small-dutch")
```
and in Flax:
```python
from transformers import T5Tokenizer, FlaxT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-small-dutch", use_fast=False)
model = FlaxT5ForConditionalGeneration.from_pretrained("yhavinga/ul2-small-dutch")
```
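The following is a minimal sketch, not part of the original card, of how one of the mode-switching paradigm tokens could be prepended to the input text before fine-tuning or generation; the Dutch example sentence is made up for illustration:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-small-dutch", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-small-dutch")

# "[NLU]" is the paradigm token associated with R-denoising / general language understanding.
text = "[NLU] Hier komt de invoertekst voor de downstream-taak."
inputs = tokenizer(text, return_tensors="pt")

# The pretrained-only model will not produce useful output before fine-tuning;
# this only illustrates where the paradigm token goes.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```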
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
The `ul2-small-dutch` T5 model was pre-trained simultaneously on a combination of several datasets,
including the full version of the "mc4_nl_cleaned" dataset, which is a cleaned version of Common Crawl's web
crawl corpus, Dutch books, the Dutch subset of Wikipedia (2022-03-20), and a subset of "mc4_nl_cleaned"
containing only texts from Dutch and Belgian newspapers. This last dataset is oversampled to bias the model
towards descriptions of events in the Netherlands and Belgium.
## Training procedure
### Preprocessing
The ul2-small-dutch T5 model uses a SentencePiece unigram tokenizer with a vocabulary of 32,000 tokens.
The tokenizer includes the special tokens `<pad>`, `</s>`, `<unk>`, known from the original T5 paper,
`[NLU]`, `[NLG]` and `[S2S]` for the MoD pre-training, and `<n>` for newline.
During pre-training with the UL2 objective, input and output sequences consist of 512 consecutive tokens.
The tokenizer does not lowercase texts and is therefore case-sensitive; it distinguishes
between `dutch` and `Dutch`.
Additionally, 100+28 extra tokens were added for pre-training tasks, resulting in a total of 32,128 tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/),
for 957300 steps with a batch size of 128
(in total 62 B tokens).
The optimizer used was AdaFactor with learning rate warmup for 10K steps with a constant learning rate of 1e-2,
and then an inverse square root decay (exponential decay) of the learning rate after.
The model was trained with Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) with help
from [Stephenn Fernandes](https://huggingface.co/StephennFernandes) to get started writing task definitions that wrap
HF datasets.
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and
slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2 by the authors
of the Finnish ul2 models. Used UL2 objective code is available in the repository
[Finnish-NLP/ul2-base-nl36-finnish](https://huggingface.co/Finnish-NLP/ul2-base-nl36-finnish) in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the one in the UL2 paper,
except for the denoiser mixing rates: 20% was used for S-denoising (as suggested in chapter 4.5 of the paper)
and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each).
### Model list
Models in this series:
| | ul2-base-dutch | ul2-base-nl36-dutch | ul2-large-dutch | ul2-small-dutch |
|:---------------------|:---------------------|:----------------------|:---------------------|:---------------------|
| model_type | t5 | t5 | t5 | t5 |
| _pipeline_tag | text2text-generation | text2text-generation | text2text-generation | text2text-generation |
| d_model | 768 | 768 | 1024 | 512 |
| d_ff | 2048 | 3072 | 2816 | 1024 |
| num_heads | 12 | 12 | 16 | 6 |
| d_kv | 64 | 64 | 64 | 64 |
| num_layers | 12 | 36 | 24 | 8 |
| num_decoder_layers | 12 | 36 | 24 | 8 |
| feed_forward_proj | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| dense_act_fn | gelu_new | gelu_new | gelu_new | gelu_new |
| vocab_size | 32128 | 32128 | 32128 | 32128 |
| tie_word_embeddings | 0 | 0 | 0 | 0 |
| torch_dtype | float32 | float32 | float32 | float32 |
| _gin_batch_size | 128 | 64 | 64 | 128 |
| _gin_z_loss | 0.0001 | 0.0001 | 0.0001 | 0.0001 |
| _gin_t5_config_dtype | 'bfloat16' | 'bfloat16' | 'bfloat16' | 'bfloat16' |
## Evaluation results
See the evaluation section in the interactive [Pre-training Dutch T5 Models](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models) blog.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective and associated task definitions.
Thanks to [Stephenn Fernandes](https://huggingface.co/StephennFernandes) for helping me get started with the t5x framework.
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
c200abb83db6bf59e12f2439ee1de499
|
tomekkorbak/lucid_varahamihira
|
tomekkorbak
|
gpt2
| 23 | 0 |
transformers
| 0 | null | true | false | false |
mit
|
['en']
|
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 7,561 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lucid_varahamihira
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'lucid_varahamihira',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2nk79xf2
|
bfbd1c0dc039022965523d55b3167087
|
minhtoan/t5-finetune-bbc-news
|
minhtoan
|
t5
| 8 | 1 |
transformers
| 0 |
summarization
| true | false | false |
mit
|
['en']
|
['x_sum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization']
| false | true | true | 3,095 | false |
# Text Summarization of News Articles
A state-of-the-art lightweight pretrained Transformer-based encoder-decoder model for text summarization.
The model was trained on the BBC News data of the Extreme Summarization (XSum) dataset with an input length of 512 and an output length of 150.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("minhtoan/t5-finetune-bbc-news")
model = AutoModelForSeq2SeqLM.from_pretrained("minhtoan/t5-finetune-bbc-news")
model.cuda()
src = "summarize: The full cost of damage in Newton Stewart, one of the areas worst affected, is still being assessed.Repair work is ongoing in Hawick and many roads in Peeblesshire remain badly affected by standing water.Trains on the west coast mainline face disruption due to damage at the Lamington Viaduct.Many businesses and householders were affected by flooding in Newton Stewart after the River Cree overflowed into the town.First Minister Nicola Sturgeon visited the area to inspect the damage.The waters breached a retaining wall, flooding many commercial properties on Victoria Street - the main shopping thoroughfare.Jeanette Tate, who owns the Cinnamon Cafe which was badly affected, said she could not fault the multi-agency response once the flood hit.However, she said more preventative work could have been carried out to ensure the retaining wall did not fail.'It is difficult but I do think there is so much publicity for Dumfries and the Nith - and I totally appreciate that - but it is almost like we're neglected or forgotten,' she said.'That may not be true but it is perhaps my perspective over the last few days.'Why were you not ready to help us a bit more when the warning and the alarm alerts had gone out?'Meanwhile, a flood alert remains in place across the Borders because of the constant rain.Peebles was badly hit by problems, sparking calls to introduce more defences in the area.Scottish Borders Council has put a list on its website of the roads worst affected and drivers have been urged not to ignore closure signs.The Labour Party's deputy Scottish leader Alex Rowley was in Hawick on Monday to see the situation first hand.He said it was important to get the flood protection plan right but backed calls to speed up the process.'I was quite taken aback by the amount of damage that has been done,' he said.'Obviously it is heart-breaking for people who have been forced out of their homes and the impact on businesses.'He said it was important that 'immediate steps' were taken to protect the areas most vulnerable and a clear timetable put in place for flood prevention plans.Have you been affected by flooding in Dumfries and Galloway or the Borders? Tell us about your experience of the situation and how it was handled. Email us on selkirk.news@bbc.co.uk or dumfries@bbc.co.uk."
tokenized_text = tokenizer.encode(src, return_tensors="pt").cuda()
model.eval()
summary_ids = model.generate(tokenized_text, max_length=150)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
output
```
## Author
Phan Minh Toan
|
301359b803bdaedd15196f664c763321
|
Sebabrata/dof-Rai2-1
|
Sebabrata
|
vision-encoder-decoder
| 14 | 0 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 971 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-Rai2-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
cb9770781e6d7dfffbbe7b7541dddd18
|
RobertoMCA97/xlm-roberta-base-finetuned-panx-de-fr
|
RobertoMCA97
|
xlm-roberta
| 9 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
e939682469b1b18fc08bff12538bbcef
|
sd-concepts-library/tron-style
|
sd-concepts-library
| null | 8 | 0 | null | 6 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 924 | false |
### Tron-Style on Stable Diffusion
This is the `<tron-style>"` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:



|
37fb82589f643948f873bf6324e6f356
|
Gladiator/microsoft-deberta-v3-large_ner_wikiann
|
Gladiator
|
deberta-v2
| 13 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['wikiann']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,744 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-deberta-v3-large_ner_wikiann
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
- Precision: 0.8557
- Recall: 0.8738
- F1: 0.8647
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3005 | 1.0 | 1250 | 0.2462 | 0.8205 | 0.8400 | 0.8301 | 0.9294 |
| 0.1931 | 2.0 | 2500 | 0.2247 | 0.8448 | 0.8630 | 0.8538 | 0.9386 |
| 0.1203 | 3.0 | 3750 | 0.2341 | 0.8468 | 0.8693 | 0.8579 | 0.9403 |
| 0.0635 | 4.0 | 5000 | 0.2948 | 0.8596 | 0.8745 | 0.8670 | 0.9411 |
| 0.0451 | 5.0 | 6250 | 0.3108 | 0.8557 | 0.8738 | 0.8647 | 0.9406 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
417487b761663898bf6ab611c3e62356
|
nandysoham16/18-clustered_aug
|
nandysoham16
|
distilbert
| 8 | 0 |
keras
| 0 | null | false | true | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,873 | false |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
['Genome', 'Lighting', 'Hydrogen', 'Gene', 'Copper', 'Grape', 'Infrared', 'Uranium', 'Sexual_orientation', 'Asphalt', 'Incandescent_light_bulb', 'Cotton', 'Alloy', 'Annelid', 'Glass', 'Green', 'Zinc', 'Flowering_plant', 'Light-emitting_diode', 'Red']
- **Developed by:** nandysoham
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
8b3961f22db666ff227311d16eaaa5f3
|
google/bigbird-base-trivia-itc
|
google
|
big_bird
| 8 | 6,949 |
transformers
| 2 |
question-answering
| true | false | true |
apache-2.0
|
['en']
|
['trivia_qa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,953 | false |
# BigBird base trivia-itc
This model is a fine-tuned checkpoint of `bigbird-roberta-base`, fine-tuned on `trivia_qa` with a `BigBirdForQuestionAnsweringHead` on top.
Check out [this](https://colab.research.google.com/drive/1DVOm1VHjW0eKCayFq1N2GpY6GR9M4tJP?usp=sharing) to see how well `google/bigbird-base-trivia-itc` performs on question answering.
## How to use
Here is how to use this model for extractive question answering in PyTorch:
```python
from transformers import BigBirdForQuestionAnswering, BigBirdTokenizer

# the tokenizer is needed below to encode the question/context pair
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-base-trivia-itc")

# by default it's in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc")
# you can change `attention_type` to full attention like this:
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-base-trivia-itc", block_size=16, num_random_blocks=2)
question = "Replace me by any text you'd like."
context = "Put some context for answering"
encoded_input = tokenizer(question, context, return_tensors='pt')
output = model(**encoded_input)
```
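As a small follow-up sketch (not in the original card), the predicted answer span can be decoded from the start/end logits roughly as follows:
```python
import torch

# Pick the most likely start/end token positions and decode that span.
start_idx = int(torch.argmax(output.start_logits, dim=-1)[0])
end_idx = int(torch.argmax(output.end_logits, dim=-1)[0])
answer_ids = encoded_input["input_ids"][0][start_idx : end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```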
# Fine-tuning config & hyper-parameters
- No. of global token = 128
- Window length = 192
- No. of random token = 192
- Max. sequence length = 4096
- No. of heads = 12
- No. of hidden layers = 12
- Hidden layer size = 768
- Batch size = 32
- Loss = cross-entropy noisy spans
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
348b2432c682c6eed33629f260d4f1b4
|
jayantapaul888/twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes
|
jayantapaul888
|
bert
| 12 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,909 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3635
- Accuracy: 0.8756
- Precision: 0.8761
- Recall: 0.8756
- F1: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6142 | 1.0 | 1762 | 0.5396 | 0.8022 | 0.8010 | 0.8022 | 0.8014 |
| 0.4911 | 2.0 | 3524 | 0.4588 | 0.8322 | 0.8332 | 0.8322 | 0.8325 |
| 0.4511 | 3.0 | 5286 | 0.4072 | 0.8562 | 0.8564 | 0.8562 | 0.8559 |
| 0.412 | 4.0 | 7048 | 0.3825 | 0.8673 | 0.8680 | 0.8673 | 0.8672 |
| 0.3886 | 5.0 | 8810 | 0.3677 | 0.8745 | 0.8753 | 0.8745 | 0.8745 |
| 0.3914 | 6.0 | 10572 | 0.3635 | 0.8756 | 0.8761 | 0.8756 | 0.8755 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
45a5965bbc764bea8c2eba18adc40278
|
DunnBC22/vit-base-patch16-224-in21k_GI_diagnosis
|
DunnBC22
|
vit
| 14 | 1 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,291 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k_GI_diagnosis
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2538
- Accuracy: 0.9375
- Weighted f1: 0.9365
- Micro f1: 0.9375
- Macro f1: 0.9365
- Weighted recall: 0.9375
- Micro recall: 0.9375
- Macro recall: 0.9375
- Weighted precision: 0.9455
- Micro precision: 0.9375
- Macro precision: 0.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 1.3805 | 1.0 | 200 | 0.5006 | 0.8638 | 0.8531 | 0.8638 | 0.8531 | 0.8638 | 0.8638 | 0.8638 | 0.9111 | 0.8638 | 0.9111 |
| 1.3805 | 2.0 | 400 | 0.2538 | 0.9375 | 0.9365 | 0.9375 | 0.9365 | 0.9375 | 0.9375 | 0.9375 | 0.9455 | 0.9375 | 0.9455 |
| 0.0628 | 3.0 | 600 | 0.5797 | 0.8812 | 0.8740 | 0.8812 | 0.8740 | 0.8812 | 0.8812 | 0.8813 | 0.9157 | 0.8812 | 0.9157 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1
|
af305c4e1f0d8859fa2dd22c589fec01
|
visheratin/t5-efficient-mini-grammar-correction
|
visheratin
|
t5
| 13 | 111 |
transformers
| 4 |
text2text-generation
| true | false | false |
mit
|
['en']
|
['c4_200m']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['grammar-correction']
| false | true | true | 806 | false |
# T5-Efficient-MINI for grammar correction
This is a [T5-Efficient-MINI](https://huggingface.co/google/t5-efficient-mini) model that was trained on a subset of the [C4_200M](https://ai.googleblog.com/2021/08/the-c4200m-synthetic-dataset-for.html) dataset to solve the grammar correction task in English. To introduce additional errors, random typos were added to the input sentences using the [nlpaug](https://github.com/makcedward/nlpaug) library. Since the model was trained on only one task, no task prefixes are needed.
The model was trained as a part of the project during the [Full Stack Deep Learning](https://fullstackdeeplearning.com/course/2022/) course. ONNX version of the model is deployed on the [site](https://edge-ai.vercel.app/models/grammar-check) and can be run directly in the browser.
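An illustrative inference sketch (not part of the original card; the input sentence is made up):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("visheratin/t5-efficient-mini-grammar-correction")
model = AutoModelForSeq2SeqLM.from_pretrained("visheratin/t5-efficient-mini-grammar-correction")

# No task prefix is needed, since the model was trained on a single task.
text = "he go to school every days"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```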
|
e86cd80c3da34f63f05c9c13c44653fe
|
Reggie/distilbert-joke_detector
|
Reggie
|
distilbert
| 7 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['distilbert']
| false | true | true | 1,835 | false |
### What is this?
This model has been developed to detect "narrative-style" jokes, stories and anecdotes (i.e. they are narrated as a story) spoken during speeches or conversations etc. It works best when jokes/anecdotes are at least 40 words long. It is based on [lvwerra's distilbert](https://huggingface.co/lvwerra/distilbert-imdb).
The training dataset was a private collection of around 2000 jokes. This model has not been trained or tested on one-liners, puns or Reddit-style language-manipulation jokes such as knock-knock, Q&A jokes etc.
See the example in the inference widget or the How to use section for what constitutes a narrative-style joke.
For a more accurate model (2.4% more) that is slower at inference, see the [Roberta model](https://huggingface.co/Reggie/muppet-roberta-base-joke_detector). For a still more accurate model (2.9% more) that is much slower at inference, see the [Deberta-v3 model](https://huggingface.co/Reggie/DeBERTa-v3-base-joke_detector).
### Install these first
You'll need to pip install transformers & maybe sentencepiece
### How to use
```python
from transformers import pipeline
import torch
device = 0 if torch.cuda.is_available() else -1
model_name = 'Reggie/distilbert-joke_detector'
max_seq_len = 510
pipe = pipeline(model=model_name, device=device, truncation=True, max_length=max_seq_len)
is_it_a_joke = """A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, "I hope your planes are safe. Do they have a good track record for safety?" The airline agent replies, "Sir, I can guarantee you, we've never had a plane that has crashed more than once." """
result = pipe(is_it_a_joke) # [{'label': 'POSITIVE', 'score': 0.7313136458396912}]
print('This is a joke') if result[0]['label'] == 'POSITIVE' else print('This is not a joke')
```
|
c68a91bed9636a5c9276612c71245926
|
Hate-speech-CNERG/deoffxlmr-mono-kannada
|
Hate-speech-CNERG
|
xlm-roberta
| 7 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['kn']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,473 | false |
This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Kannada(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss.
This model is the best of multiple trained for **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm based ensembled test predictions got the second-highest weighted F1 score at the leaderboard (Weighted F1 score on hold out test set: This model - 0.73, Ensemble - 0.74)
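A minimal usage sketch (not part of the original card), assuming the standard `text-classification` pipeline; the code-mixed Kannada input is only a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/deoffxlmr-mono-kannada",
)

# Placeholder code-mixed Kannada comment; replace with real text.
print(classifier("ee video thumba chennagide"))
```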
### For more details about our paper
Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)".
***Please cite our paper in any published work that uses any of these resources.***
~~~
@inproceedings{saha-etal-2021-hate,
title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection",
author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38",
pages = "270--276",
abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.",
}
~~~
|
c794c14e31314ac66ce91c85d245917a
|
sd-concepts-library/bored-ape-textual-inversion
|
sd-concepts-library
| null | 9 | 0 | null | 3 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,110 | false |
### bored_ape_textual_inversion on Stable Diffusion
This is the `<bored_ape>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
f2bf6565452663cd2cff63f8d42669e1
|
caidas/swin2SR-realworld-sr-x4-64-bsrgan-psnr
|
caidas
|
swin2sr
| 5 | 350 |
transformers
| 1 |
image-to-image
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-to-image']
| false | true | true | 576 | false |
# Swin2SR model (image super-resolution)
Swin2SR model that upscales images x4. It was introduced in the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345)
by Conde et al. and first released in [this repository](https://github.com/mv-lab/swin2sr).
# Intended use cases
This model is intended for real-world image super resolution.
# Usage
Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr#transformers.Swin2SRForImageSuperResolution.forward.example).
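A rough usage sketch closely following the linked documentation (assumptions: the `Swin2SRImageProcessor`/`Swin2SRForImageSuperResolution` classes from recent `transformers` releases and a placeholder image URL):
```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import Swin2SRImageProcessor, Swin2SRForImageSuperResolution

processor = Swin2SRImageProcessor.from_pretrained("caidas/swin2SR-realworld-sr-x4-64-bsrgan-psnr")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-realworld-sr-x4-64-bsrgan-psnr")

# Placeholder image URL; any low-resolution RGB image works.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert the reconstructed tensor back to an 8-bit image.
output = outputs.reconstruction.squeeze().clamp(0, 1).numpy()
output = np.moveaxis(output, 0, -1)
upscaled = Image.fromarray((output * 255.0).round().astype(np.uint8))
upscaled.save("upscaled.png")
```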
|
65622a475bb71bafcbdfdb81723826af
|
jonatasgrosman/whisper-small-pt-cv11-v4
|
jonatasgrosman
|
whisper
| 14 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 2,207 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Portuguese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 pt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
- Wer: 14.8844
- Cer: 5.7447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 2.9379 | 0.92 | 500 | 0.4783 | 17.3806 | 7.0572 |
| 2.1727 | 1.84 | 1000 | 0.3721 | 17.2727 | 6.7975 |
| 1.7856 | 2.76 | 1500 | 0.3466 | 16.3790 | 6.4023 |
| 1.7803 | 3.68 | 2000 | 0.3372 | 15.9014 | 6.2089 |
| 1.8312 | 4.6 | 2500 | 0.3303 | 15.7473 | 6.0901 |
| 1.6403 | 5.52 | 3000 | 0.3256 | 15.9476 | 6.1896 |
| 1.536 | 6.45 | 3500 | 0.3235 | 15.5008 | 6.0928 |
| 1.4223 | 7.37 | 4000 | 0.3209 | 15.3621 | 6.0735 |
| 1.4652 | 8.29 | 4500 | 0.3209 | 15.2696 | 5.9326 |
| 1.2572 | 9.21 | 5000 | 0.3191 | 14.8844 | 5.7447 |
| 1.7142 | 10.13 | 5500 | 0.3182 | 15.0077 | 5.8469 |
| 1.4195 | 11.05 | 6000 | 0.3171 | 15.0693 | 5.8856 |
| 1.3965 | 11.97 | 6500 | 0.3167 | 15.0539 | 5.8580 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
ba0e9d8467f8be4795c288b59bdeda29
|
Z3R069/sick
|
Z3R069
| null | 18 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 605 | false |
### sick Dreambooth model trained by Z3R069 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
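Alternatively, a rough local-inference sketch with 🤗 Diffusers is shown below. The trigger word in the prompt is assumed to match the session name `sick`, which is only a guess based on the repository name.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Z3R069/sick", torch_dtype=torch.float16
).to("cuda")

# "sick" is the assumed instance token from training
image = pipe("a photo of sick, highly detailed", num_inference_steps=50).images[0]
image.save("sick_sample.png")
```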
Sample pictures of this concept:
|
5645af73aa6b26fdef26aaa654a401b5
|
jonatasgrosman/exp_w2v2t_de_unispeech-sat_s75
|
jonatasgrosman
|
unispeech-sat
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de']
| false | true | true | 462 | false |
# exp_w2v2t_de_unispeech-sat_s75
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
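Since the model was produced with HuggingSound, a transcription sketch with that library looks roughly like this (the audio paths are illustrative):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_unispeech-sat_s75")
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]  # 16kHz input expected

transcriptions = model.transcribe(audio_paths)
for item in transcriptions:
    print(item["transcription"])
```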
|
d035abfb828d12c6755b5833ddade02b
|
Omerdor/dry_samples_train
|
Omerdor
| null | 22 | 3 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,195 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dry_samples_train
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
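Until the snippet above is filled in, here is a rough sketch. It assumes the checkpoint was saved as a standard unconditional 🤗 Diffusers pipeline, which this card does not confirm.
```python
from diffusers import DiffusionPipeline

# DiffusionPipeline resolves whatever pipeline class was stored in the repository
pipeline = DiffusionPipeline.from_pretrained("Omerdor/dry_samples_train")
image = pipeline().images[0]
image.save("sample.png")
```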
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 4
- gradient_accumulation_steps: 3
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/dry_samples_train/tensorboard?#scalars)
|
7892ad55dec4cf99c6d770790b387c53
|
Rocketknight1/gpt2-finetuned-wikitext2
|
Rocketknight1
|
gpt2
| 9 | 12 |
transformers
| 0 |
text-generation
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,174 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 7.3062
- Validation Loss: 6.7676
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.3062 | 6.7676 | 0 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.3.3.dev0
- Tokenizers 0.11.0
|
a2a1d56521fe37c668491dcd9a2bd575
|
gokuls/distilbert_sa_GLUE_Experiment_data_aug_cola_384
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,735 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_cola_384
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7008
- Matthews Correlation: 0.1207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5179 | 1.0 | 835 | 0.7008 | 0.1207 |
| 0.3641 | 2.0 | 1670 | 0.9121 | 0.1063 |
| 0.2641 | 3.0 | 2505 | 1.0415 | 0.0951 |
| 0.1963 | 4.0 | 3340 | 1.2167 | 0.1072 |
| 0.1519 | 5.0 | 4175 | 1.3170 | 0.1162 |
| 0.1191 | 6.0 | 5010 | 1.4385 | 0.1118 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
1fd52e92c029346c05cb858db393c368
|
dcae10/distilbert-base-uncased-finetuned-imdb
|
dcae10
|
distilbert
| 9 | 9 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.76 | 1.0 | 157 | 0.6640 |
| 0.688 | 2.0 | 314 | 0.6581 |
| 0.6768 | 3.0 | 471 | 0.6604 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ae45399301e60ec7b30b4faaeefcc52a
|
gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53
|
gary109
|
wav2vec2
| 22 | 3 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'gary109/AI_Light_Dance', 'generated_from_trainer']
| true | true | true | 1,962 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4327
- Wer: 0.2043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4089 | 1.0 | 552 | 1.4750 | 0.9054 |
| 0.7995 | 2.0 | 1104 | 0.9044 | 0.6163 |
| 0.6232 | 3.0 | 1656 | 0.6645 | 0.3980 |
| 0.5351 | 4.0 | 2208 | 0.5674 | 0.3120 |
| 0.472 | 5.0 | 2760 | 0.5167 | 0.2579 |
| 0.3913 | 6.0 | 3312 | 0.4553 | 0.2335 |
| 0.3306 | 7.0 | 3864 | 0.4476 | 0.2114 |
| 0.3028 | 8.0 | 4416 | 0.4327 | 0.2043 |
| 0.317 | 9.0 | 4968 | 0.4355 | 0.2033 |
| 0.2494 | 10.0 | 5520 | 0.4405 | 0.2022 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.1.dev0
- Tokenizers 0.12.1
|
5574308bd665106036931ae96955cdc1
|
reemalyami/AraRoBERTa-DZ
|
reemalyami
|
roberta
| 6 | 1 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['ar']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,665 | false |
The **AraRoBERTa** models are mono-dialectal Arabic models, each trained on a country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/).
The following are the seven AraRoBERTa dialectal variations:
* [AraRoBERTa-SA](https://huggingface.co/reemalyami/AraRoBERTa-SA): Saudi Arabia (SA) dialect.
* [AraRoBERTa-EGY](https://huggingface.co/reemalyami/AraRoBERTa-EGY): Egypt (EGY) dialect.
* [AraRoBERTa-KU](https://huggingface.co/reemalyami/AraRoBERTa-KU): Kuwait (KU) dialect.
* [AraRoBERTa-OM](https://huggingface.co/reemalyami/AraRoBERTa-OM): Oman (OM) dialect.
* [AraRoBERTa-LB](https://huggingface.co/reemalyami/AraRoBERTa-LB): Lebanon (LB) dialect.
* [AraRoBERTa-JO](https://huggingface.co/reemalyami/AraRoBERTa-JO): Jordan (JO) dialect.
* [AraRoBERTa-DZ](https://huggingface.co/reemalyami/AraRoBERTa-DZ): Algeria (DZ) dialect.
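A minimal masked-prediction sketch for one of these checkpoints is shown below. The Arabic example sentence is only illustrative, and the mask token is read from the tokenizer rather than hard-coded.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="reemalyami/AraRoBERTa-DZ")
mask = unmasker.tokenizer.mask_token  # avoids assuming a specific mask token

for prediction in unmasker(f"العاصمة الجزائرية هي {mask}."):
    print(prediction["token_str"], round(prediction["score"], 3))
```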
# When using the model, please cite our paper:
```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
author = "AlYami, Reem and Al-Zaidy, Rabah",
booktitle = "Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.wanlp-1.24",
pages = "260--272",
}
```
# Contact
**Reem AlYami**: [Linkedin](https://www.linkedin.com/in/reem-alyami/) | <reem.yami@kfupm.edu.sa> | <yami.m.reem@gmail.com>
|
f3c180bc75dfbae9259697b82940b558
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-0
|
SetFit
|
distilbert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,153 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-32-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7714
- Accuracy: 0.705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0871 | 1.0 | 19 | 1.0704 | 0.45 |
| 1.0019 | 2.0 | 38 | 1.0167 | 0.55 |
| 0.8412 | 3.0 | 57 | 0.9134 | 0.55 |
| 0.6047 | 4.0 | 76 | 0.8430 | 0.6 |
| 0.3746 | 5.0 | 95 | 0.8315 | 0.6 |
| 0.1885 | 6.0 | 114 | 0.8585 | 0.6 |
| 0.0772 | 7.0 | 133 | 0.9443 | 0.65 |
| 0.0312 | 8.0 | 152 | 1.1019 | 0.65 |
| 0.0161 | 9.0 | 171 | 1.1420 | 0.65 |
| 0.0102 | 10.0 | 190 | 1.2773 | 0.65 |
| 0.0077 | 11.0 | 209 | 1.2454 | 0.65 |
| 0.0064 | 12.0 | 228 | 1.2785 | 0.65 |
| 0.006 | 13.0 | 247 | 1.3834 | 0.65 |
| 0.0045 | 14.0 | 266 | 1.4139 | 0.65 |
| 0.0043 | 15.0 | 285 | 1.4056 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
34666f37234fded5787b3006269c0da2
|
Abdo96/whisper-small-ar
|
Abdo96
|
whisper
| 18 | 4 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,583 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ar - Abdallah Elbohy
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It is intended for short-form transcription (segments of up to 30 seconds); long-form transcription still has some limitations and challenges.
It achieves the following results on the evaluation set:
- Loss: 0.3791
- Wer: 49.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
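For long-form audio in particular, chunked inference with the ASR pipeline is one common workaround; a rough sketch is shown below (the file path is illustrative).
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Abdo96/whisper-small-ar",
    chunk_length_s=30,  # split long recordings into 30-second windows
    stride_length_s=5,  # overlap the windows to reduce boundary errors
)
print(asr("long_recording_ar.wav")["text"])
```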
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0972 | 0.57 | 1000 | 0.3791 | 49.8081 |
| 0.0978 | 1.14 | 2000 | 0.3791 | 49.8081 |
| 0.0986 | 1.71 | 3000 | 0.3791 | 49.8081 |
| 0.1055 | 2.28 | 4000 | 0.3791 | 49.8081 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
f24dcda4ab1b35cae0579612ce64fbac
|
SebLih/whisper-SV3
|
SebLih
|
whisper
| 15 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,593 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small SV
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3516
- Wer: 23.0598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3274 | 0.86 | 200 | 0.3552 | 24.7469 |
| 0.1395 | 1.72 | 400 | 0.3303 | 23.5038 |
| 0.074 | 2.59 | 600 | 0.3349 | 22.6603 |
| 0.0199 | 3.45 | 800 | 0.3451 | 22.7935 |
| 0.0089 | 4.31 | 1000 | 0.3516 | 23.0598 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
641414f9f841dad5300a1a0e55dbaa11
|
Oleksandr2003/my_awesome_qa_model
|
Oleksandr2003
|
distilbert
| 8 | 4 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,403 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Oleksandr2003/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6133
- Validation Loss: 1.8637
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
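As a placeholder, a question-answering sketch is shown below. The repository holds TensorFlow weights, so `framework="tf"` is passed explicitly; the question and context are made up.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Oleksandr2003/my_awesome_qa_model",
    framework="tf",  # the checkpoint was trained with Keras/TensorFlow
)
result = qa(
    question="Which library was used for fine-tuning?",
    context="The model was fine-tuned with the Hugging Face Transformers library using Keras.",
)
print(result["answer"], result["score"])
```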
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5361 | 2.3102 | 0 |
| 1.9179 | 1.8637 | 1 |
| 1.6133 | 1.8637 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
4825f382c31b510a77c5bb34e7e81535
|
sd-concepts-library/fireworks-over-water
|
sd-concepts-library
| null | 8 | 0 | null | 2 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 952 | false |
### Fireworks Over Water on Stable Diffusion
This is the `<firework>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:



|
cf0aa5dc40e89885f9c158b8c2f85ad4
|
okho0653/distilbert-base-uncased-finetuned-sst-2-english-finetuned-20pc
|
okho0653
|
distilbert
| 13 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,623 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-20pc
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5078
- Accuracy: 0.8333
- F1: 0.3721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 41 | 0.3986 | 0.8272 | 0.0667 |
| No log | 2.0 | 82 | 0.3829 | 0.8519 | 0.4 |
| No log | 3.0 | 123 | 0.4916 | 0.8333 | 0.2286 |
| No log | 4.0 | 164 | 0.4894 | 0.8333 | 0.4490 |
| No log | 5.0 | 205 | 0.5078 | 0.8333 | 0.3721 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
0c6e9e067367ebb1caac6dabc9e41a0f
|
sentence-transformers/average_word_embeddings_komninos
|
sentence-transformers
| null | 8 | 0 |
sentence-transformers
| 0 |
sentence-similarity
| false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
| false | true | true | 2,000 | false |
# average_word_embeddings_komninos
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/average_word_embeddings_komninos')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/average_word_embeddings_komninos)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(222305, 300)
)
(1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
aac3522fbc9bd9fc61a1a305c834c568
|
sail/poolformer_s12
|
sail
|
poolformer
| 5 | 971 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'vision']
| false | true | true | 5,124 | false |
# PoolFormer (S12 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator: pooling.
Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show that the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s12')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s12')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The PoolFormer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1,000 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| **PoolFormer-S12** | **77.2** | **12M** | **https://huggingface.co/sail/poolformer_s12** |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
```
|
05e995c4fa1c2134657cafff10562256
|
custom-diffusion-library/cat
|
custom-diffusion-library
| null | 5 | 0 |
diffusers
| 1 | null | true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'stable-diffusion', 'stable-diffusion-diffusers', 'diffusers']
| false | true | true | 1,144 | false |
# This is a Custom Diffusion model fine-tuned from Stable Diffusion v1-4.
[Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion/index.html) allows you to fine-tune text-to-image diffusion models, such as Stable Diffusion, given a few images of a new concept (~4-20).
Here we give an example model fine-tuned using 5 images of a cat downloaded from Unsplash. Example inference code is shown below.
## Example code of inference
```
git clone https://github.com/adobe-research/custom-diffusion
cd custom-diffusion
```
```python
import torch
from diffusers import StableDiffusionPipeline
from src import diffuser_training
device = 'cuda'
model_id = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)
diffuser_training.load_model(pipe.text_encoder, pipe.tokenizer, pipe.unet, 'cat.bin')
prompt = "<new1> cat swimming in a pool"
images = pipe(prompt, num_inference_steps=200, guidance_scale=6., eta=1.).images
```
<center>
<img src="https://huggingface.co/custom-diffusion-library/cat/resolve/main/cat.png" width="600" align="center" >
</center>
|
7d3322d569a537f820c9c3fbcb2474a8
|
sophiestein/experiment_2
|
sophiestein
|
distilbert
| 13 | 24 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,528 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# experiment_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1211
- Precision: 0.8841
- Recall: 0.8926
- F1: 0.8883
- Accuracy: 0.9747
## Model description
More information needed
## Intended uses & limitations
More information needed
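In the meantime, a short NER sketch with the token-classification pipeline (the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sophiestein/experiment_2",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Angela Merkel visited the Hugging Face office in New York."))
```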
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2418 | 1.0 | 878 | 0.0695 | 0.9159 | 0.9255 | 0.9207 | 0.9816 |
| 0.0541 | 2.0 | 1756 | 0.0592 | 0.9244 | 0.9343 | 0.9293 | 0.9833 |
| 0.0303 | 3.0 | 2634 | 0.0602 | 0.9260 | 0.9388 | 0.9323 | 0.9838 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.11.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
814697669f44d5d62526610eae4f132f
|
jojoUla/bert-large-uncased-finetuned-lowR100-5-uncased-DA-20
|
jojoUla
|
bert
| 18 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,212 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-lowR100-5-uncased-DA-20
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5116 | 1.0 | 1 | 6.5297 |
| 6.6949 | 2.0 | 2 | 6.9289 |
| 6.0946 | 3.0 | 3 | 7.6464 |
| 5.8742 | 4.0 | 4 | 4.8191 |
| 5.4365 | 5.0 | 5 | 6.1273 |
| 5.171 | 6.0 | 6 | 4.5528 |
| 4.4944 | 7.0 | 7 | 4.8541 |
| 4.1146 | 8.0 | 8 | 3.4321 |
| 3.4689 | 9.0 | 9 | 2.4818 |
| 3.6228 | 10.0 | 10 | 2.4444 |
| 3.147 | 11.0 | 11 | 1.0668 |
| 2.969 | 12.0 | 12 | 3.5394 |
| 2.9788 | 13.0 | 13 | 3.1681 |
| 2.9108 | 14.0 | 14 | 1.6325 |
| 2.9377 | 15.0 | 15 | 2.0480 |
| 2.6179 | 16.0 | 16 | 2.6157 |
| 2.8978 | 17.0 | 17 | 3.3663 |
| 2.6496 | 18.0 | 18 | 2.6341 |
| 2.592 | 19.0 | 19 | 2.6462 |
| 2.5212 | 20.0 | 20 | 2.2172 |
| 2.402 | 21.0 | 21 | 3.3419 |
| 2.3146 | 22.0 | 22 | 1.8095 |
| 2.5215 | 23.0 | 23 | 2.7622 |
| 2.1736 | 24.0 | 24 | 3.9402 |
| 2.4366 | 25.0 | 25 | 2.3742 |
| 2.1603 | 26.0 | 26 | 2.4520 |
| 2.21 | 27.0 | 27 | 3.8185 |
| 2.1954 | 28.0 | 28 | 4.0015 |
| 2.6556 | 29.0 | 29 | 2.4132 |
| 2.3936 | 30.0 | 30 | 3.8690 |
| 2.2442 | 31.0 | 31 | 3.7408 |
| 2.2486 | 32.0 | 32 | 2.5657 |
| 2.5066 | 33.0 | 33 | 3.6632 |
| 2.0527 | 34.0 | 34 | 2.9892 |
| 2.6207 | 35.0 | 35 | 3.5594 |
| 2.296 | 36.0 | 36 | 2.3785 |
| 2.4068 | 37.0 | 37 | 3.6126 |
| 2.257 | 38.0 | 38 | 1.0477 |
| 2.0597 | 39.0 | 39 | 1.5386 |
| 2.1702 | 40.0 | 40 | 2.4686 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
fdd04cabed239324c0711752686699fb
|
maretamasaeva/thesis-freeform
|
maretamasaeva
|
roberta
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,373 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# thesis-freeform
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Accuracy: 0.4636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6922 | 1.0 | 5684 | 0.6928 | 0.4636 |
| 0.6946 | 2.0 | 11368 | 0.6918 | 0.4636 |
| 0.692 | 3.0 | 17052 | 0.6949 | 0.4636 |
| 0.6901 | 4.0 | 22736 | 0.6933 | 0.4636 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
358c817cfd15c1683e144a232c7fb706
|
willcai/wav2vec2_common_voice_accents_6
|
willcai
|
wav2vec2
| 11 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,365 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_6
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8539 | 25.0 | 400 | 0.3711 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
fd0855d9768a198e09b8975de0693d50
|
ThomasNLG/t5-weighter_cnndm-en
|
ThomasNLG
|
t5
| 8 | 444 |
transformers
| 0 |
text2text-generation
| true | false | true |
mit
|
['en']
|
['squad', 'cnndm']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['qa', 'classification', 'question', 'answering', 'SQuAD', 'metric', 'nlg', 't5-small']
| false | true | true | 1,365 | false |
# t5-weighter_cnndm-en
## Model description
This model is a *Classifier* based on T5-small that predicts whether an answer/question pair is an important fact (i.e., is this answer relevant enough to appear in a plausible summary?).
It is a component of the [QuestEval](https://github.com/ThomasScialom/QuestEval) metric but can also be used independently.
## How to use
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
```
You can play with the model using the inference API, the text input format should follow this template (accordingly to the training stage of the model):
`text_input = "{ANSWER} </s> {QUESTION} </s> {CONTEXT}"`
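For example, scoring a single (answer, question, context) triple could look like the sketch below. The classifier's exact output vocabulary is not documented here, so the decoded string is printed as-is.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")
model = T5ForConditionalGeneration.from_pretrained("ThomasNLG/t5-weighter_cnndm-en")

answer = "Paris"
question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."

text_input = f"{answer} </s> {question} </s> {context}"
inputs = tokenizer(text_input, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```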
## Training data
The model was trained on synthetic data as described in [Questeval: Summarization asks for fact-based evaluation](https://arxiv.org/abs/2103.12693).
### Citation info
```bibtex
@article{scialom2021questeval,
title={Questeval: Summarization asks for fact-based evaluation},
author={Scialom, Thomas and Dray, Paul-Alexis and Gallinari, Patrick and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo and Wang, Alex},
journal={arXiv preprint arXiv:2103.12693},
year={2021}
}
```
|
faf9c3648e9385b72739227e635fa6ac
|
muhtasham/finetuned-mlm_medium
|
muhtasham
|
bert
| 10 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,721 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mlm_medium
This model is a fine-tuned version of [muhtasham/bert-medium-mlm-finetuned-emotion](https://huggingface.co/muhtasham/bert-medium-mlm-finetuned-emotion) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2805
- Accuracy: 0.9542
- F1: 0.9765
## Model description
More information needed
## Intended uses & limitations
More information needed
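In the meantime, a quick classification sketch (the label names are whatever the fine-tuning produced, typically `LABEL_0`/`LABEL_1`, so they are not interpreted here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="muhtasham/finetuned-mlm_medium")
print(classifier("This movie was an absolute delight from start to finish."))
```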
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2318 | 2.55 | 500 | 0.1428 | 0.9512 | 0.9750 |
| 0.0777 | 5.1 | 1000 | 0.1976 | 0.9513 | 0.9750 |
| 0.0362 | 7.65 | 1500 | 0.2704 | 0.9388 | 0.9684 |
| 0.0234 | 10.2 | 2000 | 0.2245 | 0.9578 | 0.9784 |
| 0.0181 | 12.76 | 2500 | 0.3703 | 0.9310 | 0.9643 |
| 0.0158 | 15.31 | 3000 | 0.6137 | 0.9001 | 0.9474 |
| 0.013 | 17.86 | 3500 | 0.2805 | 0.9542 | 0.9765 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
d58e0306bbe004ec683c4c7f45162d04
|
jonatasgrosman/exp_w2v2t_ja_xls-r_s941
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ja']
| false | true | true | 453 | false |
# exp_w2v2t_ja_xls-r_s941
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
aa194ad7a7dafa3082fcd7705853f38a
|
jonaskoenig/topic_classification_04
|
jonaskoenig
|
bert
| 8 | 5,762 |
transformers
| 8 |
text-classification
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 2 | 2 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,782 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# topic_classification_04
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8325
- Train Sparse Categorical Accuracy: 0.7237
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:-----:|
| 1.0735 | 0.6503 | 0 |
| 0.9742 | 0.6799 | 1 |
| 0.9424 | 0.6900 | 2 |
| 0.9199 | 0.6970 | 3 |
| 0.9016 | 0.7026 | 4 |
| 0.8853 | 0.7073 | 5 |
| 0.8707 | 0.7120 | 6 |
| 0.8578 | 0.7160 | 7 |
| 0.8448 | 0.7199 | 8 |
| 0.8325 | 0.7237 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
31a86ab3fd2f828c48adaac062413952
|
likejazz/xlm-roberta-base-finetuned-panx-all
|
likejazz
|
xlm-roberta
| 10 | 1 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1574
- F1: 0.8504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.1897 | 0.8147 |
| No log | 2.0 | 358 | 0.1624 | 0.8394 |
| No log | 3.0 | 537 | 0.1574 | 0.8504 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu117
- Datasets 1.16.1
- Tokenizers 0.10.3
|
6b44d8f92e1a9fc221db4cae22e5baa2
|
venetis/convnext-tiny-224_album_vit
|
venetis
|
convnext
| 19 | 2 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,442 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224_album_vit
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3898
- Accuracy: 0.4912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6659 | 1.0 | 944 | 3.5335 | 0.2607 |
| 2.8174 | 2.0 | 1888 | 2.6391 | 0.4418 |
| 2.4959 | 3.0 | 2832 | 2.3898 | 0.4912 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
dc4ca0628ad6088f8cae17660bfdae71
|
hassnain/wav2vec2-base-timit-demo-colab70
|
hassnain
|
wav2vec2
| 12 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,462 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab70
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7439
- Wer: 0.5149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8646 | 7.04 | 500 | 3.1467 | 1.0 |
| 1.678 | 14.08 | 1000 | 0.8738 | 0.6511 |
| 0.5083 | 21.13 | 1500 | 0.7404 | 0.5504 |
| 0.2923 | 28.17 | 2000 | 0.7439 | 0.5149 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
d2716387728e5fa9f4514c9685318cff
|
muhtasham/olm-bert-tiny-december-2022-target-glue-mrpc
|
muhtasham
|
bert
| 11 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,604 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# olm-bert-tiny-december-2022-target-glue-mrpc
This model is a fine-tuned version of [muhtasham/olm-bert-tiny-december-2022](https://huggingface.co/muhtasham/olm-bert-tiny-december-2022) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9243
- Accuracy: 0.6299
- F1: 0.7146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6093 | 4.35 | 500 | 0.5848 | 0.7034 | 0.7980 |
| 0.5487 | 8.7 | 1000 | 0.5863 | 0.7206 | 0.8087 |
| 0.4724 | 13.04 | 1500 | 0.6881 | 0.6544 | 0.7294 |
| 0.3752 | 17.39 | 2000 | 0.7549 | 0.6520 | 0.7331 |
| 0.276 | 21.74 | 2500 | 0.9243 | 0.6299 | 0.7146 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
1c2429d01dcd478d44ea05217f637fa7
|
KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith
|
KoichiYasuoka
|
roberta
| 10 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['lzh']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classical chinese', 'literary chinese', 'ancient chinese', 'token-classification', 'pos', 'dependency-parsing']
| false | true | true | 2,864 | false |
# roberta-classical-chinese-large-ud-goeswith
## Model Description
This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-classical-chinese-large-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-large-char) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
## How to Use
```py
class UDgoeswith(object):
def __init__(self,bert):
from transformers import AutoTokenizer,AutoModelForTokenClassification
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForTokenClassification.from_pretrained(bert)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=self.tokenizer(text,return_offsets_mapping=True)
v=w["input_ids"]
x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)]
with torch.no_grad():
e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:]
r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())]
e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan)
g=self.model.config.label2id["X|_|goeswith"]
r=numpy.tri(e.shape[0])
for i in range(e.shape[0]):
for j in range(i+2,e.shape[1]):
r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1
e[:,:,g]+=numpy.where(r==0,0,numpy.nan)
m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan)
m[1:,1:]=numpy.nanmax(e,axis=2).transpose()
p=numpy.zeros(m.shape)
p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()
for i in range(1,m.shape[0]):
m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan)
m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text+"\n"
v=[(s,e) for s,e in w["offset_mapping"] if s<e]
for i,(s,e) in enumerate(v,1):
q=self.model.config.id2label[p[i,h[i]]].split("|")
u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=UDgoeswith("KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith")
print(nlp("孟子見梁惠王"))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).
Or without ufal.chu-liu-edmonds:
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-classical-chinese-large-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
|
f64d93247ebb2ed06a86ef4e9989ed99
|
mateocolina/xlm-roberta-base-finetuned-marc-en
|
mateocolina
|
xlm-roberta
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,275 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9276
- Mae: 0.5366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0992 | 1.0 | 235 | 0.9340 | 0.5122 |
| 0.945 | 2.0 | 470 | 0.9276 | 0.5366 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
8b3febad6aeff95800e413b87671e412
|
jonatasgrosman/exp_w2v2t_nl_vp-fr_s226
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['nl']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'nl']
| false | true | true | 469 | false |
# exp_w2v2t_nl_vp-fr_s226
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
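The card itself does not include code; a minimal transcription sketch with the HuggingSound library (the audio path below is a placeholder) might look like this:
```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned Dutch checkpoint and transcribe a local 16 kHz recording
# ("sample.wav" is a placeholder path).
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_nl_vp-fr_s226")
transcriptions = model.transcribe(["sample.wav"])
print(transcriptions[0]["transcription"])
```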
|
14b77201a9bb79cec12c514bd0f52bc2
|
sd-concepts-library/wildkat
|
sd-concepts-library
| null | 14 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,510 | false |
### Wildkat on Stable Diffusion
This is the `<wildkat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:









|
ce4b0c8aa6f417aec7ef31d231bc9b8c
|
Helsinki-NLP/opus-mt-fr-vi
|
Helsinki-NLP
|
marian
| 11 | 80 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['fr', 'vi']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,002 | false |
### fra-vie
* source group: French
* target group: Vietnamese
* OPUS readme: [fra-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.vie | 31.1 | 0.486 |
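The card does not show inference code; a minimal sketch with the transformers translation pipeline (the example sentence is arbitrary) could be:
```python
from transformers import pipeline

# French-to-Vietnamese translation using the short pair name fr-vi.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-vi")
print(translator("Bonjour, comment allez-vous ?")[0]["translation_text"])
```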
### System Info:
- hf_name: fra-vie
- source_languages: fra
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'vi']
- src_constituents: {'fra'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: vie
- short_pair: fr-vi
- chrF2_score: 0.486
- bleu: 31.1
- brevity_penalty: 0.985
- ref_len: 13219.0
- src_name: French
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: vi
- prefer_old: False
- long_pair: fra-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
75bd0b5f577e84e44d8149c3589857c7
|
muhtasham/finetuned-self_mlm_mini
|
muhtasham
|
bert
| 10 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,714 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_mini
This model is a fine-tuned version of [muhtasham/bert-tiny-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-tiny-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6150
- Accuracy: 0.8224
- F1: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4426 | 2.55 | 500 | 0.4673 | 0.7928 | 0.8844 |
| 0.2845 | 5.1 | 1000 | 0.3099 | 0.8697 | 0.9303 |
| 0.2282 | 7.65 | 1500 | 0.3432 | 0.8589 | 0.9241 |
| 0.1819 | 10.2 | 2000 | 0.2702 | 0.8998 | 0.9472 |
| 0.1461 | 12.76 | 2500 | 0.4852 | 0.8344 | 0.9097 |
| 0.111 | 15.31 | 3000 | 0.6807 | 0.7950 | 0.8858 |
| 0.0883 | 17.86 | 3500 | 0.6150 | 0.8224 | 0.9025 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
e8ed5b359ef6ac77bf05bb1804a94c75
|
Helsinki-NLP/opus-mt-lu-es
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-lu-es
* source languages: lu
* target languages: es
* OPUS readme: [lu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lu.es | 22.4 | 0.400 |
|
15b9c308ee215eb4407c02b00e31bc16
|
tangoqash/SAM
|
tangoqash
|
distilbert
| 13 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,045 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAM
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3061
- Accuracy: 0.8733
- F1: 0.8742
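A hedged inference sketch (not part of the original card; the returned label names depend on the model's config):
```python
from transformers import pipeline

# Sentiment classification with the fine-tuned DistilBERT checkpoint.
classifier = pipeline("text-classification", model="tangoqash/SAM")
print(classifier("This movie was surprisingly good."))
```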
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
e152bc1ba69dd2dd337b3461b439c80f
|
leander/bert-finetuned-ner
|
leander
|
bert
| 12 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0589
- Precision: 0.9329
- Recall: 0.9507
- F1: 0.9417
- Accuracy: 0.9870
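A hedged usage sketch, assuming the standard transformers token-classification pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

# Named entity recognition with entity grouping.
ner = pipeline("token-classification", model="leander/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```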
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0867 | 1.0 | 1756 | 0.0639 | 0.9140 | 0.9386 | 0.9261 | 0.9831 |
| 0.0398 | 2.0 | 3512 | 0.0586 | 0.9326 | 0.9480 | 0.9402 | 0.9858 |
| 0.0212 | 3.0 | 5268 | 0.0589 | 0.9329 | 0.9507 | 0.9417 | 0.9870 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
47dfe92e6e406a583d6e7da18ee5fda8
|
Elytum/tiny-classification-fast
|
Elytum
|
bert
| 64 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,308 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-classification-fast
This model is a fine-tuned version of [cross-encoder/ms-marco-TinyBERT-L-2-v2](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8673
- Accuracy: 0.7786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9077 | 1.0 | 785 | 1.0466 | 0.7482 |
| 1.0061 | 2.0 | 1570 | 0.8673 | 0.7786 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
9eb789b27976a4104e5727c2c43ee674
|
Sushant45/Adult_contemporary_music-clustered
|
Sushant45
|
distilbert
| 8 | 28 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,879 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Adult_contemporary_music-clustered
This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2951
- Train End Logits Accuracy: 0.9375
- Train Start Logits Accuracy: 0.9028
- Validation Loss: 0.5855
- Validation End Logits Accuracy: 0.7143
- Validation Start Logits Accuracy: 0.8571
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2951 | 0.9375 | 0.9028 | 0.5855 | 0.7143 | 0.8571 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
38a4e5ed81cdfd2b7cbe4b999fe3c65b
|
ali2066/finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
|
ali2066
|
distilbert
| 13 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,805 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_webDiscourse_16_02_2022-20_59_50
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5450
- Precision: 0.0049
- Recall: 0.0146
- F1: 0.0074
- Accuracy: 0.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6830 | 0.0109 | 0.0323 | 0.0163 | 0.5685 |
| No log | 2.0 | 20 | 0.7187 | 0.0256 | 0.0323 | 0.0286 | 0.5668 |
| No log | 3.0 | 30 | 0.6839 | 0.0076 | 0.0484 | 0.0131 | 0.5848 |
| No log | 4.0 | 40 | 0.6988 | 0.0092 | 0.0484 | 0.0155 | 0.5918 |
| No log | 5.0 | 50 | 0.7055 | 0.0100 | 0.0484 | 0.0165 | 0.5946 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
fda1915a0be8f2eb479a1a7d30ed2f68
|
sd-concepts-library/crinos-form-garou
|
sd-concepts-library
| null | 9 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,045 | false |
### crinos form garou on Stable Diffusion
This is the `<crinos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
60cf63aed9fda58dbb71176dfc92a0ed
|
svo2/roberta-finetuned-location
|
svo2
|
roberta
| 15 | 16 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 983 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-location
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
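A hedged usage sketch with the question-answering pipeline (the question and context are invented examples):
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned RoBERTa checkpoint.
qa = pipeline("question-answering", model="svo2/roberta-finetuned-location")
print(qa(question="Where is the meeting?",
         context="The meeting takes place in Berlin next week."))
```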
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
8393c69afdaa92844f7f462103086137
|
ligerre/xlm-roberta-base-finetuned-panx-it
|
ligerre
|
xlm-roberta
| 10 | 23 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2401
- F1: 0.8246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8187 | 1.0 | 70 | 0.3325 | 0.7337 |
| 0.2829 | 2.0 | 140 | 0.2554 | 0.8003 |
| 0.1894 | 3.0 | 210 | 0.2401 | 0.8246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
fe29efb116e4c67750b0d46d721a126b
|
Fulccrum/trainii_ac94u-label-classification
|
Fulccrum
| null | 4 | 0 |
sklearn
| 0 |
tabular-classification
| false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tabular-classification', 'baseline-trainer']
| false | true | true | 7,270 | false |
## Baseline Model trained on trainii_ac94u to apply classification on label
**Metrics of the best model:**
| Metric | Value |
|------------------|----------|
| accuracy | 0.361046 |
| recall_macro | 0.353192 |
| precision_macro | 0.240667 |
| f1_macro | 0.278231 |

Best model: `LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`
**Best model pipeline:** an `EasyPreprocessor` (which detected the `id` column as continuous and the `text` column as free text) followed by `LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`.
**Disclaimer:** This model was trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Training logs**, including the models tried in the process, can be found in logs.txt.
|
18d7c9200c9cfd0a23a08dfe1b044127
|
jhmin/finetuning-sentiment-model-3000-samples
|
jhmin
|
distilbert
| 13 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3144
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
99f71ce38065efc9999690d1b88ed1e6
|
elopezlopez/Bio_ClinicalBERT_fold_7_ternary_v1
|
elopezlopez
|
bert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,668 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_7_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9612
- F1: 0.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.5762 | 0.7593 |
| 0.5434 | 2.0 | 582 | 0.5577 | 0.7939 |
| 0.5434 | 3.0 | 873 | 0.6501 | 0.7951 |
| 0.2198 | 4.0 | 1164 | 0.8661 | 0.7939 |
| 0.2198 | 5.0 | 1455 | 1.1493 | 0.7900 |
| 0.0953 | 6.0 | 1746 | 1.1999 | 0.7977 |
| 0.0375 | 7.0 | 2037 | 1.4623 | 0.7759 |
| 0.0375 | 8.0 | 2328 | 1.4526 | 0.7900 |
| 0.0246 | 9.0 | 2619 | 1.6915 | 0.7734 |
| 0.0246 | 10.0 | 2910 | 1.6097 | 0.7913 |
| 0.0113 | 11.0 | 3201 | 1.7091 | 0.8015 |
| 0.0113 | 12.0 | 3492 | 1.7252 | 0.7990 |
| 0.0103 | 13.0 | 3783 | 1.7305 | 0.8015 |
| 0.0079 | 14.0 | 4074 | 1.7932 | 0.8003 |
| 0.0079 | 15.0 | 4365 | 1.7800 | 0.8028 |
| 0.0071 | 16.0 | 4656 | 1.7000 | 0.7977 |
| 0.0071 | 17.0 | 4947 | 1.8342 | 0.8003 |
| 0.0077 | 18.0 | 5238 | 1.8517 | 0.7990 |
| 0.0044 | 19.0 | 5529 | 1.8633 | 0.7964 |
| 0.0044 | 20.0 | 5820 | 1.8813 | 0.7926 |
| 0.0028 | 21.0 | 6111 | 1.8914 | 0.7964 |
| 0.0028 | 22.0 | 6402 | 1.9412 | 0.7926 |
| 0.0043 | 23.0 | 6693 | 1.9760 | 0.7939 |
| 0.0043 | 24.0 | 6984 | 1.9509 | 0.7977 |
| 0.0002 | 25.0 | 7275 | 1.9612 | 0.7939 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
2a88b2acea22a3ba17b2d5005c1d3994
|
jwhe/prompt-extend-1epoch
|
jwhe
|
gpt2
| 12 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,195 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prompt-extend-1epoch
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9530
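A hedged generation sketch (not part of the original card; the prompt and sampling settings are arbitrary):
```python
from transformers import pipeline

# Generate a continuation of a short prompt with the fine-tuned GPT-2 checkpoint.
extend = pipeline("text-generation", model="jwhe/prompt-extend-1epoch")
print(extend("a castle on a hill", max_new_tokens=32, do_sample=True)[0]["generated_text"])
```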
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 512
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7574 | 1.0 | 3199 | 2.9530 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.11.0+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
|
89f96548e4a9f77fc3f5bd764e4485b0
|
faisito/xlm-roberta-base-finetuned-panx-it
|
faisito
|
xlm-roberta
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2532
- F1: 0.8222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8114 | 1.0 | 70 | 0.3235 | 0.7548 |
| 0.2825 | 2.0 | 140 | 0.2749 | 0.7913 |
| 0.1932 | 3.0 | 210 | 0.2532 | 0.8222 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
8a8fe827cc1b20fb798a9d76e0c1450b
|
thu-coai/CDial-GPT_LCCC-large
|
thu-coai
| null | 5 | 275 |
transformers
| 6 |
conversational
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['conversational']
| false | true | true | 1,084 | false |
## Chinese pre-trained dialogue model (CDial-GPT)
This project provides a large-scale Chinese GPT model pre-trained on the dataset [LCCC](https://huggingface.co/datasets/silver/lccc).
We present a series of Chinese GPT models that are first pre-trained on a Chinese novel dataset and then post-trained on our LCCC dataset.
Similar to [TransferTransfo](https://arxiv.org/abs/1901.08149), we concatenate all dialogue histories into one context sentence, and use this sentence to predict the response. The input of our model consists of word embedding, speaker embedding, and positional embedding of each word.
Paper: [A Large-Scale Chinese Short-Text Conversation Dataset](https://arxiv.org/pdf/2008.03946.pdf)
### How to use
```python
from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained("thu-coai/CDial-GPT_LCCC-large")
model = OpenAIGPTLMHeadModel.from_pretrained("thu-coai/CDial-GPT_LCCC-large")
```
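The snippet above only loads the weights; a hedged continuation for sampling a reply (ignoring the speaker/token-type handling done by the official interaction script) might be:
```python
# Continues from the tokenizer and model loaded above; this skips the speaker
# embeddings used by the official interaction script, so treat it as a rough sketch.
input_ids = tokenizer.encode("你好", return_tensors="pt")
output = model.generate(input_ids, max_length=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```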
For more details, please refer to our [repo](https://github.com/thu-coai/CDial-GPT) on GitHub.
|
c72d1c2e72e284628e143a696fff33a3
|
arjunchandra/ddpm-butterflies-128
|
arjunchandra
| null | 13 | 3 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,234 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sampling sketch (assumed usage): load the pipeline from the Hub and draw one image.
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("arjunchandra/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/arjunchandra/ddpm-butterflies-128/tensorboard?#scalars)
|
5b8cd52b0b65b02a8d93b7bfaa277e02
|
jimregan/psst-partial-timit
|
jimregan
|
wav2vec2
| 6 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['jimregan/psst', 'timit_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition']
| false | true | true | 7,035 | false |
This repository contains a number of experiments for the [PSST Challenge](https://psst.study/).
As the test set is unavailable, all numbers are based on the validation set.
The experiments in the tables below were finetuned on [Wav2vec 2.0 Base, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec)
Our overall best-performing model (**FER:** 9\.2%, **PER:** 21\.0%) was based on [Wav2vec 2.0 Large, No finetuning](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec) (git tag: `larger-rir`), with the TIMIT subset augmented with Room Impulse Response, following the experiments below, which were run on the base model.
## Augmented TIMIT subset
Using a subset of TIMIT that could map easily to the phoneset used by the PSST Challenge data (a list of IDs are in the repository), we experimented with augmenting the data to better match the PSST data.
The best results were obtained using Room Impulse Response (tag: `rir`)
| **Augmentation** | **FER** | **PER** | **Git tag** |
| :----------------------------------------------- | :-------- | :--------- | :---------------------------------- |
| unaugmented | 10\.2% | 22\.5% | huggingface-unaugmented |
| Gaussian noise | 10\.0% | 22\.1% | gaussian |
| Pitchshift | 9\.6% | 22\.9% | pitchshift |
| RIR | **9\.6%** | **21\.8%** | rir |
| Time stretch | 10\.1% | 22\.8% | timestretch |
| Gaussian noise + RIR | 10\.0% | 23\.4% | gaussian-rir |
| Pitchshift + Gaussian noise | 9\.9% | 22\.9% | pitchshift-gaussian |
| Pitchshift + RIR | 9\.9% | 22\.8% | pitchshift-rir |
| Time stretch + Gaussian noise                     | 10\.2%    | 22\.8%     | timestretch-gaussian                |
| Time stretch + Pitchshift | 9\.8% | 22\.0% | timestretch-pitchshift |
| Time stretch + RIR | 9\.7% | 22\.2% | timestretch-rir |
| Pitchshift + Gaussian noise + RIR | 10\.1% | 23\.5% | pitchshift-gaussian-rir |
| Time stretch + Gaussian noise + RIR | 9\.7% | 22\.3% | timestretch-gaussian-rir |
| Time stretch + Pitchshift + Gaussian noise | 10\.2% | 22\.9% | timestretch-pitchshift-gaussian |
| Time stretch + Pitchshift + RIR | 10\.2% | 22\.5% | timestretch-pitchshift-rir |
| Time stretch + Pitchshift + Gaussian noise + RIR | 10\.9% | 24\.1% | timestretch-pitchshift-gaussian-rir |
## LM experiments
We experimented with a number of language model configurations, combining the data from the PSST challenge, the subset of TIMIT we used, and CMUdict.
We tried combining CMUdict data in a number of ways: unmodified, with a silence token added at the start of the pronunciation, at the end, and at both the start and the end.
The best result was from a 5-gram model, with silences added at the end of the CMUdict data (git tag: `lm-nosil-cmudict-sile.5`).
Evaluation was performed using scripts provided by the PSST Challenge's organisers, so there are no scripts in place to automatically use the LM with the transformers library.
| | **n-gram** | **FER** | **PER** | **Tag** |
| :----------------------------- | :--------- | :--------- | :--------- | :--------- |
| Baseline + TIMIT | --- | **10\.2%** | 22\.5% | huggingface-unaugmented |
| All silences | 4 | 10\.5% | 23\.0% | lm-allsil.4 |
| | 5 | 10\.5% | 22\.6% | lm-allsil.5 |
| | 6 | 10\.3% | 22\.3% | lm-allsil.6 |
| No silences | 4 | 10\.3% | 22\.6% | lm-nosil.4 |
| | 5 | **10\.2%** | 22\.2% | lm-nosil.5 |
| | 6 | **10\.2%** | 22\.4% | lm-nosil.6 |
| PSST and TIMIT without silence | | | | |
| Unmodified CMUdict | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-nosil.4 |
| | 5 | 10\.2% | 22\.2% | lm-nosil-cmudict-nosil.5 |
| | 6 | **10\.2%** | 22\.4% | lm-nosil-cmudict-nosil.6 |
| CMUdict-end | 4 | 10\.3% | 22\.6% | lm-nosil-cmudict-sile.4 |
| | 5 | **10\.2%** | **22\.1%** | lm-nosil-cmudict-sile.5 |
| | 6 | **10\.2%** | 22\.3% | lm-nosil-cmudict-sile.6 |
| CMUdict-start | 4 | 10\.4% | 22\.6% | lm-nosil-cmudict-sils.4 |
| | 5 | 10\.3% | 22\.4% | lm-nosil-cmudict-sils.5 |
| | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-sils.6 |
| CMUdict-both | 4 | 10\.4% | 22\.7% | lm-nosil-cmudict-silb.4 |
| | 5 | 10\.4% | 22\.3% | lm-nosil-cmudict-silb.5 |
| | 6 | 10\.3% | 22\.3% | lm-nosil-cmudict-silb.6 |
| Unmodified PSST and TIMIT | | | | |
| Unmodified CMUdict | 4 | 10\.3% | 22\.8% | lm-orig-cmudict-nosil.4 |
| | 5 | 10\.3% | 22\.4% | lm-orig-cmudict-nosil.5 |
| | 6 | **10\.2%** | 22\.4% | lm-orig-cmudict-nosil.6 |
| CMUdict-end | 4 | 10\.3% | 22\.7% | lm-orig-cmudict-sile.4 |
| | 5 | **10\.2%** | 22\.2% | lm-orig-cmudict-sile.5 |
| | 6 | **10\.2%** | 22\.3% | lm-orig-cmudict-sile.6 |
| CMUdict-start | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-sils.4 |
| | 5 | 10\.4% | 22\.5% | lm-orig-cmudict-sils.5 |
| | 6 | 10\.3% | 22\.4% | lm-orig-cmudict-sils.6 |
| CMUdict-both | 4 | 10\.5% | 22\.8% | lm-orig-cmudict-silb.4 |
| | 5 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.5 |
| | 6 | 10\.4% | 22\.4% | lm-orig-cmudict-silb.6 |
|
cb911a75a9de3ddac897f2313cce55b1
|
takehiro067/distilbert-base-uncased-finetuned-emotion
|
takehiro067
|
distilbert
| 16 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,338 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2213
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8391 | 1.0 | 250 | 0.3177 | 0.9035 | 0.9006 |
| 0.2526 | 2.0 | 500 | 0.2213 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
0deda740cab344ce8b8a6dc3005f3c99
|