Dataset schema (column name, dtype, and observed range of lengths/values or number of classes):

| column | dtype | range / classes |
|---|---|---|
| repo_id | string | lengths 4 to 110 |
| author | string | lengths 2 to 27 |
| model_type | string | lengths 2 to 29 |
| files_per_repo | int64 | 2 to 15.4k |
| downloads_30d | int64 | 0 to 19.9M |
| library | string | lengths 2 to 37 |
| likes | int64 | 0 to 4.34k |
| pipeline | string | lengths 5 to 30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2 to 30 |
| languages | string | lengths 4 to 1.63k |
| datasets | string | lengths 2 to 2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0 to 125 |
| prs_open | int64 | 0 to 120 |
| prs_merged | int64 | 0 to 15 |
| prs_closed | int64 | 0 to 28 |
| discussions_count | int64 | 0 to 218 |
| discussions_open | int64 | 0 to 148 |
| discussions_closed | int64 | 0 to 70 |
| tags | string | lengths 2 to 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 to 598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0 to 598k |
| hash | string | lengths 32 to 32 |
it5/mt5-base-question-answering
it5
mt5
11
6
transformers
0
text2text-generation
true
true
true
apache-2.0
['it']
['squad_it']
{'emissions': '40g', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'}
0
0
0
0
0
0
0
['italian', 'sequence-to-sequence', 'squad_it', 'text2text-question-answering', 'text2text-generation']
true
true
true
2,658
false
# mT5 Base for Question Answering ⁉️ 🇮🇹 This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline qa = pipeline("text2text-generation", model='it5/mt5-base-question-answering') qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?") >>> [{"generated_text": "ultimo massimo glaciale"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-question-answering") model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-question-answering") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
a819ac9cb222d77f29ade3f2bd612532
Haifeng1999/ddpm-butterflies-128
Haifeng1999
null
13
3
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,233
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Haifeng1999/ddpm-butterflies-128/tensorboard?#scalars)
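The "How to use" section of the card above is still a TODO; the following is a minimal sketch of how a checkpoint trained with the 🤗 Diffusers unconditional training script is typically sampled, assuming this repo follows the standard `DDPMPipeline` layout (the pipeline class and output filename are assumptions, not taken from the card).

```python
import torch
from diffusers import DDPMPipeline

# Load the unconditional diffusion checkpoint from the Hub (assumed DDPMPipeline layout).
pipeline = DDPMPipeline.from_pretrained("Haifeng1999/ddpm-butterflies-128")
pipeline = pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# Sample a single 128x128 butterfly image; fewer inference steps trade quality for speed.
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```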
1e74370ab98328a7c0b0910cbfb9f6e0
Siqi/marian-finetuned-kde4-en-to-fr-2
Siqi
marian
14
3
transformers
0
translation
true
false
false
apache-2.0
null
['kde4']
null
0
0
0
0
0
0
0
['translation', 'generated_from_trainer']
true
true
true
1,077
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr-2 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8559 - Bleu: 52.9326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
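The card above reports BLEU but gives no inference snippet; here is a minimal sketch using the `transformers` translation pipeline (the model id is the repo name, and the English input string is an arbitrary placeholder).

```python
from transformers import pipeline

# English-to-French translator fine-tuned on KDE4 technical strings.
translator = pipeline("translation", model="Siqi/marian-finetuned-kde4-en-to-fr-2")

# Placeholder input; any English sentence works.
result = translator("Default to expanded threads")
print(result[0]["translation_text"])
```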
ee95d5d01e05a68ce5a5fa0a6d70910e
thisisHJLee/wav2vec2-large-xls-r-1b-korean-sample5
thisisHJLee
wav2vec2
10
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,496
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-1b-korean-sample5 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1118 - Cer: 0.0217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3411 | 1.0 | 12588 | 0.2680 | 0.0738 | | 0.2237 | 2.0 | 25176 | 0.1812 | 0.0470 | | 0.1529 | 3.0 | 37764 | 0.1482 | 0.0339 | | 0.1011 | 4.0 | 50352 | 0.1168 | 0.0256 | | 0.0715 | 5.0 | 62940 | 0.1118 | 0.0217 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.11.0
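No usage example is given in the card above; a minimal sketch with the `automatic-speech-recognition` pipeline follows, assuming a 16 kHz mono Korean recording at a placeholder path (decoding audio files this way requires ffmpeg).

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="thisisHJLee/wav2vec2-large-xls-r-1b-korean-sample5",
)

# "sample_ko.wav" is a placeholder path to 16 kHz mono Korean speech.
print(asr("sample_ko.wav")["text"])
```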
d0bd22589e65f0ec701dc4155614afb5
jojoUla/bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30
jojoUla
bert
16
0
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,788
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30 This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-40) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.8321 | 1.0 | 2 | 4.3250 | | 3.383 | 2.0 | 4 | 2.4023 | | 1.9548 | 3.0 | 6 | 1.2925 | | 1.4856 | 4.0 | 8 | 1.5152 | | 0.9588 | 5.0 | 10 | 1.7731 | | 1.2668 | 6.0 | 12 | 1.3830 | | 0.8441 | 7.0 | 14 | 1.9760 | | 1.0173 | 8.0 | 16 | 1.2364 | | 0.6814 | 9.0 | 18 | 1.1771 | | 0.9044 | 10.0 | 20 | 1.4721 | | 0.6889 | 11.0 | 22 | 0.8518 | | 0.5845 | 12.0 | 24 | 0.6993 | | 0.4068 | 13.0 | 26 | 1.1771 | | 0.5957 | 14.0 | 28 | 0.5895 | | 0.4277 | 15.0 | 30 | 0.5326 | | 0.3736 | 16.0 | 32 | 1.0893 | | 0.413 | 17.0 | 34 | 1.3267 | | 0.5718 | 18.0 | 36 | 1.0331 | | 0.3892 | 19.0 | 38 | 1.0793 | | 0.3913 | 20.0 | 40 | 0.8742 | | 0.4794 | 21.0 | 42 | 1.1264 | | 0.4626 | 22.0 | 44 | 1.1857 | | 0.2683 | 23.0 | 46 | 1.5181 | | 0.3436 | 24.0 | 48 | 1.4419 | | 0.3793 | 25.0 | 50 | 1.4198 | | 0.356 | 26.0 | 52 | 1.1776 | | 0.2189 | 27.0 | 54 | 0.7166 | | 0.286 | 28.0 | 56 | 0.7601 | | 0.3681 | 29.0 | 58 | 1.2592 | | 0.5858 | 30.0 | 60 | 0.6520 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
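The card above documents only the fine-tuning run; a minimal fill-mask sketch for trying the checkpoint is shown below (the example sentence is a placeholder, and `[MASK]` is the mask token used by BERT-style models).

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jojoUla/bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR100-labelled-30",
)

# Print the top-5 predictions for the masked position in a placeholder sentence.
for pred in fill_mask("The proposed approach is [MASK] for this task."):
    print(pred["token_str"], round(pred["score"], 4))
```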
03135dae806a9dd569dd337088c1dfeb
ahmeddbahaa/AraBART-finetuned-ar
ahmeddbahaa
mbart
16
3
transformers
0
summarization
true
false
false
apache-2.0
null
['xlsum']
null
0
0
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
2,392
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AraBART-finetuned-ar This model is a fine-tuned version of [moussaKam/AraBART](https://huggingface.co/moussaKam/AraBART) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 3.7449 - Rouge-1: 31.08 - Rouge-2: 14.68 - Rouge-l: 27.36 - Gen Len: 19.64 - Bertscore: 73.86 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 10 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.4318 | 1.0 | 2345 | 3.7996 | 28.93 | 13.2 | 25.56 | 19.51 | 73.17 | | 4.0338 | 2.0 | 4690 | 3.7483 | 30.29 | 14.24 | 26.73 | 19.5 | 73.59 | | 3.8586 | 3.0 | 7035 | 3.7281 | 30.44 | 14.44 | 26.92 | 19.75 | 73.58 | | 3.7289 | 4.0 | 9380 | 3.7204 | 30.55 | 14.49 | 26.88 | 19.66 | 73.73 | | 3.6245 | 5.0 | 11725 | 3.7199 | 30.73 | 14.63 | 27.11 | 19.69 | 73.68 | | 3.5392 | 6.0 | 14070 | 3.7221 | 30.85 | 14.65 | 27.21 | 19.7 | 73.77 | | 3.4694 | 7.0 | 16415 | 3.7286 | 31.08 | 14.8 | 27.41 | 19.62 | 73.84 | | 3.4126 | 8.0 | 18760 | 3.7384 | 31.06 | 14.77 | 27.41 | 19.64 | 73.82 | | 3.3718 | 9.0 | 21105 | 3.7398 | 31.18 | 14.89 | 27.49 | 19.67 | 73.87 | | 3.3428 | 10.0 | 23450 | 3.7449 | 31.19 | 14.88 | 27.44 | 19.68 | 73.87 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
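No inference example accompanies the card above; here is a minimal sketch using the `summarization` pipeline (the Arabic article text is a placeholder and the generation lengths are arbitrary choices, not values from the card).

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/AraBART-finetuned-ar")

article = "..."  # placeholder: an Arabic news article, as in the xlsum training data
summary = summarizer(article, max_length=64, min_length=10, truncation=True)
print(summary[0]["summary_text"])
```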
47980bd9736fea0ee01c37ae41e7da46
projecte-aina/roberta-base-ca-cased-ner
projecte-aina
roberta
11
102
transformers
1
token-classification
true
false
false
apache-2.0
['ca']
['projecte-aina/ancora-ca-ner']
null
0
0
0
0
0
0
0
['catalan', 'named entity recognition', 'ner', 'CaText', 'Catalan Textual Corpus']
true
true
true
4,787
false
# Catalan BERTa (RoBERTa-base) finetuned for Named Entity Recognition. ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to Use](#how-to-use) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#addional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer)) </details> ## Model description The **roberta-base-ca-cased-ner** is a Named Entity Recognition (NER) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details). ## Intended uses and limitations ## How to use ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training We used the NER dataset in Catalan called [Ancora-ca-ner](https://huggingface.co/datasets/projecte-aina/ancora-ca-ner) for training and evaluation. ## Evaluation We evaluated the _roberta-base-ca-cased-ner_ on the Ancora-ca-ner test set against standard multilingual and monolingual baselines: | Model | Ancora-ca-ner (F1)| | ------------|:-------------| | roberta-base-ca-cased-ner | **88.13** | | mBERT | 86.38 | | XLM-RoBERTa | 87.66 | | WikiBERT-ca | 77.66 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to aina@bsc.es ### Copyright Copyright (c) 2021 Text Mining Unit at Barcelona Supercomputing Center ### Licensing Information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Citation information If you use any of these resources (datasets or models) in your work, please cite our latest paper: ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? 
{A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
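The "How to use" section of the card above is left empty; a minimal sketch with the token-classification pipeline follows (the Catalan sentence is an arbitrary placeholder, and `aggregation_strategy="simple"` merges word pieces into whole entity spans).

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="projecte-aina/roberta-base-ca-cased-ner",
    aggregation_strategy="simple",  # group sub-word pieces into whole entity spans
)

# Placeholder Catalan sentence containing a person and a location.
print(ner("La Maria viu a Barcelona i treballa al CSIC."))
```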
8edd361d4251083eb3eec42e46b2a9cc
Helsinki-NLP/opus-mt-dra-en
Helsinki-NLP
marian
11
70
transformers
0
translation
true
true
false
apache-2.0
['ta', 'kn', 'ml', 'te', 'dra', 'en']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,272
false
### dra-eng * source group: Dravidian languages * target group: English * OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md) * model: transformer * source language(s): kan mal tam tel * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 | | Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 | | Tatoeba-test.multi.eng | 30.0 | 0.493 | | Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 | | Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 | ### System Info: - hf_name: dra-eng - source_languages: dra - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en'] - src_constituents: {'tam', 'kan', 'mal', 'tel'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt - src_alpha3: dra - tgt_alpha3: eng - short_pair: dra-en - chrF2_score: 0.493 - bleu: 30.0 - brevity_penalty: 1.0 - ref_len: 10641.0 - src_name: Dravidian languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: dra - tgt_alpha2: en - prefer_old: False - long_pair: dra-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
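The card above lists benchmark scores but no usage snippet; a minimal sketch with the Marian classes follows (the Tamil input is a placeholder; since only the target side is English, no target-language token needs to be prefixed).

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-dra-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder Tamil sentence; Kannada, Malayalam and Telugu inputs work the same way.
batch = tokenizer(["நான் இன்று வீட்டில் இருக்கிறேன்."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```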
e16f73b6b3580a0299925ca796ea13c8
Xeronate/sggryzza
Xeronate
null
18
8
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
418
false
### sggryzza Dreambooth model trained by Xeronate with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
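The card above only links the training and A1111 notebooks; below is a minimal Diffusers sketch, assuming the repo is stored in the diffusers format that fast-DreamBooth exports and that `sggryzza` is the learned instance token (the prompt and output filename are placeholders).

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the repo contains a diffusers-format Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("Xeronate/sggryzza", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sggryzza" is the DreamBooth concept token; the rest of the prompt is arbitrary.
image = pipe("a portrait photo of sggryzza, studio lighting").images[0]
image.save("sggryzza.png")
```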
27205d294277084fb052922b2a7ce6bb
StonyBrookNLP/teabreac-poet-large-iirc-retrieved
StonyBrookNLP
bart
9
3
transformers
0
text2text-generation
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
true
true
2,638
false
# What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/teabreac-poet-large-iirc-retrieved" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
3bac07f6542187e7502704840f85b8d4
torayeff/distilbert-base-uncased-finetuned-imdb
torayeff
distilbert
9
9
transformers
0
fill-mask
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
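The card above has no usage section; here is a minimal masked-language-modeling sketch using the auto classes directly (the review-style sentence is a placeholder).

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "torayeff/distilbert-base-uncased-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Placeholder movie-review sentence with one masked token.
text = f"This movie was an absolute {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Report the five most likely fillers for the masked position.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```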
a765ca2f8a3a985b97fdf1e796aadae2
42MARU/ko-42maru-wav2vec2-conformer-del-1s
42MARU
wav2vec2-conformer
8
12
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ko']
['KsponSpeech']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition']
false
true
true
2,992
false
# ko-42maru-wav2vec2-conformer-del-1s ## Table of Contents - [ko-42maru-wav2vec2-conformer-del-1s](#ko-42maru-wav2vec2-conformer-del-1s) - [Table of Contents](#table-of-contents) - [Model Details](#model-details) - [Evaluation](#evaluation) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details - **Model Description:** This model was pre-trained from scratch on the wav2vec2-conformer base architecture. <br /> It was then fine-tuned on KsponSpeech using Wav2Vec2ConformerForCTC. <br /> - Dataset use [AIHub KsponSpeech](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123) <br /> The datasets were built by preprocessing that data ourselves. <br /> "del-1s" means that utterances shorter than one second were filtered out. <br /> The model was trained on data transcribed with **42maru's own custom transcription convention** (numbers and English words follow Korean orthography). <br /> - **Developed by:** TADev (@lIlBrother, @ddobokki, @jp42maru) - **Language(s):** Korean - **License:** apache-2.0 - **Parent Model:** See the [wav2vec2-conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer) for more information about the pre-trained base model. (This model was pre-trained from scratch on the wav2vec2-conformer base architecture.) ## Evaluation Just using `load_metric("wer")` in the huggingface `datasets` library <br /> ## How to Get Started With the Model For an example of Wav2Vec2ProcessorWithLM combined with KenLM, see the [42maru-kenlm example](https://huggingface.co/42MARU/ko-ctc-kenlm-42maru-only-wiki) ```python import unicodedata import librosa from pyctcdecode import build_ctcdecoder from transformers import ( AutoConfig, AutoFeatureExtractor, AutoModelForCTC, AutoTokenizer, Wav2Vec2ProcessorWithLM, ) from transformers.pipelines import AutomaticSpeechRecognitionPipeline audio_path = "" # Load the model, tokenizer, and the modules needed for inference. model = AutoModelForCTC.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s") feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s") tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-42maru-wav2vec2-conformer-del-1s") beamsearch_decoder = build_ctcdecoder( labels=list(tokenizer.encoder.keys()), kenlm_model_path=None, ) processor = Wav2Vec2ProcessorWithLM( feature_extractor=feature_extractor, tokenizer=tokenizer, decoder=beamsearch_decoder ) # Plug the modules into the pipeline used for the actual prediction. asr_pipeline = AutomaticSpeechRecognitionPipeline( model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, decoder=processor.decoder, device=-1, ) # Load the audio file, set the beam-search parameters, and run inference. raw_data, _ = librosa.load(audio_path, sr=16000) kwargs = {"decoder_kwargs": {"beam_width": 100}} pred = asr_pipeline(inputs=raw_data, **kwargs)["text"] # The model outputs decomposed Hangul jamo Unicode text, so normalize it back to a regular string. result = unicodedata.normalize("NFC", pred) print(result) # 안녕하세요 하나둘셋 테스트입니다. ``` *Beam-100 Result (WER)*: | "clean" | "other" | | ------- | ------- | | 21.52 | 25.72 |
5c44d07235a10258674c8ceca251ed4b
w11wo/indonesian-roberta-base-indonli
w11wo
roberta
11
26
transformers
0
text-classification
true
true
false
mit
['id']
['indonli']
null
2
0
2
0
0
0
0
['indonesian-roberta-base-indonli']
true
true
true
2,899
false
## Indonesian RoBERTa Base IndoNLI Indonesian RoBERTa Base IndoNLI is a natural language inference (NLI) model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`IndoNLI`](https://github.com/ir-nlp-csui/indonli)'s dataset consisting of Indonesian Wikipedia, news, and Web articles [1]. After training, the model achieved an evaluation/dev accuracy of 77.06%. On the benchmark `test_lay` subset, the model achieved an accuracy of 74.24% and on the benchmark `test_expert` subset, the model achieved an accuracy of 61.66%. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | --------------------------------- | ------- | ------------ | ------------------------------- | | `indonesian-roberta-base-indonli` | 124M | RoBERTa Base | `IndoNLI` | ## Evaluation Results The model was trained for 5 epochs, with a batch size of 16, a learning rate of 2e-5, a weight decay of 0.1, and a warmup ratio of 0.2, with linear annealing to 0. The best model was loaded at the end. | Epoch | Training Loss | Validation Loss | Accuracy | | ----- | ------------- | --------------- | -------- | | 1 | 0.989200 | 0.691663 | 0.731452 | | 2 | 0.673000 | 0.621913 | 0.766045 | | 3 | 0.449900 | 0.662543 | 0.770596 | | 4 | 0.293600 | 0.777059 | 0.768320 | | 5 | 0.194200 | 0.948068 | 0.764224 | ## How to Use ### As NLI Classifier ```python from transformers import pipeline pretrained_name = "w11wo/indonesian-roberta-base-indonli" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih.") ``` ## Disclaimer Do consider the biases which come from both the pre-trained RoBERTa model and the `IndoNLI` dataset that may be carried over into the results of this model. ## References [1] Mahendra, R., Aji, A. F., Louvan, S., Rahman, F., & Vania, C. (2021, November). [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://arxiv.org/abs/2110.14566). _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics. ## Author Indonesian RoBERTa Base IndoNLI was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
73d30c9c5b8f26784a5ef5da79671498
heyal/finetuning-sentiment-model-5000-samples
heyal
distilbert
13
10
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,056
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-5000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4210 - Accuracy: 0.8383 - F1: 0.8348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
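The card above reports accuracy and F1 but no inference example; a minimal text-classification sketch follows (the input sentence is a placeholder, and the label names are whatever `model.config.id2label` defines, typically LABEL_0/LABEL_1 for this kind of IMDB fine-tune).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="heyal/finetuning-sentiment-model-5000-samples",
)

# Placeholder review; check model.config.id2label to map LABEL_0/LABEL_1 to sentiments.
print(classifier("A charming, well-acted little film that I would happily watch again."))
```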
f4cb0a61fdb2c0cd4ec2ce404bd18529
MultiBertGunjanPatrick/multiberts-seed-0-500k
MultiBertGunjanPatrick
bert
7
2
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-0']
false
true
true
6,483
false
# MultiBERTs Seed 0 Checkpoint 500k (uncased) Seed 0 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-500k') model = BertModel.from_pretrained("multiberts-seed-0-500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
283a8b685a68065a2b463b23a94e188e
Vandita/distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1
Vandita
roberta
9
3
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,303
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2176 | 1.0 | 768 | 2.9178 | | 2.9632 | 2.0 | 1536 | 2.8355 | | 2.9201 | 3.0 | 2304 | 2.8462 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
5e390f9397cd5b9556fd4e726c3abac7
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
499
false
# exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
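The card above notes that the model was fine-tuned with the HuggingSound tool but shows no inference code; here is a minimal sketch assuming HuggingSound's `SpeechRecognitionModel.transcribe` interface (the audio paths are placeholders pointing at 16 kHz German recordings).

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint with the same tool that was used for training.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601")

# Placeholder paths to 16 kHz German speech files.
audio_paths = ["sample1.wav", "sample2.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```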
ebec7275c0e9c4acfa2158657eb3468e
HPL/roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample
HPL
roberta
11
2
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,381
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-unlabeled-gab-semeval2023-task10-45000samplesample This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4294 | 1.0 | 1407 | 2.2323 | | 2.3091 | 2.0 | 2814 | 2.1470 | | 2.23 | 3.0 | 4221 | 2.1767 | | 2.1866 | 4.0 | 5628 | 2.1625 | | 2.171 | 5.0 | 7035 | 2.1441 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.10.3
2e2395d102cd3d8206d0d6aa9d6420d5
Helsinki-NLP/opus-mt-de-ee
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-de-ee * source languages: de * target languages: ee * OPUS readme: [de-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-ee/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-ee/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.ee | 24.6 | 0.463 |
5f3e76466ab0bd0022c88ee09b556418
silviacamplani/distilbert-base-uncased-finetuned-dapt-ner-ai_data
silviacamplani
distilbert
18
2
transformers
0
token-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,072
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # silviacamplani/distilbert-base-uncased-finetuned-dapt-ner-ai_data This model is a fine-tuned version of [silviacamplani/distilbert-base-uncased-finetuned-ai_data](https://huggingface.co/silviacamplani/distilbert-base-uncased-finetuned-ai_data) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.3549 - Validation Loss: 2.3081 - Train Precision: 0.0 - Train Recall: 0.0 - Train F1: 0.0 - Train Accuracy: 0.6392 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 3.0905 | 2.8512 | 0.0 | 0.0 | 0.0 | 0.6376 | 0 | | 2.6612 | 2.4783 | 0.0 | 0.0 | 0.0 | 0.6392 | 1 | | 2.3549 | 2.3081 | 0.0 | 0.0 | 0.0 | 0.6392 | 2 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
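The card above was generated from a Keras callback and the repo ships TensorFlow weights only; below is a minimal sketch loading those weights with the TF auto class and wrapping them in a token-classification pipeline (the input sentence is a placeholder, and given the reported 0.0 precision/recall the predictions should be treated with caution).

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "silviacamplani/distilbert-base-uncased-finetuned-dapt-ner-ai_data"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)  # TensorFlow checkpoint

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(ner("Transformer models are widely used in natural language processing research."))
```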
4be39b83f9a31458603e90e4d2bd5275
juliensimon/distilbert-base-uncased-finetuned-cola
juliensimon
distilbert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
0
1
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7737 - Matthews Correlation: 0.5335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5225 | 1.0 | 535 | 0.5170 | 0.4007 | | 0.3509 | 2.0 | 1070 | 0.5220 | 0.4837 | | 0.2405 | 3.0 | 1605 | 0.6164 | 0.5186 | | 0.1777 | 4.0 | 2140 | 0.7737 | 0.5335 | | 0.1295 | 5.0 | 2675 | 0.8374 | 0.5162 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
3d3ebf303e9aac8c406357c66a20bcaa
gokuls/distilbert_sa_GLUE_Experiment_rte_384
gokuls
distilbert
17
4
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,740
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_rte_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6919 - Accuracy: 0.5271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.698 | 1.0 | 10 | 0.6962 | 0.4729 | | 0.6969 | 2.0 | 20 | 0.6966 | 0.4729 | | 0.6955 | 3.0 | 30 | 0.6919 | 0.5271 | | 0.6932 | 4.0 | 40 | 0.6990 | 0.4729 | | 0.6941 | 5.0 | 50 | 0.6931 | 0.5054 | | 0.6892 | 6.0 | 60 | 0.6929 | 0.5199 | | 0.6843 | 7.0 | 70 | 0.6931 | 0.5560 | | 0.6399 | 8.0 | 80 | 0.7372 | 0.4982 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
bef328018fd0e090322ca5ec5f4cdceb
KoichiYasuoka/deberta-large-japanese-unidic-luw-upos
KoichiYasuoka
deberta-v2
8
10
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['ja']
['universal_dependencies']
null
0
0
0
0
0
0
0
['japanese', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
1,411
false
# deberta-large-japanese-unidic-luw-upos ## Model Description This is a DeBERTa(V2) model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-large-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py import torch from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos") s="国境の長いトンネルを抜けると雪国であった。" t=tokenizer.tokenize(s) p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]] print(list(zip(t,p))) ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos") print(nlp("国境の長いトンネルを抜けると雪国であった。")) ``` [fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
677fab8b7a7afed7da0a721dc6befbc9
pyronear/mobilenet_v3_large
pyronear
null
5
2
transformers
0
image-classification
true
false
false
apache-2.0
null
['pyronear/openfire']
null
0
0
0
0
0
0
0
['image-classification', 'pytorch', 'onnx']
false
true
true
3,106
false
# MobileNet V3 - Large model Pretrained on a dataset for wildfire binary classification (soon to be shared). The MobileNet V3 architecture was introduced in [this paper](https://arxiv.org/pdf/1905.02244.pdf). ## Model description The core idea of the author is to simplify the final stage, while using SiLU as activations and making Squeeze-and-Excite blocks larger. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ``` ## Usage instructions ```python import torch from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/mobilenet_v3_large").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-1905-02244, author = {Andrew Howard and Mark Sandler and Grace Chu and Liang{-}Chieh Chen and Bo Chen and Mingxing Tan and Weijun Wang and Yukun Zhu and Ruoming Pang and Vijay Vasudevan and Quoc V. Le and Hartwig Adam}, title = {Searching for MobileNetV3}, journal = {CoRR}, volume = {abs/1905.02244}, year = {2019}, url = {http://arxiv.org/abs/1905.02244}, eprinttype = {arXiv}, eprint = {1905.02244}, timestamp = {Thu, 27 May 2021 16:20:51 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{chintala_torchvision_2017, author = {Chintala, Soumith}, month = {4}, title = {{Torchvision}}, url = {https://github.com/pytorch/vision}, year = {2017} } ```
34f5923c6d2388bbb2a802a78d5f8db1
Helsinki-NLP/opus-mt-es-ro
Helsinki-NLP
marian
10
20
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
false
### opus-mt-es-ro * source languages: es * target languages: ro * OPUS readme: [es-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ro/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ro/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.es.ro | 45.7 | 0.666 |
637bd24589b5fa9c6c5d648cbfef0cf2
Guizmus/DarkSoulsDiffusion
Guizmus
null
24
26
diffusers
22
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
5
3
2
0
1
0
1
['stable-diffusion', 'text-to-image', 'image-to-image']
false
true
true
1,394
false
# DarkSouls Diffusion <p> <img src="https://huggingface.co/Guizmus/DarkSoulsDiffusion/resolve/main/showcase.jpg"/><br/> This is a Dreamboothed Stable Diffusion model trained on the DarkSouls series Style.<br/> The total dataset is made of 100 pictures, and the training has been done on runawayml 1.5 and the new VAE, with 2500 steps (LR1e-6) then 24k more steps (LR1e-7).<br/> The token "DarkSouls Style" will bring in the new concept.<br/> The recommended sampling is k_Euler_a or DPM++ 2M Karras on 20 steps, CFGS 7 . </p> [CKPT download link](https://huggingface.co/Guizmus/DarkSoulsDiffusion/resolve/main/DarkSoulsStyle_v1-3.ckpt) ## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "Guizmus/DarkSoulsDiffusion" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a soldier engulfed in fire, DarkSouls Style" image = pipe(prompt).images[0] image.save("./DarkSouls Style.png") ```
4f405ce2eeac796084998552d8653f6e
pinot/wav2vec2-large-xls-r-300m-ja-colab-3
pinot
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice_10_0']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,940
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-ja-colab-3 This model is a fine-tuned version of [pinot/wav2vec2-large-xls-r-300m-ja-colab-2](https://huggingface.co/pinot/wav2vec2-large-xls-r-300m-ja-colab-2) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.2696 - Wer: 0.2299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 637 | 1.4666 | 0.2862 | | No log | 2.0 | 1274 | 1.4405 | 0.2866 | | No log | 3.0 | 1911 | 1.4162 | 0.2762 | | No log | 4.0 | 2548 | 1.4128 | 0.2709 | | 0.2814 | 5.0 | 3185 | 1.3927 | 0.2613 | | 0.2814 | 6.0 | 3822 | 1.3629 | 0.2536 | | 0.2814 | 7.0 | 4459 | 1.3349 | 0.2429 | | 0.2814 | 8.0 | 5096 | 1.3116 | 0.2356 | | 0.1624 | 9.0 | 5733 | 1.2774 | 0.2307 | | 0.1624 | 10.0 | 6370 | 1.2696 | 0.2299 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.10.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
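A minimal inference sketch (not part of the auto-generated card; it assumes a 16 kHz mono Japanese speech clip and the standard 🤗 Transformers ASR pipeline):

```python
from transformers import pipeline

# "sample_ja.wav" is a hypothetical local audio file used only for illustration
asr = pipeline(
    "automatic-speech-recognition",
    model="pinot/wav2vec2-large-xls-r-300m-ja-colab-3",
)
print(asr("sample_ja.wav")["text"])
```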
17be88e92e4dadf8573ffe19b8fa24d7
kasrahabib/all-MiniLM-L6-v2-finetunned-90percentile-384embd-kmeans-propogated
kasrahabib
bert
10
5
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,361
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # kasrahabib/all-MiniLM-L6-v2-finetunned-90percentile-384embd-kmeans-propogated This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0070 - Validation Loss: 0.1409 - Train Precision: 0.9618 - Train Recall: 0.9758 - Train F1: 0.9688 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:-----:| | 0.2455 | 0.1360 | 0.9231 | 0.9879 | 0.9544 | 0 | | 0.0735 | 0.1060 | 0.9640 | 0.9734 | 0.9687 | 1 | | 0.0450 | 0.1178 | 0.9485 | 0.9806 | 0.9643 | 2 | | 0.0286 | 0.1038 | 0.9599 | 0.9855 | 0.9725 | 3 | | 0.0194 | 0.1229 | 0.9684 | 0.9661 | 0.9673 | 4 | | 0.0183 | 0.1307 | 0.9617 | 0.9734 | 0.9675 | 5 | | 0.0113 | 0.1295 | 0.9618 | 0.9758 | 0.9688 | 6 | | 0.0101 | 0.1397 | 0.9508 | 0.9831 | 0.9667 | 7 | | 0.0093 | 0.1417 | 0.9618 | 0.9758 | 0.9688 | 8 | | 0.0070 | 0.1409 | 0.9618 | 0.9758 | 0.9688 | 9 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
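Since only TensorFlow weights are listed for this checkpoint, a hedged loading sketch could look like the following (the example sentence is a placeholder, and the meaning of the output classes is defined by the unknown training labels):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "kasrahabib/all-MiniLM-L6-v2-finetunned-90percentile-384embd-kmeans-propogated"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The system shall respond within two seconds.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)  # per-class probabilities
print(probs.numpy())
```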
8b95535994432bb934788f001270aada
anjankumar/Anjan-finetuned-iitbombay-en-to-hi
anjankumar
marian
14
2
transformers
1
translation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation', 'generated_from_trainer']
true
true
true
1,080
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Anjan-finetuned-iitbombay-en-to-hi This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.7924 - Bleu: 6.3001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
80188e438beac487dd2ba0e76450bc83
tftransformers/mt5-small
tftransformers
null
6
2
null
0
null
false
false
false
apache-2.0
['multilingual']
['mc4']
null
0
0
0
0
0
0
0
[]
false
true
true
2,422
false
[Google's mT5](https://github.com/google-research/multilingual-t5) mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* ## Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. ## Usage ``` from tf_transformers.models import MT5Model # Any MT5 model (mt5-small, mt5-base etc) model_name = 'mt5-small' model = MT5Model.from_pretrained(model_name) ```
82cd93a3a9e2702d65ba183be466bea3
Helsinki-NLP/opus-mt-lg-sv
Helsinki-NLP
marian
10
13
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-lg-sv * source languages: lg * target languages: sv * OPUS readme: [lg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.sv | 24.5 | 0.423 |
28b1f6cf957e3b718b8413a9b457d0e7
ueb1/distilbert-base-uncased-finetuned-ner
ueb1
distilbert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,555
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0608 - Precision: 0.9290 - Recall: 0.9371 - F1: 0.9331 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2276 | 1.0 | 878 | 0.0685 | 0.9204 | 0.9246 | 0.9225 | 0.9814 | | 0.0498 | 2.0 | 1756 | 0.0622 | 0.9238 | 0.9358 | 0.9298 | 0.9833 | | 0.0298 | 3.0 | 2634 | 0.0608 | 0.9290 | 0.9371 | 0.9331 | 0.9840 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
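As an illustrative (non-official) usage sketch, the checkpoint can be queried through the token-classification pipeline:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ueb1/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is a company based in New York City."))
```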
e970510b5e8ae1cb47a4169e5b3030b3
wietsedv/xlm-roberta-base-ft-udpos28-lzh
wietsedv
xlm-roberta
8
13
transformers
0
token-classification
true
false
false
apache-2.0
['lzh']
['universal_dependencies']
null
0
0
0
0
0
0
0
['part-of-speech', 'token-classification']
true
true
true
579
false
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Classical Chinese This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh") ```
34cf9253d51281c3650a456a353ad8b0
dipteshkanojia/hing-roberta-NCM-run-1
dipteshkanojia
xlm-roberta
9
4
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,124
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hing-roberta-NCM-run-1 This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2912 - Accuracy: 0.6667 - Precision: 0.6513 - Recall: 0.6494 - F1: 0.6502 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8968 | 1.0 | 927 | 0.8552 | 0.6257 | 0.6508 | 0.5961 | 0.5969 | | 0.7022 | 2.0 | 1854 | 1.1142 | 0.3937 | 0.3270 | 0.3273 | 0.2051 | | 0.5569 | 3.0 | 2781 | 0.9130 | 0.6591 | 0.6566 | 0.6612 | 0.6509 | | 0.363 | 4.0 | 3708 | 1.6630 | 0.6526 | 0.6634 | 0.6414 | 0.6436 | | 0.2801 | 5.0 | 4635 | 2.0458 | 0.6451 | 0.6339 | 0.6345 | 0.6330 | | 0.1925 | 6.0 | 5562 | 2.3378 | 0.6570 | 0.6439 | 0.6254 | 0.6277 | | 0.1297 | 7.0 | 6489 | 2.5205 | 0.6839 | 0.6719 | 0.6651 | 0.6675 | | 0.114 | 8.0 | 7416 | 2.8373 | 0.6505 | 0.6379 | 0.6249 | 0.6280 | | 0.0994 | 9.0 | 8343 | 2.5358 | 0.6634 | 0.6539 | 0.6446 | 0.6474 | | 0.0977 | 10.0 | 9270 | 2.8244 | 0.6537 | 0.6489 | 0.6210 | 0.6238 | | 0.0623 | 11.0 | 10197 | 2.7593 | 0.6764 | 0.6602 | 0.6487 | 0.6510 | | 0.0537 | 12.0 | 11124 | 2.9823 | 0.6677 | 0.6679 | 0.6450 | 0.6488 | | 0.0432 | 13.0 | 12051 | 3.0792 | 0.6537 | 0.6465 | 0.6352 | 0.6378 | | 0.0406 | 14.0 | 12978 | 3.0707 | 0.6688 | 0.6592 | 0.6509 | 0.6534 | | 0.0296 | 15.0 | 13905 | 3.3289 | 0.6667 | 0.6596 | 0.6452 | 0.6486 | | 0.0288 | 16.0 | 14832 | 3.2147 | 0.6645 | 0.6592 | 0.6512 | 0.6528 | | 0.024 | 17.0 | 15759 | 3.3284 | 0.6645 | 0.6470 | 0.6405 | 0.6425 | | 0.0201 | 18.0 | 16686 | 3.2428 | 0.6688 | 0.6515 | 0.6515 | 0.6515 | | 0.0176 | 19.0 | 17613 | 3.2680 | 0.6710 | 0.6574 | 0.6536 | 0.6547 | | 0.0168 | 20.0 | 18540 | 3.2912 | 0.6667 | 0.6513 | 0.6494 | 0.6502 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.1+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
842177b9364682d6b313f941480891b8
google/multiberts-seed_3-step_800k
google
bert
8
12
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_800k']
false
true
true
3,521
false
# MultiBERTs, Intermediate Checkpoint - Seed 3, Step 800k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #3, captured at step 800k (max: 2000k, i.e., 2M steps). ## Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_800k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_800k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_800k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
ecf44ae9c688b20cddcbd58208e82062
henryscheible/eval_v2_rte
henryscheible
bert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
886
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_v2_rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
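RTE is a sentence-pair (entailment) task, so a hedged inference sketch would encode the premise and hypothesis together; the label order below follows the usual GLUE RTE convention (0 = entailment, 1 = not_entailment) and has not been verified against this checkpoint:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "henryscheible/eval_v2_rte"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P(entailment), P(not_entailment)] under the assumed label order
```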
0eec04edd03e4739474dd856ed108442
stevenwh/indobert-base-p2-finetuned-mer-10k
stevenwh
bert
10
3
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,663
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # indobert-base-p2-finetuned-mer-10k This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3370 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.9568 | 1.0 | 274 | 3.6237 | | 3.4802 | 2.0 | 548 | 3.0803 | | 3.0626 | 3.0 | 822 | 2.8108 | | 2.8591 | 4.0 | 1096 | 2.6345 | | 2.7182 | 5.0 | 1370 | 2.5492 | | 2.6223 | 6.0 | 1644 | 2.4692 | | 2.5426 | 7.0 | 1918 | 2.4122 | | 2.5019 | 8.0 | 2192 | 2.3611 | | 2.4649 | 9.0 | 2466 | 2.3447 | | 2.4631 | 10.0 | 2740 | 2.3392 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Tokenizers 0.13.2
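A hedged fill-mask sketch (the Indonesian example sentence is only illustrative, and it assumes the tokenizer uses the standard BERT `[MASK]` token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="stevenwh/indobert-base-p2-finetuned-mer-10k")
for pred in fill_mask("Saya suka makan [MASK] goreng."):
    print(pred["token_str"], round(pred["score"], 3))  # candidate token and its probability
```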
372f39a03810d5144682384f7eb164a6
GW12/wav2vec2-libri-train360_2-colab
GW12
wav2vec2
15
11
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
7,583
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-libri-train360_2-colab This model is a fine-tuned version of [GW12/wav2vec2-libri-train360-colab](https://huggingface.co/GW12/wav2vec2-libri-train360-colab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1024 - Wer: 0.0959 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.219 | 0.04 | 500 | 0.1976 | 0.1215 | | 0.0762 | 0.08 | 1000 | 0.2818 | 0.1324 | | 0.0824 | 0.12 | 1500 | 0.4541 | 0.1602 | | 0.0807 | 0.15 | 2000 | 0.1556 | 0.1162 | | 0.0799 | 0.19 | 2500 | 0.1618 | 0.1164 | | 0.0826 | 0.23 | 3000 | 0.3510 | 0.1379 | | 0.0809 | 0.27 | 3500 | 0.1486 | 0.1182 | | 0.0854 | 0.31 | 4000 | 0.1267 | 0.1177 | | 0.0817 | 0.35 | 4500 | 0.1581 | 0.1218 | | 0.0835 | 0.38 | 5000 | 0.1670 | 0.1251 | | 0.0841 | 0.42 | 5500 | 0.1576 | 0.1179 | | 0.0798 | 0.46 | 6000 | 0.2201 | 0.1300 | | 0.083 | 0.5 | 6500 | 0.1165 | 0.1179 | | 0.0878 | 0.54 | 7000 | 0.2640 | 0.1430 | | 0.0811 | 0.58 | 7500 | 0.1585 | 0.1288 | | 0.083 | 0.62 | 8000 | 0.3127 | 0.1370 | | 0.083 | 0.65 | 8500 | 0.4790 | 0.1449 | | 0.0775 | 0.69 | 9000 | 0.1651 | 0.1163 | | 0.0787 | 0.73 | 9500 | 1.6426 | 0.2083 | | 0.0781 | 0.77 | 10000 | 0.2307 | 0.1324 | | 0.0827 | 0.81 | 10500 | 0.1765 | 0.1318 | | 0.0816 | 0.85 | 11000 | 0.1679 | 0.1201 | | 0.0797 | 0.88 | 11500 | 0.2506 | 0.1508 | | 0.0813 | 0.92 | 12000 | 0.1893 | 0.1239 | | 0.0758 | 0.96 | 12500 | 0.1266 | 0.1147 | | 0.091 | 1.0 | 13000 | 0.1606 | 0.1180 | | 0.0677 | 1.04 | 13500 | 0.1107 | 0.1118 | | 0.0733 | 1.08 | 14000 | 0.1734 | 0.1565 | | 0.072 | 1.12 | 14500 | 0.1141 | 0.1126 | | 0.0731 | 1.15 | 15000 | 0.1125 | 0.1112 | | 0.0793 | 1.19 | 15500 | 0.1818 | 0.1146 | | 0.07 | 1.23 | 16000 | 0.2678 | 0.1265 | | 0.0658 | 1.27 | 16500 | 0.2909 | 0.1203 | | 0.0678 | 1.31 | 17000 | 0.3241 | 0.1280 | | 0.0681 | 1.35 | 17500 | 0.3243 | 0.1497 | | 0.0666 | 1.38 | 18000 | 0.2056 | 0.1150 | | 0.0667 | 1.42 | 18500 | 0.4678 | 0.1252 | | 0.0656 | 1.46 | 19000 | 0.1603 | 0.1138 | | 0.0662 | 1.5 | 19500 | 0.1554 | 0.1115 | | 0.0669 | 1.54 | 20000 | 0.1215 | 0.1101 | | 0.0681 | 1.58 | 20500 | 0.1118 | 0.1083 | | 0.0708 | 1.62 | 21000 | 0.1743 | 0.1146 | | 0.0673 | 1.65 | 21500 | 0.1509 | 0.1109 | | 0.0667 | 1.69 | 22000 | 0.3411 | 0.1495 | | 0.065 | 1.73 | 22500 | 0.1045 | 0.1067 | | 0.0644 | 1.77 | 23000 | 0.0999 | 0.1075 | | 0.0643 | 1.81 | 23500 | 0.1019 | 0.1073 | | 0.0675 | 1.85 | 24000 | 0.1196 | 0.1073 | | 0.0618 | 1.88 | 24500 | 0.1092 | 0.1086 | | 0.0626 | 1.92 | 25000 | 0.1256 | 0.1070 | | 0.0635 | 1.96 | 25500 | 0.1183 | 0.1069 | | 0.0621 | 2.0 | 26000 | 0.1180 | 0.1091 | | 0.0548 | 2.04 | 26500 | 0.1199 | 0.1048 | | 0.0548 | 2.08 | 27000 | 0.1215 | 0.1057 | | 0.0531 | 2.12 | 27500 | 0.1086 | 0.1036 | | 0.0548 | 2.15 | 28000 | 0.1103 | 0.1043 | | 0.054 | 2.19 | 28500 | 0.1078 | 0.1048 | | 0.0521 | 2.23 | 29000 | 0.1094 | 0.1039 | | 0.0534 | 2.27 | 29500 | 0.1058 | 0.1037 | | 0.0539 | 2.31 | 30000 | 0.1035 | 0.1026 | | 0.0516 | 2.35 | 30500 | 0.1009 | 0.1027 | | 0.0525 | 2.38 | 31000 | 0.1292 | 0.1056 | | 0.0501 | 2.42 | 31500 | 0.1124 | 0.1033 | | 0.052 | 2.46 | 32000 | 0.1020 | 0.1028 | | 0.0519 | 2.5 | 32500 | 0.1131 | 0.1038 | | 0.0498 | 2.54 | 33000 | 0.1036 | 0.1031 | | 0.0525 | 2.58 | 33500 | 0.0994 | 0.1005 | | 0.0506 | 2.61 | 34000 | 0.1093 | 0.1015 | | 0.0484 | 2.65 | 34500 | 0.1048 | 0.1005 | | 0.0493 | 2.69 | 35000 | 0.1192 | 0.1028 | | 0.048 | 2.73 | 35500 | 0.1208 | 0.1020 | | 0.0473 | 2.77 | 36000 | 0.1410 | 0.1042 | | 0.0472 | 2.81 | 36500 | 0.1382 | 0.1052 | | 0.0467 | 2.85 | 37000 | 0.1118 | 0.1012 | | 0.0473 | 2.88 | 37500 | 0.1032 | 0.1002 | | 0.0466 | 2.92 | 38000 | 0.1041 | 0.1004 | | 0.0455 | 2.96 | 38500 | 0.1056 | 0.1004 | | 0.0483 | 3.0 | 39000 | 0.1091 | 0.0995 | | 0.0408 | 3.04 | 39500 | 0.1170 | 0.1012 | | 0.0395 | 3.08 | 40000 | 0.1106 | 0.0995 | | 0.0407 | 3.11 | 40500 | 0.1075 | 0.0998 | | 0.0403 | 3.15 | 41000 | 0.1129 | 0.1000 | | 0.0397 | 3.19 | 41500 | 0.1062 | 0.0993 | | 0.0389 | 3.23 | 42000 | 0.1072 | 0.0990 | | 0.0385 | 3.27 | 42500 | 0.1032 | 0.0985 | | 0.0389 | 3.31 | 43000 | 0.0989 | 0.0973 | | 0.0404 | 3.35 | 43500 | 0.1031 | 0.0973 | | 0.0387 | 3.38 | 44000 | 0.0998 | 0.0974 | | 0.0391 | 3.42 | 44500 | 0.1000 | 0.0969 | | 0.0387 | 3.46 | 45000 | 0.0982 | 0.0968 | | 0.0407 | 3.5 | 45500 | 0.1057 | 0.0979 | | 0.038 | 3.54 | 46000 | 0.1026 | 0.0974 | | 0.0399 | 3.58 | 46500 | 0.1020 | 0.0970 | | 0.0387 | 3.61 | 47000 | 0.1022 | 0.0968 | | 0.0379 | 3.65 | 47500 | 0.1016 | 0.0961 | | 0.0369 | 3.69 | 48000 | 0.1012 | 0.0957 | | 0.0372 | 3.73 | 48500 | 0.0993 | 0.0956 | | 0.0361 | 3.77 | 49000 | 0.1013 | 0.0951 | | 0.0366 | 3.81 | 49500 | 0.1020 | 0.0956 | | 0.0377 | 3.85 | 50000 | 0.1014 | 0.0961 | | 0.0363 | 3.88 | 50500 | 0.1019 | 0.0962 | | 0.0368 | 3.92 | 51000 | 0.1033 | 0.0963 | | 0.0381 | 3.96 | 51500 | 0.1026 | 0.0960 | | 0.0364 | 4.0 | 52000 | 0.1024 | 0.0959 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.11.0
421bb82108e112f1726fcfd1d4d85359
yuhuizhang/finetuned_gpt2_sst2_negation0.0005_pretrainedTrue
yuhuizhang
gpt2
11
0
transformers
0
text-generation
true
false
false
mit
null
['sst2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,248
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_gpt2_sst2_negation0.0005_pretrainedTrue This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 3.5276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1086 | 1.0 | 1059 | 3.5051 | | 2.9257 | 2.0 | 2118 | 3.5195 | | 2.833 | 3.0 | 3177 | 3.5276 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.13.1+cu117 - Datasets 2.5.2 - Tokenizers 0.12.1
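A minimal generation sketch (the prompt is only an example and the sampling settings are arbitrary):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="yuhuizhang/finetuned_gpt2_sst2_negation0.0005_pretrainedTrue",
)
out = generator("The movie was", max_new_tokens=30, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```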
9c498cb16b2896a10f56d1e930636a8c
timhbach/Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract
timhbach
distilbert
17
11
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,222
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0231 - eval_precision: 0.7448 - eval_recall: 0.75 - eval_f1: 0.7474 - eval_accuracy: 0.9942 - eval_runtime: 61.7618 - eval_samples_per_second: 27.201 - eval_steps_per_second: 3.4 - epoch: 3.0 - step: 5670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
c7ed2c0f2213ce047f77c66b547a2ae2
s3nh/DialoGPT-large-Morty
s3nh
gpt2
9
4
transformers
0
conversational
true
false
false
openrail
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,587
false
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> <img src = 'https://images.unsplash.com/photo-1592564630984-7410f94db184?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1146&q=80'> ### Description DialoGPT is a dialogue-oriented variant of the GPT (Generative Pretrained Transformer) language model, released by Microsoft Research. It is a deep neural network-based language model trained on massive amounts of text data to generate human-like text. DialoGPT uses the transformer architecture, which is a type of neural network designed for processing sequential data such as language. During the training phase, the model is exposed to a large corpus of text and learns to predict the next word in a sequence given the previous words. In the context of dialogue, DialoGPT is trained to predict the response in a conversation, given the context of the conversation. This context can include one or more turns of the conversation, along with any additional information such as the topic of the conversation or the speaker's personality. At inference time, the model takes the current context of the conversation as input and generates a response. The response is generated by sampling from the model's predicted distribution over the vocabulary. Overall, DialoGPT provides a flexible and powerful solution for generating human-like text in a conversational context, allowing for the creation of a wide range of applications such as chatbots, conversational agents, and virtual assistants. ## Parameters The model was trained for 40 epochs, using the parameters below. ``` self.per_gpu_train_batch_size: int = 2 self.per_gpu_eval_batch_size: int = 2 self.gradient_accumulation_steps: int = 1 self.learning_rate: float = 5e-5 self.weight_decay: float = 0.0 self.adam_epsilon: float = 1e-8 self.max_grad_norm: int = 1.0 self.num_train_epochs: int = 40 self.max_steps: int = -1 self.warmup_steps: int = 0 self.logging_steps: int = 1000 self.save_steps: int = 3500 self.save_total_limit = None self.eval_all_checkpoints: bool = False self.no_cuda: bool = False self.overwrite_output_dir: bool = True self.overwrite_cache: bool = True self.should_continue: bool = False self.seed: int = 42 self.local_rank: int = -1 self.fp16: bool = False self.fp16_opt_level: str = 'O1' ``` ## Usage DialoGPT large version, finetuned on Morty's sequences (Rick and Morty cartoon character). A simple snippet showing how to run inference with this model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('s3nh/DialoGPT-large-Morty') model = AutoModelForCausalLM.from_pretrained('s3nh/DialoGPT-large-Morty') for step in range(4): new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) print("MortyBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
fa3e44a3054d344161d2c185391370c7
DOOGLAK/Tagged_One_500v4_NER_Model_3Epochs_AUGMENTED
DOOGLAK
bert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['tagged_one500v4_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,565
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_One_500v4_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v4_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2804 - Precision: 0.6656 - Recall: 0.6225 - F1: 0.6433 - Accuracy: 0.9187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 183 | 0.2784 | 0.5897 | 0.5076 | 0.5456 | 0.9064 | | No log | 2.0 | 366 | 0.2816 | 0.6535 | 0.5787 | 0.6138 | 0.9112 | | 0.1091 | 3.0 | 549 | 0.2804 | 0.6656 | 0.6225 | 0.6433 | 0.9187 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
75dd49c466bff8708c0a80f553405bc8
TimKond/S-PubMedBert-MedQuAD
TimKond
bert
12
10
sentence-transformers
0
sentence-similarity
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
3,652
false
# S-PubMedBert-MedQuAD This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('TimKond/S-PubMedBert-MedQuAD') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('TimKond/S-PubMedBert-MedQuAD') model = AutoModel.from_pretrained('TimKond/S-PubMedBert-MedQuAD') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=TimKond/S-PubMedBert-MedQuAD) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.DataLoader` of length 82590 with parameters: ``` {'batch_size': 2, 'shuffle': True} ``` **Loss**: `sentence_transformers.losses.SoftmaxLoss` with parameters: ``` {'num_labels': 2, 'sentence_embedding_dimension': '768'} ``` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 1, "evaluation_steps": 0, "evaluator": None, "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "correct_bias": false, "eps": 1e-06, "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 8259, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
e3facf46fcd01f0072e2bcfd97c50123
lewtun/sagemaker-distilbert-emotion
lewtun
distilbert
10
4
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
27
25
2
0
0
0
0
['generated_from_trainer']
true
true
true
1,285
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-distilbert-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2322 - Accuracy: 0.921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9306 | 1.0 | 500 | 0.2322 | 0.921 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
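As an illustrative usage sketch (not part of the auto-generated card), the checkpoint can be queried through the text-classification pipeline:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="lewtun/sagemaker-distilbert-emotion",
)
# Returns the predicted emotion label and its score for the example sentence
print(classifier("I'm so happy the training job finally finished!"))
```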
cb5c7b217c0d3974d5a2243a47120589
SantoshUske/my_awesome_wnut_model
SantoshUske
distilbert
14
11
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
894
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
d68aac9e022ec69f30f66d3b39494efe
sd-concepts-library/bada-club
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,018
false
### bada club on Stable Diffusion This is the `<bada-club>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<bada-club> 0](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/2.jpeg) ![<bada-club> 1](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/3.jpeg) ![<bada-club> 2](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/1.jpeg) ![<bada-club> 3](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/0.jpeg)
19d5e8ec9da8db5c42df95f43c8a796b
bluepen5805/blue_pencil
bluepen5805
null
9
0
null
11
text-to-image
false
false
false
creativeml-openrail-m
['ja']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
3,071
false
# blue_pencil <strong>blue_pencil</strong> is a model made by merging a variety of models in fairly arbitrary proportions. Think of a few well-known models: the ones you thought of are probably included in this one. I cannot say what the distinctive traits of this merged model are. Since the goal was simply to try merging many different models, the quality is not especially high either. All models were converted to `fp16` using [stable-diffusion-webui-model-toolkit](https://github.com/arenatemp/stable-diffusion-webui-model-toolkit). --- ## `blue_pencil-v1b` <small>(`@20230212`)</small> A model in which [Balor-V2](https://huggingface.co/ploughB660/Balor-V2) was merged via hierarchical (block-weight) merging in place of the [Amalgam_Mix](https://civitai.com/models/4758/amalgammix) used in `blue_pencil-v1`. Its tendencies are a little different from v1. ### Recommended settings * VAE * [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) * Negative Embedding * [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative) ### Example outputs ``` girl, tokyo, scenery Negative prompt: EasyNegative Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 205537258 Size: 768x768, Clip skip: 2 Denoising strength: 0.65, Hires upscale: 2, Hires upscaler: Latent (nearest-exact) ``` ![blue_pencil-v1b_1](../../resolve/main/images/blue_pencil-v1b/1.png) --- ## `blue_pencil-v1` <small>(`@20230211`)</small> The following models are included (in no particular order) <details> * [Defmix-v1.1](https://huggingface.co/Defpoint/Defmix-v1.0) * Counterfeit v1.0 * Counterfeit v2.0 * Basil Mix * Anything v4.0 * [PastelRainier](https://huggingface.co/Hemlok/RainierMix) * ACertainThing * Anything-V4.5 * Counterfeit-V2.0 * Evt_V4-preview * basil_mix * pastel-mix * [GingerMixR](https://huggingface.co/Hemlok/GingerMix) * LimeMixV2 * [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model) * [SukiyakiMix-v1.0](https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0) * pastel-mix * AbyssOrangeMix2 * [HD-20](https://www.cognitionai.org/hdhowtogetstarted) * [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test) * [AniReal](https://huggingface.co/Hosioka/AniReal) * [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze) * ultracolor.v4 * Counterfeit-V2.5 * Treebark * [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0) * AbyssOrangeMix2 * LonganMix * and more * [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom) * [Amalgam_Mix](https://civitai.com/models/4758/amalgammix) </details> ### Recommended settings * VAE * [vae-ft-mse-840000-ema-pruned](https://huggingface.co/stabilityai/sd-vae-ft-mse-original) * Negative Embedding * [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative) ### Example outputs #### 1 ``` girl, tokyo, scenery Negative prompt: EasyNegative Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2526423076 Size: 768x768, Clip skip: 2 ``` ![blue_pencil-v1_1-1](../../resolve/main/images/blue_pencil-v1/1-1.png) ##### Hires. fix ``` Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent (nearest-exact) ``` ![blue_pencil-v1_1-2](../../resolve/main/images/blue_pencil-v1/1-2.png) #### 2 ``` girl, early teen, kimono, sakura, particles Negative prompt: EasyNegative Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 4036639388, Size: 512x768, Clip skip: 2 ``` ![blue_pencil-v1_2-1](../../resolve/main/images/blue_pencil-v1/2-1.png) ##### Hires. fix ``` Denoising strength: 0.62, Hires upscale: 2, Hires upscaler: Latent (nearest-exact) ``` ![blue_pencil-v1_2-2](../../resolve/main/images/blue_pencil-v1/2-2.png) #### 3 ``` girl, early teen, t-shirt, pants, from behind, landscape, scenery, apocalyptic Negative prompt: EasyNegative Steps: 40, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 748447692, Size: 768x512, Clip skip: 2 ``` ![blue_pencil-v1_3](../../resolve/main/images/blue_pencil-v1/3.png)
d4675c81f0e7f1ada9494da479d653d9
elopezlopez/xlnet-base-cased_fold_4_binary_v1
elopezlopez
xlnet
12
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,637
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased_fold_4_binary_v1 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5724 - F1: 0.8315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 289 | 0.4043 | 0.8009 | | 0.4373 | 2.0 | 578 | 0.4093 | 0.8260 | | 0.4373 | 3.0 | 867 | 0.5084 | 0.8206 | | 0.2707 | 4.0 | 1156 | 0.5945 | 0.8087 | | 0.2707 | 5.0 | 1445 | 0.6389 | 0.8251 | | 0.1691 | 6.0 | 1734 | 0.8131 | 0.8156 | | 0.1012 | 7.0 | 2023 | 0.9865 | 0.8190 | | 0.1012 | 8.0 | 2312 | 1.1356 | 0.8342 | | 0.0506 | 9.0 | 2601 | 1.0624 | 0.8369 | | 0.0506 | 10.0 | 2890 | 1.2604 | 0.8255 | | 0.0384 | 11.0 | 3179 | 1.2648 | 0.8183 | | 0.0384 | 12.0 | 3468 | 1.3763 | 0.8158 | | 0.0318 | 13.0 | 3757 | 1.4966 | 0.8217 | | 0.0221 | 14.0 | 4046 | 1.3889 | 0.8250 | | 0.0221 | 15.0 | 4335 | 1.4014 | 0.8284 | | 0.0145 | 16.0 | 4624 | 1.5321 | 0.8289 | | 0.0145 | 17.0 | 4913 | 1.4914 | 0.8233 | | 0.0172 | 18.0 | 5202 | 1.3946 | 0.8314 | | 0.0172 | 19.0 | 5491 | 1.5032 | 0.8269 | | 0.0135 | 20.0 | 5780 | 1.5111 | 0.8328 | | 0.0087 | 21.0 | 6069 | 1.4899 | 0.8318 | | 0.0087 | 22.0 | 6358 | 1.5562 | 0.8311 | | 0.0061 | 23.0 | 6647 | 1.5384 | 0.8327 | | 0.0061 | 24.0 | 6936 | 1.5798 | 0.8304 | | 0.0052 | 25.0 | 7225 | 1.5724 | 0.8315 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
78f66df125b3c45592efd74efec52214
flamesbob/BrokenM_style
flamesbob
null
3
0
null
0
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
904
false
`Broken mirror, shattered mirror, brokenM_style`: this style adds a shattered-mirror / broken-reflection look to prompts. License: this embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL-M License specifies: you can't use the embedding to deliberately produce or share illegal or harmful outputs or content; the author claims no rights on the outputs you generate, you are free to use them and are accountable for their use, which must not go against the provisions set in the license; you may re-distribute the weights and use the embedding commercially and/or as a service, but if you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully). Please read the full license here
1ae7d1f4840f9d97a13ac76613150966
aprilzoo/distilbert-base-uncased-finetuned-emotion
aprilzoo
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,343
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2202 - Accuracy: 0.923 - F1: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8244 | 1.0 | 250 | 0.3104 | 0.9025 | 0.8997 | | 0.2478 | 2.0 | 500 | 0.2202 | 0.923 | 0.9232 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
859921c396305501d880be0fde10f183
Norod78/hebrew-gpt_neo-xl-poetry
Norod78
gpt_neo
10
8
transformers
1
text-generation
true
false
true
mit
['he']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,441
false
# hebrew-gpt_neo-xl-poetry A Hebrew poetry text-generation model, fine-tuned on [hebrew-gpt_neo-xl](https://huggingface.co/Norod78/hebrew-gpt_neo-xl). ## Datasets An assortment of various Hebrew books, magazines and poetry corpora ## Training Config Similar to [this one](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR> ## Usage ### Google Colab Notebook Available [here ](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.3 transformers==4.8.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids is not None: max_len += len(encoded_prompt[0]) if max_len > 2048: max_len = 2048 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
d3b2b2976595af5c08c903bb8e7c69da
ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
ali2066
distilbert
13
6
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,622
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0914 - Accuracy: 0.9746 - F1: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 | | No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 | | No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 | | No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 | | 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
98b039f32af7726b57e16900d18bd368
Alireza1044/albert-base-v2-wnli
Alireza1044
albert
14
1
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
false
true
true
994
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wnli This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6898 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
f26cd2a444faa88f1626e835421993c2
gabrielsgaspar/test-trainer
gabrielsgaspar
bert
18
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,374
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2394 - Accuracy: 0.9395 - F1: 0.9396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2518 | 1.0 | 2000 | 0.1971 | 0.931 | 0.9305 | | 0.1678 | 2.0 | 4000 | 0.1782 | 0.9405 | 0.9406 | | 0.1048 | 3.0 | 6000 | 0.2394 | 0.9395 | 0.9396 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
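### Example usage (sketch)

The card reports metrics on the emotion dataset but no inference code. A minimal sketch, assuming the usual `AutoModelForSequenceClassification` head and the label mapping stored in the checkpoint's config; the input sentence is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gabrielsgaspar/test-trainer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score one sentence and map the top class id back to its label name.
inputs = tokenizer("I can't believe how happy this makes me!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

pred_id = probs.argmax(dim=-1).item()
print(model.config.id2label[pred_id], round(probs[0, pred_id].item(), 4))
```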
f1f2ba21795546bda7d0ca803a7dab07
xiaoxiao012/ddpm-butterflies-128
xiaoxiao012
null
13
0
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,233
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/xiaoxiao012/ddpm-butterflies-128/tensorboard?#scalars)
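The "How to use" section above is left as a TODO. A minimal sketch, assuming the repository stores a standard unconditional `DDPMPipeline` (UNet + noise scheduler) as produced by the 🤗 Diffusers unconditional-training example:

```python
from diffusers import DDPMPipeline

# Assumed layout: a standard unconditional DDPM pipeline saved by the training script.
pipeline = DDPMPipeline.from_pretrained("xiaoxiao012/ddpm-butterflies-128")

image = pipeline().images[0]  # sample one 128x128 butterfly image
image.save("butterfly.png")
```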
12616ad388e13c3b9e567b36c4d4a1dd
timm/maxvit_xlarge_tf_384.in21k_ft_in1k
timm
null
4
196
timm
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k', 'imagenet-21k']
null
0
0
0
0
0
0
0
['image-classification', 'timm']
false
true
true
22,179
false
# Model card for maxvit_xlarge_tf_384.in21k_ft_in1k An official MaxViT image classification model. Pretrained in tensorflow on ImageNet-21k (21843 Google specific instance of ImageNet-22k) and fine-tuned on ImageNet-1k by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman. ### Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations. All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 475.3
  - GMACs: 292.8
  - Activations (M): 668.8
  - Image size: 384 x 384
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('maxvit_xlarge_tf_384.in21k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_xlarge_tf_384.in21k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 128, 192, 192])
    #  torch.Size([1, 128, 96, 96])
    #  torch.Size([1, 256, 48, 48])
    #  torch.Size([1, 512, 24, 24])
    #  torch.Size([1, 1024, 12, 12])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_xlarge_tf_384.in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```

## Model Comparison
### By Top-1

|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|---|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) 
|88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) 
|86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
713491aa4ba0f2bf5fde7dc3ff162276
Graphcore/hubert-base-superb-ks
Graphcore
hubert
19
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['superb']
null
0
0
0
0
0
0
0
['audio-classification', 'generated_from_trainer']
true
true
true
1,217
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hubert-base-superb-ks This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0848 - Accuracy: 0.9822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 0 - distributed_type: IPU - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
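### Example usage (sketch)

The card documents the IPU training recipe but no inference example. A minimal sketch using the generic audio-classification pipeline; `sample.wav` is a placeholder and should be 16 kHz mono audio:

```python
from transformers import pipeline

# Keyword spotting; the label set comes from the checkpoint's config.
classifier = pipeline("audio-classification", model="Graphcore/hubert-base-superb-ks")
print(classifier("sample.wav", top_k=3))
```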
d629a7c2521c68106f00971fba968319
Helsinki-NLP/opus-mt-fr-ht
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-fr-ht * source languages: fr * target languages: ht * OPUS readme: [fr-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ht/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fr.ht | 29.2 | 0.461 |
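## Example usage (sketch)

The card lists the training setup and benchmarks but no inference snippet. A minimal sketch, assuming the MarianMT classes that OPUS-MT checkpoints normally use; the input sentence is a placeholder:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ht"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a French sentence into Haitian Creole.
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```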
d62700482b82d321ca061b88d691ee70
Helsinki-NLP/opus-mt-fi-fj
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-fi-fj * source languages: fi * target languages: fj * OPUS readme: [fi-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-fj/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fi.fj | 26.6 | 0.500 |
5128145ee2a054f59bca53e37f506c6f
DOOGLAK/Article_50v3_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
bert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['article50v3_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,550
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Article_50v3_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.7382 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 6 | 0.9648 | 0.1172 | 0.0042 | 0.0081 | 0.7782 | | No log | 2.0 | 12 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.7789 | | No log | 3.0 | 18 | 0.7382 | 0.0 | 0.0 | 0.0 | 0.7789 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
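### Example usage (sketch)

Given the 0.0 F1 reported above, predictions from this checkpoint are unlikely to be useful, but for completeness this is how it would be loaded for NER inference; the sentence is a placeholder:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Article_50v3_NER_Model_3Epochs_UNAUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```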
9e7329113d5b80a1ec8081ee996dcae4
ArafatBHossain/debert_base_fine_tuned_sent140
ArafatBHossain
deberta
8
3
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,335
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # debert_base_fine_tuned_sent140 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9678 - Accuracy: 0.7647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 408 | 0.8139 | 0.7219 | | 0.8198 | 2.0 | 816 | 0.7742 | 0.7460 | | 0.4479 | 3.0 | 1224 | 0.9678 | 0.7647 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
07d6b9a43535bdc8f0b5e6a7d4bb62a9
FredZhang7/paint-journey-v2
FredZhang7
null
30
719
diffusers
13
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
1
0
1
0
0
0
0
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
true
true
6,551
false
## Paint Journey V2 is [V1](https://huggingface.co/FredZhang7/paint-journey-v1) fine-tuned on 768x768 oil paintings by Midjourney V4, Open Journey V2, Disco Diffusion, and artists given permission Begin the prompt with **((oil painting))** to add the oil paint effect. For digital and other painting styles, use similar prompts as you would for Midjourney V4 (with some tweaks), Stable Diffusion v1.5 (add more styles), Open Journey V2, or Disco Diffusion. [![Open with Camenduru's WebUI in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AMLA-UBC/100-Exploring-the-World-of-Modern-Machine-Learning/blob/main/assets/PaintJourneyV2.ipynb) ## Examples *All examples were generated using Camenduru's WebUI (see the Colab file)* ![](./assets/characters.png) *⬆️ 768x1136 portraits, generated using descriptive prompts and without face restoration, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/character_settings.txt)* ![](./assets/nature.png) *⬆️ 1280x768 (mostly) natural landscapes, used shorter prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/nature_settings.txt)* ![](./assets/outerspace.png) *⬆️ 1152x768 outerspace landscapes, used descriptive prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/outerspace_settings.txt)* ![](./assets/lamborghini.png) *⬆️ 1280x768 lamborghini, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/lamborghini_settings.txt)* ![](./assets/eevee.png) *⬆️ 960x768 Eevee, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/eevee_settings.txt)* ## Comparisons Paint Journey V2's paintings are closer to human-drawn art than Open Journey V2. Compared to models like Dreamlike Diffusion 1.0, PJ V2 tends to generate 768x768 or higher resolution images with reduced noise levels. This model is also capable of generating stunning portraits at 768x1136 resolution without duplicated faces (with [Camenduru's WebUI](https://github.com/camenduru/stable-diffusion-webui)), a difficult task to models like DreamShaper 3.3. At lower resolutions, DreamShaper 3.3 tends to generate higher quality portraits than PJ V2 in terms of noise levels, given the same (short) postive and negative prompts. However, PJ V2 can craft more stunning masterpieces with more descriptive positive and negative prompts and can still generate beautiful landscapes with shorter prompts. ## Training Instead of solely fine-tuning its Unet, Paint Journey V2 focuses on fine-tuning its text encoder with a diverse range of prompts. This allows for a seamless blend of the digital and oil painting styles into various other types of prompts, resulting in a more natural and dynamic output. This model was trained on a curated dataset of roughly 300 images hand-picked from Midjourney, [Prompt Hero](https://prompthero.com/), [PixaBay](https://pixabay.com/images/search/paintings/), Open Journey V2, and Reddit. Before training, I used R-ESRGAN 4x on many images to increase their resolution and reduce noise. ## Running out of prompts? Useful resources: [Lexica.art](https://lexica.art/), [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2), [Prompt Hero](https://prompthero.com/) ## Output Dimensions Portrait sizes include, but are not limited to, `512x768`, `768x768`, and `768x1136`. 
Landscape sizes include, but are not limited to, `768x512`, `768x768`, `1152x768`, and `1280x768`.

## Camenduru's WebUI

```
git clone -b v1.6 https://github.com/camenduru/stable-diffusion-webui
```

<details>
  <summary>Click to use Automatic1111's WebUI instead, which may not output images that are as artistic</summary>

```
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```

</details>

Download the [checkpoint](./paint_journey_v2.ckpt) and [vae](./paint_journey_v2.vae.pt) to the `./stable-diffusion-webui/models/Stable-diffusion` folder. Run `webui-user.bat`.

## 🧨 Diffusers

*Tip: putting double, triple, or quadruple brackets around a WORD (e.g. "((WORD))") will put an 'emphasis' on WORD*

```bash
pip install --upgrade diffusers transformers
```

```python
# see more sampling algorithms at https://huggingface.co/docs/diffusers/using-diffusers/schedulers#changing-the-scheduler
import random
from datetime import datetime

import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("FredZhang7/paint-journey-v2")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

def random_seed():
    return random.randint(0, 2**32 - 1)

prompt = "((oil painting)), gentle waves, bright blue sky, white sails billowing, sun glistening on the surface, salty sea air, distant horizon, calm breeze, birds soaring overhead, vibrant colors, artstation digital painting, high resolution, uhd, 4 k, 8k wallpaper"  # what you want to see
negative_prompt = "low-res, blurry, haze, dark clouds looming, choppy waves, engine failing, sails tattered, stormy winds"  # what you don't want to see

seed = random_seed()  # replace with the desired seed if needed
width, height = 1280, 768  # width and height of the generated image
cfg_scale = 7.5  # classifier-free guidance scale, smaller means more creative, 7 to 11 is usually a good range
num_inference_steps = 40  # sampling steps, 30 to 40 is usually good for Euler Ancestral

generator = torch.Generator("cuda").manual_seed(seed)

with torch.autocast("cuda"):
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=num_inference_steps,
        width=width,
        height=height,
        generator=generator,
        guidance_scale=cfg_scale,
    ).images[0]

def generate_filename(string, seed):
    # strip characters that are not allowed in file names
    invalid_chars = ["<", ">", ":", '"', "/", "\\", "|", "?", "*"]
    for char in invalid_chars:
        string = string.replace(char, "")
    return f"{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}_{seed}_{string}"

image.save(f"./{generate_filename(prompt, seed)}.png")
```

## Safety Checker V2

The official [stable diffusion safety checker](https://huggingface.co/CompVis/stable-diffusion-safety-checker) uses up 1.22GB VRAM. I recommend using [Google Safesearch Mini V2](https://huggingface.co/FredZhang7/google-safesearch-mini-v2) (220MB) to save 1.0GB VRAM.
84647894c584fbdd23bc415dba58684c
RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1
RuiqianLi
wav2vec2
15
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['uob_singlish']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,017
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m_Mrbrown_finetune1

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the uob_singlish dataset.

## Notes on this run

This run used a small self-made dataset: the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and transcribed by hand, about 4 minutes in total. The word error rate stays at 1.0, and the dataset is the most likely cause, since fine-tuning the same pre-trained model on a standard Singlish corpus previously gave good results (see RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab).

It achieves the following results on the evaluation set:
- Loss: 3.0927
- Wer: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7943        | 20.0  | 200  | 3.0597          | 1.0 |
| 2.9902        | 40.0  | 400  | 3.1604          | 1.0 |
| 2.9696        | 60.0  | 600  | 3.1112          | 1.0 |
| 2.8885        | 80.0  | 800  | 3.0234          | 1.0 |
| 2.8154        | 100.0 | 1000 | 3.0927          | 1.0 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
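### Example usage (sketch)

A minimal transcription sketch with the generic speech-recognition pipeline; `clip.wav` is a placeholder and should be 16 kHz mono audio. Given the WER of 1.0 reported above, expect poor transcriptions:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RuiqianLi/wav2vec2-xls-r-300m_Mrbrown_finetune1",
)
print(asr("clip.wav"))
```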
3bdcdbaeeca9c91d582e296d4c613e02
jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203
jonatasgrosman
wav2vec2
10
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'en']
false
true
true
497
false
# exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
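A minimal transcription sketch using the HuggingSound interface mentioned above; the audio paths are placeholders (16 kHz mono recommended), and the exact API may differ between huggingsound versions:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203")

# Each result is a dict containing the predicted transcription for one file.
audio_paths = ["sample1.wav", "sample2.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```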
d28d70a217bf4e86d6cf4d4a652e909b
ljh1/mrpc
ljh1
bert
17
8
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,042
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mrpc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5611 - Accuracy: 0.6912 - F1: 0.8158 - Combined Score: 0.7535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.12.1+cu116 - Datasets 2.6.1 - Tokenizers 0.12.1
b446b8726d1487a3a01cdc651719fe93
gustavecortal/roberta_reman
gustavecortal
roberta
11
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,782
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_reman This model is a fine-tuned version of [ibm/ColD-Fusion](https://huggingface.co/ibm/ColD-Fusion) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4272 - F1: 0.7004 - Roc Auc: 0.7862 - Accuracy: 0.4330 - Recall: 0.6831 - Precision: 0.7185 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|:------:|:---------:| | No log | 1.0 | 113 | 0.4673 | 0.5668 | 0.6955 | 0.2990 | 0.4930 | 0.6667 | | No log | 2.0 | 226 | 0.4187 | 0.6397 | 0.7403 | 0.3918 | 0.5563 | 0.7524 | | No log | 3.0 | 339 | 0.4272 | 0.7004 | 0.7862 | 0.4330 | 0.6831 | 0.7185 | | No log | 4.0 | 452 | 0.4191 | 0.6566 | 0.7539 | 0.3918 | 0.6127 | 0.7073 | | 0.3529 | 5.0 | 565 | 0.4246 | 0.6788 | 0.7706 | 0.4124 | 0.6549 | 0.7045 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+rocm5.2 - Datasets 2.8.0 - Tokenizers 0.13.2
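### Example usage (sketch)

The Roc Auc / subset-accuracy metrics above suggest a multi-label setup, so the sketch below applies a per-label sigmoid and keeps labels above a threshold; the 0.5 threshold and the input sentence are assumptions, not values from the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gustavecortal/roberta_reman"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I was thrilled and a little scared at the same time.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Keep every label whose probability clears the (assumed) 0.5 threshold.
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```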
e583bc93ab97c9f43abfcb0af8caf633
MultiBertGunjanPatrick/multiberts-seed-2-1700k
MultiBertGunjanPatrick
bert
7
4
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-2']
false
true
true
6,487
false
# MultiBERTs Seed 2 Checkpoint 1700k (uncased) Seed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1700k') model = BertModel.from_pretrained("multiberts-seed-2-1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
bc36f536363a0a15023ba7681980cff1
BSC-LT/sciroshot
BSC-LT
roberta
11
8
transformers
0
zero-shot-classification
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['zero-shot', 'text-classification', 'science', 'mag']
false
true
true
8,424
false
# SCIroShot ## Overview <details> <summary>Click to expand</summary> - **Model type:** Language Model - **Architecture:** RoBERTa-large - **Language:** English - **License:** Apache 2.0 - **Task:** Zero-Shot Text Classification - **Data:** Microsoft Academic Graph - **Additional Resources:** - [Paper]() <-- WiP (soon to be published in EACL 2023) - [GitHub](https://github.com/TeMU-BSC/sciroshot) </details> ## Model description SCIroShot is an entailment-based Zero-Shot Text Classification model that has been fine-tuned using a self-made dataset composed of scientific articles from [Microsoft Academic Graph](https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/) (MAG). The resulting model achieves SOTA performance in the scientific domain and very competitive results in other areas. ## Intended Usage This model is intended to be used for zero-shot text classification in English. ## How to use ```python from transformers import pipeline zstc = pipeline("zero-shot-classification", model="BSC-LT/sciroshot") sentence = "Leo Messi is the best player ever." candidate_labels = ["politics", "science", "sports", "environment"] template = "This example is {}" output = zstc(sentence, candidate_labels, hypothesis_template=template, multi_label=False) print(output) print(f'Predicted class: {output["labels"][0]}') ``` ## Limitations and bias No measures have been taken to estimate the bias and toxicity embedded in the model. Even though the fine-tuning data (which is of a scientific nature) may seem harmless, it is important to note that the corpus used to pre-train the vanilla model is very likely to contain a lot of unfiltered content from the internet, as stated in the [RoBERTa-large model card](https://huggingface.co/roberta-large#limitations-and-bias). ## Training ### Training data Our data builds on top of scientific-domain annotated data from Microsoft Academic Graph (MAG). This database consists of a heterogeneous graph with billions of records from both scientific publications and patents, in addition to metadata information such as the authors, institutions, journals, conferences and their citation relationships. The documents are organized in a six-level hierarchical structure of scientific concepts, where the two top-most levels are manually curated in order to guarantee a high level of accuracy. To create the training corpus, a random sample of scientific articles with a publication year between 2000 and 2021 were retrieved from MAG with their respective titles and abstracts in English. This results in over 2M documents with their corresponding Field Of Study, which was obtained from the 1-level MAG taxonomy (292 possible classes, such as "Computational biology" or "Transport Engineering"). The fine-tuning dataset was constructed in a weakly supervised manner by converting text classification data to the entailment format. Using the relationship between scientific texts and their matching concepts in the 1-level MAG taxonomy we are able to generate the premise- hypothesis pairs corresponding to the entailment label. Conversely, we generate the pairs for the neutral label by removing the actual relationship between the texts and their scientific concepts and creating a virtual relationship with those to which they are not matched. ### Training procedure The newly-created scientific dataset described in the previous section was used to fine-tune a 355M parameters RoBERTa model on the entailment task. 
To do so, the model has to compute the entailment score between every text that is fed to it and all candidate labels. The final prediction would be the highest-scoring class in a single-label classification setup, or the N classes above a certain threshold in a multi-label scenario. A subset of 52 labels from the training data were kept apart so that they could be used as a development set of fully-unseen classes. As a novelty, the validation was not performed on the entailment task (which is used a proxy) but directly on the target text classification task. This allows us to stop training at the right time via early stopping, which prevents the model from "overfitting" to the training task. This method was our way to counteract an effect that was empirically discovered during the experimentation period, where it was observed that after a certain point the model can start to worsen in the target task (ZSTC) despite still continuing to improve in the training task (RTE). The simple act of shortening the training time led to a boost in performance. Read the paper for more details on the methodology and the analysis of RTE/ZSTC correlation. ## Evaluation ### Evaluation data The model's performance was evaluated on a collection of disciplinary-labeled textual datasets, both from the scientific domain (closer to training data) and the general domain (to assess generalizability). The following table provides an overview of the number of examples and labels for each dataset: | Dataset | Labels | Size | |------------------|--------|--------| | arXiv | 11 | 3,838 | | SciDocs-MeSH | 11 | 16,433 | | SciDocs-MAG | 19 | 17,501 | | Konstanz | 24 | 10,000 | | Elsevier | 26 | 14,738 | | PubMed | 109 | 5,000 | | Topic Categorization (Yahoo! Answers) | 10 | 60,000 | | Emotion Detection (UnifyEmotion) | 10 | 15,689 | | Situation Frame Detection (Situation Typing) | 12 | 3,311 | Please refer to the paper for further details on each particular dataset. ### Evaluation results These are the official results reported in the paper: #### Scientific domain benchmark | Model | arXiv | SciDocs-MesH | SciDocs-MAG | Konstanz | Elsevier | PubMed | |-------|-------|--------------|-------------|----------|----------|--------| | [fb/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) | 33.28 | **66.18**🔥 | 51.77 | 54.62 | 28.41 | **31.59**🔥 | | SCIroShot | **42.22**🔥 | 59.34 | **69.86**🔥 | **66.07**🔥 | **54.42**🔥 | 27.93 | #### General domain benchmark | Model | Topic | Emotion | Situation | |-------|-------|---------|-----------| | RTE [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 43.8 | 12.6 | **37.2**🔥 | | FEVER [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 40.1 | 24.7 | 21.0 | | MNLI [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 37.9 | 22.3 | 15.4 | | NSP [(Ma et al., 2021)](https://aclanthology.org/2021.acl-short.99.pdf) | 50.6 | 16.5 | 25.8 | | NSP-Reverse [(Ma et al., 2021)](https://aclanthology.org/2021.acl-short.99.pdf) | 53.1 | 16.1 | 19.9 | | SCIroShot | **59.08**🔥 | **24.94**🔥 | 27.42 All the numbers reported above represent **label-wise weighted F1** except for the Topic classification dataset, which is evaluated in terms of **accuracy** following the notation from [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf). ## Additional information ### Authors - SIRIS Lab, Research Division of SIRIS Academic. - Language Technologies Unit, Barcelona Supercomputing Center. 
### Contact For further information, send an email to either <langtech@bsc.es> or <info@sirisacademic.com>. ### License This work is distributed under a [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Funding This work was partially funded by 2 projects under EU’s H2020 Research and Innovation Programme: - INODE (grant agreement No 863410). - IntelComp (grant agreement No 101004870). ### Citation ```bibtex Soon to be published in EACL 2023. ``` ### Disclaimer <details> <summary>Click to expand</summary> The model published in this repository is intended for a generalist purpose and is made available to third parties under a Apache v2.0 License. Please keep in mind that the model may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using this model (or a system based on it) or become users of the model itself, they should note that it is under their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owners and creators of the model be liable for any results arising from the use made by third parties. </details>
5d427feb5e67fbadfbc76e938402a944
tugstugi/wav2vec2-large-xlsr-53-kalmyk
tugstugi
wav2vec2
8
9
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['xal']
null
null
0
0
0
0
0
0
0
['speech', 'audio', 'automatic-speech-recognition']
false
true
true
791
false
## Info

This Wav2Vec2 model was first pretrained on 500 hours of Kalmyk TV recordings and a 1000-hour Mongolian speech recognition dataset. After that, the model was finetuned on a 300-hour [Kalmyk synthetic STT dataset](https://github.com/tugstugi/mongolian-nlp#datasets) created by a voice conversion model.

* 50% WER on a private test set created from Kalmyk TV recordings
* on clean voice recordings, the model should have much lower WER
* voice conversion info
  * 300 hours [Kalmyk synthetic STT dataset](https://github.com/tugstugi/mongolian-nlp#datasets)
  * The source voice is a Kalmyk female TTS voice
  * Target voices are from the VCTK dataset
  * example data: https://twitter.com/tugstugi/status/1409111296897912835
  * each WAV has a different text created from Kalmyk books
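## Example usage (sketch)

The card has no transcription example. A minimal sketch with greedy CTC decoding; `kalmyk_sample.wav` is a placeholder path and the audio should be 16 kHz mono:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "tugstugi/wav2vec2-large-xlsr-53-kalmyk"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load and resample the recording to 16 kHz, then decode greedily.
speech, _ = librosa.load("kalmyk_sample.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```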
1f7598a628a5e3b365ace7a22b16749e
Khalsuu/english-filipino-wav2vec2-l-xls-r-test-07
Khalsuu
wav2vec2
13
10
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['filipino_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,187
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # english-filipino-wav2vec2-l-xls-r-test-07 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6768 - Wer: 0.3755 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9255 | 2.09 | 400 | 0.7742 | 0.7694 | | 0.5792 | 4.19 | 800 | 0.5368 | 0.5250 | | 0.3611 | 6.28 | 1200 | 0.4796 | 0.4718 | | 0.2742 | 8.38 | 1600 | 0.5308 | 0.4764 | | 0.201 | 10.47 | 2000 | 0.5885 | 0.4723 | | 0.164 | 12.57 | 2400 | 0.5595 | 0.4750 | | 0.1374 | 14.66 | 2800 | 0.5836 | 0.4366 | | 0.1138 | 16.75 | 3200 | 0.6110 | 0.4628 | | 0.0991 | 18.85 | 3600 | 0.6179 | 0.4174 | | 0.0837 | 20.94 | 4000 | 0.6681 | 0.4170 | | 0.0722 | 23.04 | 4400 | 0.6665 | 0.4103 | | 0.0576 | 25.13 | 4800 | 0.7538 | 0.4068 | | 0.052 | 27.23 | 5200 | 0.6808 | 0.3844 | | 0.0449 | 29.32 | 5600 | 0.6768 | 0.3755 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
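The card leaves the usage section empty. A minimal, hedged sketch for transcribing a 16 kHz waveform with this checkpoint (file path is a placeholder; it assumes the repository ships the usual Wav2Vec2 processor files) could look like this:

```python
# Hedged sketch: load the fine-tuned checkpoint and greedily decode a 16 kHz waveform.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Khalsuu/english-filipino-wav2vec2-l-xls-r-test-07"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # "sample.wav" is a placeholder path
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```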
acaefe49bafc684b259307e91c327f24
jjmcarrascosa/vit_receipts_classifier
jjmcarrascosa
vit
128
3
transformers
1
image-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['image-classification', 'generated_from_trainer']
true
true
true
2,374
false
# vit_receipts_classifier

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cord, rvl-cdip, visual-genome and an external receipt dataset to carry out binary classification (`ticket` vs `no_ticket`). "Ticket" is used here as a synonym for "receipt".

It achieves the following results on the evaluation set, which contains pictures from the above datasets in scanned, photographed or mobile-picture formats (color and grayscale):
- Loss: 0.0116
- F1: 0.9991

## Model description

This model is a binary classifier fine-tuned from ViT to predict whether an input image is a picture/scan of receipt(s) or something else.

## Intended uses & limitations

Use this model to classify your images into tickets or non-tickets. Within the tickets group, you can then apply multimodal information extraction, such as visual named entity recognition, to extract the ticket items, amounts, total, etc. Check the CORD dataset for more information.

## Training and evaluation data

This model used 2 datasets as the positive class (`ticket`):

- `cord`
- `https://expressexpense.com/blog/free-receipt-images-ocr-machine-learning-dataset/`

For the negative class (`no_ticket`), the following datasets were used:

- A subset of `RVL-CDIP`
- A subset of `visual-genome`

## Training procedure

Datasets were loaded with different distributions of data for the positive and negative classes. Then, normalization and resizing were carried out to adapt the images to ViT's expected input. Different runs were carried out, changing the data distribution and the hyperparameters to maximize F1.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0026 | 0.28 | 500 | 0.0187 | 0.9982 |
| 0.0186 | 0.56 | 1000 | 0.0116 | 0.9991 |
| 0.0006 | 0.84 | 1500 | 0.0044 | 0.9997 |

### Framework versions

- Transformers 4.21.2
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
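The card does not show inference code. A minimal, hedged sketch of how the classifier could be queried (the file name is a placeholder for a local image) is:

```python
# Hedged sketch: classify a local image as `ticket` vs `no_ticket`.
from transformers import pipeline

classifier = pipeline("image-classification", model="jjmcarrascosa/vit_receipts_classifier")
preds = classifier("my_receipt_photo.jpg")  # placeholder path to a local image
print(preds)  # a list of {"label": ..., "score": ...} dicts
```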
234ae6e9263b6b3b979d7558e2c2ef0f
woolion/cosmoose-sd
woolion
null
23
2
diffusers
0
null
false
false
false
mit
null
null
null
2
2
0
0
0
0
0
[]
false
true
true
1,753
false
### Cosmoose-SD on Stable Diffusion via Dreambooth, trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

#### Model by woolion

This is the Stable Diffusion model fine-tuned on the Cosmoose-SD concept, taught to Stable Diffusion with DreamBooth. It can be used by modifying the `instance_prompt(s)` with `csmoos_style`. The DreamBooth step was trained on Cosmoose(.org) images, all drawn by Woolion.

You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).

You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Sample pictures of this concept:

![1668380733319.png 0](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668380733319.png)
![1668380618746.png 1](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668380618746.png)
![1668379756916.png 2](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668379756916.png)
![1668380287261.png 3](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668380287261.png)
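A minimal, hedged `diffusers` sketch (not from the original card; the prompt and sampling settings are purely illustrative) for using the concept token locally:

```python
# Hedged sketch: load the DreamBooth checkpoint with diffusers and use the `csmoos_style` token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("woolion/cosmoose-sd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a cosmonaut riding a moose, csmoos_style"  # illustrative prompt that includes the concept token
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cosmoose_sample.png")
```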
6dbaf7f8d1ba4ffc1815ace7d77be4fc
wicharnkeisei/thai-xlm-roberta-base-squad2
wicharnkeisei
xlm-roberta
12
72
transformers
0
question-answering
true
false
false
cc-by-4.0
['th']
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,275
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# thai-squad

This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) on a Thai dataset from [iApp Technology Co., Ltd.](https://github.com/iapp-technology/iapp-wiki-qa-dataset).

## Intended uses & limitations

This model is intended for Thai question answering tasks.

## Training and evaluation data

Trained and evaluated on the [iApp Technology Co., Ltd.](https://github.com/iapp-technology/iapp-wiki-qa-dataset) dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

## Performance

Evaluated on the SQuAD 1.0 test dataset:

```
"exact": 62.51728907330567
"f1": 73.62388955749958
"total": 723
```

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
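The card does not include inference code. A minimal, hedged sketch (the Thai question and context below are illustrative, not taken from the training data) would be:

```python
# Hedged sketch: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="wicharnkeisei/thai-xlm-roberta-base-squad2")
result = qa(
    question="ประเทศไทยมีเมืองหลวงชื่อว่าอะไร",        # illustrative question: "What is the capital of Thailand?"
    context="กรุงเทพมหานครเป็นเมืองหลวงของประเทศไทย",  # illustrative context: "Bangkok is the capital of Thailand."
)
print(result["answer"], result["score"])
```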
d818c1f6cfe02c7f97bf2ec5972cd0e1
jonatasgrosman/exp_w2v2t_ru_vp-es_s35
jonatasgrosman
wav2vec2
10
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ru']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ru']
false
true
true
468
false
# exp_w2v2t_ru_vp-es_s35 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
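A minimal, hedged usage sketch via the HuggingSound tool mentioned above (the audio paths are placeholders):

```python
# Hedged sketch: transcribe audio files with the HuggingSound wrapper used to fine-tune this model.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_vp-es_s35")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```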
d5d052bc02a8e02e9e1a791776f4d66c
sameearif88/wav2vec2-base-timit-demo-colab4
sameearif88
wav2vec2
12
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,341
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab4 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9149 - Wer: 0.5907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9363 | 13.89 | 500 | 2.7532 | 1.0 | | 0.9875 | 27.78 | 1000 | 0.9149 | 0.5907 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
f0ca6b2bd6007de73427589d4f49cdd1
microsoft/swinv2-large-patch4-window12-192-22k
microsoft
swinv2
5
1,627
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k']
null
0
0
0
0
0
0
0
['vision', 'image-classification']
false
true
true
3,782
false
# Swin Transformer v2 (large-sized model) Swin Transformer v2 model pre-trained on ImageNet-21k at resolution 192x192. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally. Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png) [Source](https://paperswithcode.com/method/swin-transformer) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21k ImageNet classes: ```python from transformers import AutoImageProcessor, AutoModelForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-large-patch4-window12-192-22k") model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-large-patch4-window12-192-22k") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 21k ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-09883, author = {Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo}, title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution}, journal = {CoRR}, volume = {abs/2111.09883}, year = {2021}, url = {https://arxiv.org/abs/2111.09883}, eprinttype = {arXiv}, eprint = {2111.09883}, timestamp = {Thu, 02 Dec 2021 15:54:22 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
f435f58b9645957b7279d6a4ac714042
facebook/s2t-large-librispeech-asr
facebook
speech_to_text
11
262
transformers
7
automatic-speech-recognition
true
true
false
mit
['en']
['librispeech_asr']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
true
true
true
5,031
false
# S2T-LARGE-LIBRISPEECH-ASR

`s2t-large-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).

## Model description

S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard autoregressive cross-entropy loss and generates the transcripts autoregressively.

## Intended uses & limitations

This model can be used for end-to-end speech recognition (ASR). See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.

### How to use

As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model.

*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.*

You could either install those as extra speech dependencies with `pip install "transformers[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`.

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-large-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-large-librispeech-asr")

def map_to_array(batch):
    # read the audio file and attach the raw waveform to the example
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)
ds = ds.map(map_to_array)

input_features = processor(
    ds["speech"][0],
    sampling_rate=16_000,
    return_tensors="pt"
).input_features  # Batch size 1
generated_ids = model.generate(input_ids=input_features)

transcription = processor.batch_decode(generated_ids)
```

#### Evaluation on LibriSpeech Test

The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.
```python from datasets import load_dataset, load_metric from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor import soundfile as sf librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset wer = load_metric("wer") model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-large-librispeech-asr").to("cuda") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-large-librispeech-asr", do_upper_case=True) def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch librispeech_eval = librispeech_eval.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt") input_features = features.input_features.to("cuda") attention_mask = features.attention_mask.to("cuda") gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask) batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True) return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"]) print("WER:", wer(predictions=result["transcription"], references=result["text"])) ``` *Result (WER)*: | "clean" | "other" | |:-------:|:-------:| | 3.3 | 7.5 | ## Training data The S2T-LARGE-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of approximately 1000 hours of 16kHz read English speech. ## Training procedure ### Preprocessing The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization) is applied to each example. The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000. ### Training The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779). The encoder receives speech features, and the decoder generates the transcripts autoregressively. ### BibTeX entry and citation info ```bibtex @inproceedings{wang2020fairseqs2t, title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, year = {2020}, } ```
b3d996a520b8cf5283933cad8d9d58ba
gokuls/distilbert_add_GLUE_Experiment_qqp_384
gokuls
distilbert
17
4
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,049
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_qqp_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.4096 - Accuracy: 0.8095 - F1: 0.7372 - Combined Score: 0.7734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.5518 | 1.0 | 1422 | 0.5289 | 0.7376 | 0.6535 | 0.6955 | | 0.4901 | 2.0 | 2844 | 0.4655 | 0.7772 | 0.6744 | 0.7258 | | 0.4098 | 3.0 | 4266 | 0.4096 | 0.8095 | 0.7372 | 0.7734 | | 0.3273 | 4.0 | 5688 | 0.4343 | 0.8211 | 0.7536 | 0.7873 | | 0.2681 | 5.0 | 7110 | 0.4322 | 0.8286 | 0.7519 | 0.7902 | | 0.223 | 6.0 | 8532 | 0.4789 | 0.8301 | 0.7502 | 0.7901 | | 0.1883 | 7.0 | 9954 | 0.4715 | 0.8329 | 0.7663 | 0.7996 | | 0.1603 | 8.0 | 11376 | 0.5090 | 0.8346 | 0.7577 | 0.7961 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.8.0 - Tokenizers 0.13.2
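No usage example is provided in the card. A minimal, hedged sketch for scoring a question pair is shown below; note that the predicted labels may appear as generic `LABEL_0`/`LABEL_1` unless the checkpoint's config maps them to `not_duplicate`/`duplicate`, and the two questions are purely illustrative:

```python
# Hedged sketch: QQP-style duplicate-question detection with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/distilbert_add_GLUE_Experiment_qqp_384")
pred = classifier({
    "text": "How do I learn Python quickly?",               # illustrative question 1
    "text_pair": "What is the fastest way to learn Python?" # illustrative question 2
})
print(pred)  # e.g. {"label": ..., "score": ...}
```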
52837edb2d442ffd06ef5aa1e094abb8
siddharth963/vit-base-patch16-224-in21k-finetuned-cassava3
siddharth963
vit
16
7
transformers
0
image-classification
true
false
false
apache-2.0
null
['image_folder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,911
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-cassava3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.3419 - Accuracy: 0.8855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5624 | 0.99 | 133 | 0.5866 | 0.8166 | | 0.4717 | 1.99 | 266 | 0.4245 | 0.8692 | | 0.4105 | 2.99 | 399 | 0.3708 | 0.8811 | | 0.3753 | 3.99 | 532 | 0.3646 | 0.8787 | | 0.2997 | 4.99 | 665 | 0.3655 | 0.8780 | | 0.3176 | 5.99 | 798 | 0.3545 | 0.8822 | | 0.2849 | 6.99 | 931 | 0.3441 | 0.8850 | | 0.2931 | 7.99 | 1064 | 0.3419 | 0.8855 | | 0.27 | 8.99 | 1197 | 0.3419 | 0.8848 | | 0.2927 | 9.99 | 1330 | 0.3403 | 0.8853 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
2e1cf0cda0512e937fb06acf5b524ab7
grantslewis/spelling-correction-english-base-location-unique-2-2
grantslewis
bart
13
3
transformers
0
text2text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,488
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spelling-correction-english-base-location-unique-2-2 This model is a fine-tuned version of [grantslewis/spelling-correction-english-base-location-unique-2](https://huggingface.co/grantslewis/spelling-correction-english-base-location-unique-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8272 - Cer: 0.1685 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 70 - eval_batch_size: 70 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 470 | 0.8853 | 0.1740 | | 0.808 | 2.0 | 940 | 0.8494 | 0.1679 | | 0.7434 | 3.0 | 1410 | 0.8288 | 0.1700 | | 0.7324 | 4.0 | 1880 | 0.8272 | 0.1685 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.12.1
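The card gives no inference example. A minimal, hedged sketch follows; the exact input format expected by this fine-tune is an assumption (a plain misspelled sentence is passed in):

```python
# Hedged sketch: run the seq2seq spelling-correction checkpoint on a misspelled sentence.
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="grantslewis/spelling-correction-english-base-location-unique-2-2",
)
print(corrector("teh quick brwon fox jumsp over the lazy dog", max_length=64))
```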
db43f23d5d79742949ecae7377de64c0
jgriffi/xlm-roberta-base-finetuned-panx-all
jgriffi
xlm-roberta
10
11
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1448 - F1: 0.8881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3029 | 1.0 | 1669 | 0.2075 | 0.7971 | | 0.164 | 2.0 | 3338 | 0.1612 | 0.8680 | | 0.1025 | 3.0 | 5007 | 0.1448 | 0.8881 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
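No usage snippet is included in the card. A minimal, hedged sketch for multilingual NER with this checkpoint (the input sentence is illustrative) is:

```python
# Hedged sketch: named-entity recognition with the PAN-X fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jgriffi/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Angela Merkel besuchte Paris im Juli."))  # illustrative German sentence
```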
521ce5cb91456a89ba5f66bef9c15eee
danielbispov/t5-small-finetuned-fi-to-en
danielbispov
t5
14
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['wmt19']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,254
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-fi-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 3.5235 - Bleu: 1.129 - Gen Len: 17.088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:| | 3.414 | 1.0 | 6250 | 3.5235 | 1.129 | 17.088 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
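The card shows no inference code. A hedged sketch follows; the `translate Finnish to English:` task prefix is an assumption based on the usual T5 convention, not something stated in the card:

```python
# Hedged sketch: Finnish-to-English translation with the fine-tuned t5-small checkpoint.
from transformers import pipeline

translator = pipeline("text2text-generation", model="danielbispov/t5-small-finetuned-fi-to-en")
text = "Hyvää huomenta, mitä kuuluu?"  # illustrative Finnish input: "Good morning, how are you?"
print(translator("translate Finnish to English: " + text, max_length=64))
```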
6019e5142f064f531625565a3cfef683
Geotrend/bert-base-en-fr-es-de-zh-cased
Geotrend
bert
8
4
transformers
0
fill-mask
true
true
true
apache-2.0
['multilingual']
['wikipedia']
null
1
1
0
0
0
0
0
[]
false
true
true
1,319
false
# bert-base-en-fr-es-de-zh-cased

We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.

Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model, which preserves the original accuracy.

For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).

## How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased")
```

To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).

### How to cite

```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
  author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
  booktitle={SustaiNLP / EMNLP},
  year={2020}
}
```

## Contact

Please contact amine@geotrend.fr for any question, feedback or request.
60ae37e28ca37ed29f43531ae4ffc9ff
google/t5-base-lm-adapt
google
t5
10
14,383
transformers
12
text2text-generation
true
true
false
apache-2.0
['en']
['c4']
null
1
1
0
0
0
0
0
['t5-lm-adapt']
false
true
true
3,127
false
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted

## Version 1.1 - LM-Adapted

[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-base):

- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layers.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

and is pretrained on both the denoising and language modeling objectives.

More specifically, this checkpoint is initialized from [T5 Version 1.1 - Base](https://huggingface.co/google/t5-v1_1-base) and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). This adaptation improves the ability of the model to be used for prompt tuning.

**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
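The card contains no usage snippet. A minimal, hedged sketch of loading the checkpoint for generation is shown below; the prompt is purely illustrative, since this LM-adapted checkpoint is normally used as a starting point for prompt tuning or fine-tuning rather than as an instruction-following model:

```python
# Hedged sketch: load the LM-adapted checkpoint and continue a text prompt.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-base-lm-adapt")
model = T5ForConditionalGeneration.from_pretrained("google/t5-base-lm-adapt")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```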
44670b16c1ec31ac0d39e37406643d91
muhtasham/bert-tiny-finetuned-finer
muhtasham
bert
10
14
transformers
1
token-classification
true
false
false
apache-2.0
null
['finer-139', 'nlpaueb/finer-139']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,564
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertiny-finetuned-finer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the finer-139 dataset. It achieves the following results on the evaluation set: - Loss: 0.0882 - Precision: 0.5339 - Recall: 0.0360 - F1: 0.0675 - Accuracy: 0.9847 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0871 | 1.0 | 11255 | 0.0952 | 0.0 | 0.0 | 0.0 | 0.9843 | | 0.0864 | 2.0 | 22510 | 0.0895 | 0.7640 | 0.0082 | 0.0162 | 0.9844 | | 0.0929 | 3.0 | 33765 | 0.0882 | 0.5339 | 0.0360 | 0.0675 | 0.9847 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
ed3712e679a77b89b2c4fde095540297
BlackKakapo/t5-base-paraphrase-ro
BlackKakapo
t5
8
1
transformers
0
text2text-generation
true
false
false
['apache-2.0']
['ro']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,662
false
# Romanian paraphrase

![v1.0](https://img.shields.io/badge/V.1-03.08.2022-brightgreen)

Fine-tuned t5-base model for paraphrasing. Since there is no Romanian dataset for paraphrasing, I had to create my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v1). The dataset contains ~60k examples.

### How to use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/t5-base-paraphrase-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/t5-base-paraphrase-ro")
```

### Or

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("BlackKakapo/t5-base-paraphrase-ro")
tokenizer = T5TokenizerFast.from_pretrained("BlackKakapo/t5-base-paraphrase-ro")
```

### Generate

```python
import torch

# Keep the model and the inputs on the same device.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

final_outputs = []

text = "Am impresia că fac multe greșeli."

encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)

beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    max_length=256,
    top_k=10,
    top_p=0.9,
    early_stopping=False,
    num_return_sequences=5
)

for beam_output in beam_outputs:
    text_para = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)

    # keep the first candidate that differs from the input and has not been seen yet
    if text.lower() != text_para.lower() and text_para not in final_outputs:
        final_outputs.append(text_para)
        break

print(final_outputs)
```

### Output

```out
['Cred că fac multe greșeli.']
```
c0cb3ae4c43cbe710171965bbd2e3eec
elRivx/gGWoman
elRivx
null
3
0
null
2
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
1,456
false
# gGWoman

This is my new Stable Diffusion custom model that brings you a generic woman, generated with non-licenced images. The magic word is: gGWoman

If you enjoy my work, please consider supporting me:
[![Buy me a coffee](https://badgen.net/badge/icon/buymeacoffee?icon=buymeacoffee&label)](https://www.buymeacoffee.com/elrivx)

Examples:

<img src=https://imgur.com/CQR59kd.png width=30% height=30%>
<img src=https://imgur.com/WVh9kE1.png width=30% height=30%>
<img src=https://imgur.com/y0twso7.png width=30% height=30%>
<img src=https://imgur.com/FVxkzzj.png width=30% height=30%>

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
bca55f5399ed5159b8ab04eabd6c4de3
henryscheible/eval_masked_102_cola
henryscheible
null
13
0
null
0
null
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,023
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eval_masked_102_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6601 - Matthews Correlation: 0.5989 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
ce1714860b61afa3cf7bee51b127aca0
riddhi17pawar/finetuning-sentiment-model-3000-samples
riddhi17pawar
distilbert
13
9
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,055
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3029 - Accuracy: 0.8667 - F1: 0.8675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
063323cade8c0b4394d386fbc195d1f3
lmqg/flan-t5-small-squad-qg
lmqg
t5
17
6
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['en']
['lmqg/qg_squad']
null
0
0
0
0
0
0
0
['question generation']
true
true
true
5,263
false
# Model Card of `lmqg/flan-t5-small-squad-qg` This model is fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/flan-t5-small-squad-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 40.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 30.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 24.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 25.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 63.77 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 51.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/flan-t5-small-squad-ae`](https://huggingface.co/lmqg/flan-t5-small-squad-ae). 
[raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_flan-t5-small-squad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 63.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 63.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 92.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 63.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: paragraph_answer - output_types: question - prefix_types: ['qg'] - model: google/flan-t5-small - max_length: 512 - max_length_output: 32 - epoch: 7 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 1 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
0fe8731b92cf998c5be5af038305569c
Ahmedshabana/distilbert-base-uncased-finetuned-mnli
Ahmedshabana
distilbert
20
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,473
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1091 - Accuracy: 0.42 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 32 | 1.1005 | 0.28 | | No log | 2.0 | 64 | 1.1038 | 0.3 | | No log | 3.0 | 96 | 1.1074 | 0.32 | | No log | 4.0 | 128 | 1.1088 | 0.42 | | No log | 5.0 | 160 | 1.1091 | 0.42 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
53033267a267da6630207084b42948d1
SGrannemann/marian-finetuned-kde4-en-to-fr
SGrannemann
marian
4
1
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,453
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6859 - Validation Loss: 0.8062 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0582 | 0.8792 | 0 | | 0.7977 | 0.8250 | 1 | | 0.6859 | 0.8062 | 2 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 1.18.4 - Tokenizers 0.11.6
51b53178d9b020c3d653ef87c9cd8c65
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
CAMeL-Lab
bert
11
45
transformers
0
text-classification
true
true
false
apache-2.0
['ar']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,894
false
# CAMeLBERT-MSA DID MADAR Twitter-5 Model ## Model description **CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'Egypt', 'score': 0.5741344094276428}, {'label': 'Kuwait', 'score': 0.5225679278373718}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
67d49a8509d40cf7ccaadde78e9a8cce
coreml/coreml-kurzgesagtish
coreml
null
3
0
null
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['coreml', 'stable-diffusion', 'text-to-image']
false
true
true
794
false
# Core ML Converted Model:

- This model was converted to Core ML for use on Apple Silicon devices. Instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-files-to-Core-ML).<br>
- Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
- The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.<br>

# Kurzgesagtish:

Source(s): [CivitAI](https://civitai.com/models/1212/kurzgesagtish)

Here it is, the kurzgesagtish model. Honestly, I didn't know what to call it, but it kept being compared to the style used on the Kurzgesagt YouTube channel. Hope you all make amazing things :)

Activation prompt: `illustration style kurzgesagtish`
b7a283c3d3fc290f16d2b1c6b5a8d60d
anas-awadalla/roberta-large-houlsby-few-shot-k-1024-finetuned-squad-seed-4
anas-awadalla
null
19
0
null
0
null
false
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,095
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-houlsby-few-shot-k-1024-finetuned-squad-seed-4 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 4 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20.0 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
789a9ef0439b9bfa20d57e4eec928bff
autoevaluate/glue-mrpc
autoevaluate
distilbert
10
2
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
2
2
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,516
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # glue-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3654 - Accuracy: 0.8554 - F1: 0.8998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.4039 | 0.8039 | 0.8611 | | No log | 2.0 | 460 | 0.3654 | 0.8554 | 0.8998 | | 0.4368 | 3.0 | 690 | 0.4146 | 0.8407 | 0.8885 | | 0.4368 | 4.0 | 920 | 0.5756 | 0.8456 | 0.8941 | | 0.1744 | 5.0 | 1150 | 0.5523 | 0.8456 | 0.8916 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.11.6
066ff49976a98c7b3c5e34a2d7cfc5bb
Rahul-AppOrchid/donut-base-sroie
Rahul-AppOrchid
vision-encoder-decoder
14
1
transformers
0
null
true
false
false
mit
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
981
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
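The card leaves the usage section empty. A heavily hedged sketch of Donut-style document parsing with this checkpoint follows; the task start token used for this particular SROIE fine-tune is not documented, so `"<s>"` below is only a placeholder, and `receipt.jpg` is a placeholder path:

```python
# Hedged sketch: document parsing with a Donut checkpoint. The decoder start token for this
# fine-tune is unknown; "<s>" is a placeholder and may need to be replaced by the actual task token.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Rahul-AppOrchid/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("Rahul-AppOrchid/donut-base-sroie")

image = Image.open("receipt.jpg").convert("RGB")  # placeholder path to a receipt scan
pixel_values = processor(image, return_tensors="pt").pixel_values

decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(processor.token2json(sequence))
```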
48bd1f09a6f668b15ffde2f474ca8506
pmfsl/pt-bert-large-finetuned-rte
pmfsl
bert
10
2
transformers
0
text-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,479
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# pmfsl/pt-bert-large-finetuned-rte

This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3300
- Validation Loss: 0.1597
- Train Accuracy: 0.9432
- Train F1: 0.9439
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 406, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Accuracy | Train F1 | Epoch |
|:----------:|:---------------:|:--------------:|:--------:|:-----:|
| 0.3300     | 0.1597          | 0.9432         | 0.9439   | 0     |

### Framework versions

- Transformers 4.26.1
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
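A minimal TensorFlow inference sketch (the record above flags this checkpoint as TensorFlow-only); the Portuguese premise/hypothesis pair is illustrative, and the label names come from the model config:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "pmfsl/pt-bert-large-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a premise/hypothesis entailment task: encode the pair together.
premise = "O time venceu a partida por três gols de diferença."
hypothesis = "O time ganhou o jogo."
inputs = tokenizer(premise, hypothesis, return_tensors="tf")

logits = model(**inputs).logits
predicted_class = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_class])
```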
ecfc235017581ab082aaafa75553a954
rhitabrat/bert-finetuned-news
rhitabrat
bert
8
3
transformers
0
question-answering
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,294
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# rhitabrat/bert-finetuned-news

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7333
- Epoch: 1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 19448, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 1.1505     | 0     |
| 0.7333     | 1     |

### Framework versions

- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
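A minimal question-answering sketch, assuming the checkpoint exposes TensorFlow weights (the pytorch flag above is false); the context/question pair is illustrative, since the training data is not documented:

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

model_id = "rhitabrat/bert-finetuned-news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForQuestionAnswering.from_pretrained(model_id)

# Build an extractive-QA pipeline on top of the TensorFlow model.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer, framework="tf")

result = qa(
    question="Who announced the new policy?",
    context="The finance ministry announced the new policy on Monday.",
)
print(result["answer"], result["score"])
```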
c8596d9f7f22c42439c797c0c8aec6f4
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola_256
gokuls
distilbert
17
2
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,028
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_sa_GLUE_Experiment_logit_kd_cola_256

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6808
- Matthews Correlation: 0.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.8053        | 1.0   | 34   | 0.6856          | 0.0                  |
| 0.7977        | 2.0   | 68   | 0.6837          | 0.0                  |
| 0.7952        | 3.0   | 102  | 0.6832          | 0.0                  |
| 0.7934        | 4.0   | 136  | 0.6852          | 0.0                  |
| 0.7703        | 5.0   | 170  | 0.6808          | 0.0                  |
| 0.7008        | 6.0   | 204  | 0.6885          | 0.0675               |
| 0.6386        | 7.0   | 238  | 0.7263          | 0.1037               |
| 0.6059        | 8.0   | 272  | 0.7450          | 0.0825               |
| 0.577         | 9.0   | 306  | 0.7559          | 0.1071               |
| 0.5531        | 10.0  | 340  | 0.7794          | 0.1048               |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
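A minimal sketch of running this checkpoint through the text-classification pipeline; the example sentences are illustrative, and since the best reported Matthews correlation above is 0.0, the call demonstrates the interface rather than useful predictions:

```python
from transformers import pipeline

# CoLA is single-sentence linguistic acceptability classification.
classifier = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola_256",
)

print(classifier("The book was written by the author."))
print(classifier("Book the was author by written."))
```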
367a7e7841fb3fc1c81c1437df2b8664