Dataset columns (type and value range as reported by the dataset viewer; ⌀ marks a nullable column):

| Column | Type | Range / classes |
|:--|:--|:--|
| repo_id | string | length 4–110 |
| author | string ⌀ | length 2–27 |
| model_type | string ⌀ | length 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string ⌀ | length 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | string ⌀ | length 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2–30 |
| languages | string ⌀ | length 4–1.63k |
| datasets | string ⌀ | length 2–2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | length 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | length 0–598k |
| hash | string | length 32–32 |
clementchadebec/reproduced_aae
clementchadebec
null
8
0
pythae
0
null
false
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['pythae', 'reproducibility']
false
true
true
660
false
### Downloading this model from the Hub This model was trained with pythae. It can be downloaded or reloaded using the `load_from_hf_hub` method: ```python >>> from pythae.models import AutoModel >>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_aae") ``` ## Reproducibility This trained model reproduces the results of Table 1 in [1]. | Model | Dataset | Metric | Obtained value | Reference value | |:---:|:---:|:---:|:---:|:---:| | AAE | CELEBA 64 | FID | 43.3 | 42 | [1] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schölkopf. Wasserstein auto-encoders. In 6th International Conference on Learning Representations (ICLR 2018), 2018.
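Beyond reloading, pythae also ships sampler classes for generating from a trained model. The sketch below is a hedged illustration of that route, assuming pythae's documented `NormalSampler` interface; names and output shapes are not verified against this checkpoint:

```python
# Minimal sampling sketch, assuming pythae's NormalSampler API.
from pythae.models import AutoModel
from pythae.samplers import NormalSampler

model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_aae")
sampler = NormalSampler(model=model)       # draws latents from N(0, I)
generated = sampler.sample(num_samples=4)  # decodes latents into images
print(generated.shape)                     # expected (4, C, 64, 64) for CELEBA 64
```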
618982bda9f3d7471c3beaf0221b52ca
roscazo/DISO_bsc_test16
roscazo
roberta
14
3
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,955
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DISO_bsc_test16 This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1732 - Diso Precision: 0.7577 - Diso Recall: 0.7757 - Diso F1: 0.7666 - Diso Number: 4552 - Overall Precision: 0.7577 - Overall Recall: 0.7757 - Overall F1: 0.7666 - Overall Accuracy: 0.9732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0948 | 1.0 | 1400 | 0.0766 | 0.7157 | 0.7594 | 0.7369 | 4552 | 0.7157 | 0.7594 | 0.7369 | 0.9710 | | 0.0631 | 2.0 | 2800 | 0.0818 | 0.7442 | 0.7599 | 0.7520 | 4552 | 0.7442 | 0.7599 | 0.7520 | 0.9726 | | 0.0454 | 3.0 | 4200 | 0.0842 | 0.7544 | 0.7654 | 0.7599 | 4552 | 0.7544 | 0.7654 | 0.7599 | 0.9728 | | 0.0311 | 4.0 | 5600 | 0.1113 | 0.7678 | 0.7700 | 0.7689 | 4552 | 0.7678 | 0.7700 | 0.7689 | 0.9732 | | 0.0217 | 5.0 | 7000 | 0.1231 | 0.7745 | 0.7687 | 0.7716 | 4552 | 0.7745 | 0.7687 | 0.7716 | 0.9743 | | 0.015 | 6.0 | 8400 | 0.1482 | 0.7651 | 0.7733 | 0.7691 | 4552 | 0.7651 | 0.7733 | 0.7691 | 0.9735 | | 0.0101 | 7.0 | 9800 | 0.1498 | 0.7576 | 0.7709 | 0.7642 | 4552 | 0.7576 | 0.7709 | 0.7642 | 0.9730 | | 0.0073 | 8.0 | 11200 | 0.1732 | 0.7577 | 0.7757 | 0.7666 | 4552 | 0.7577 | 0.7757 | 0.7666 | 0.9732 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
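As a usage illustration (not part of the original card), the checkpoint can be run through the `transformers` token-classification pipeline; the Spanish clinical sentence is invented and the `DISO` label name is inferred from the metrics above:

```python
# Hedged inference sketch for the disease-mention tagger.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="roscazo/DISO_bsc_test16",
    aggregation_strategy="simple",  # merge B-/I- word pieces into entity spans
)
print(ner("El paciente presenta fiebre y cefalea intensa."))
# -> list of dicts with entity_group (e.g. DISO), score, word, start, end
```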
9ad7f3bd67d8c0a90b82847fd66aa875
vesteinn/IceBERT-finetuned-iec-sentence-bs16
vesteinn
roberta
11
3
transformers
0
text-classification
true
false
false
gpl-3.0
null
null
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,557
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IceBERT-finetuned-iec-sentence-bs16 This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2508 - Matthews Correlation: 0.8169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.5278 | 1.0 | 3640 | 0.4777 | 0.5396 | | 0.4648 | 2.0 | 7280 | 0.3886 | 0.6437 | | 0.3807 | 3.0 | 10920 | 0.3478 | 0.7060 | | 0.3061 | 4.0 | 14560 | 0.2523 | 0.8083 | | 0.2477 | 5.0 | 18200 | 0.2508 | 0.8169 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.8.0 - Datasets 1.15.1 - Tokenizers 0.10.3
d9f01386cf5d2e6cf46102fa13a0e6f2
jaeyeon/korean-aihub-learning-math-16batch
jaeyeon
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,113
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean-aihub-learning-math-16batch This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1497 - Wer: 0.5260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 20 | 32.0718 | 1.0 | | No log | 2.0 | 40 | 24.7403 | 1.0808 | | No log | 3.0 | 60 | 5.8389 | 1.0 | | No log | 4.0 | 80 | 4.8543 | 1.0 | | 19.6583 | 5.0 | 100 | 4.4453 | 1.0 | | 19.6583 | 6.0 | 120 | 4.3923 | 1.0 | | 19.6583 | 7.0 | 140 | 4.2902 | 1.0 | | 19.6583 | 8.0 | 160 | 3.9026 | 0.9959 | | 19.6583 | 9.0 | 180 | 3.0616 | 0.9740 | | 3.7358 | 10.0 | 200 | 2.2049 | 0.8534 | | 3.7358 | 11.0 | 220 | 1.6666 | 0.7288 | | 3.7358 | 12.0 | 240 | 1.4123 | 0.6603 | | 3.7358 | 13.0 | 260 | 1.3113 | 0.6164 | | 3.7358 | 14.0 | 280 | 1.2269 | 0.6356 | | 0.8398 | 15.0 | 300 | 1.2349 | 0.5945 | | 0.8398 | 16.0 | 320 | 1.1970 | 0.5658 | | 0.8398 | 17.0 | 340 | 1.2144 | 0.5562 | | 0.8398 | 18.0 | 360 | 1.2551 | 0.5658 | | 0.8398 | 19.0 | 380 | 1.1971 | 0.5493 | | 0.2649 | 20.0 | 400 | 1.1967 | 0.5247 | | 0.2649 | 21.0 | 420 | 1.2796 | 0.5849 | | 0.2649 | 22.0 | 440 | 1.2156 | 0.5521 | | 0.2649 | 23.0 | 460 | 1.2118 | 0.5425 | | 0.2649 | 24.0 | 480 | 1.1637 | 0.5384 | | 0.1801 | 25.0 | 500 | 1.1846 | 0.5562 | | 0.1801 | 26.0 | 520 | 1.1927 | 0.5534 | | 0.1801 | 27.0 | 540 | 1.2015 | 0.5384 | | 0.1801 | 28.0 | 560 | 1.2077 | 0.5397 | | 0.1801 | 29.0 | 580 | 1.1554 | 0.5260 | | 0.1364 | 30.0 | 600 | 1.1497 | 0.5260 | ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
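For completeness, a hedged transcription sketch (not from the original card); the audio path is a placeholder and, as is standard for wav2vec2 checkpoints, the input should be sampled at 16 kHz:

```python
# Hedged ASR inference sketch; requires ffmpeg for audio decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jaeyeon/korean-aihub-learning-math-16batch",
)
print(asr("math_lecture_clip.wav")["text"])  # placeholder 16 kHz file
```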
33791677d4908c0af629ea1512354bd7
nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp
nlp-waseda
roberta
7
323
transformers
1
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
['wikipedia', 'cc100']
null
0
0
0
0
0
0
0
[]
false
true
true
2,299
false
# nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp ## Model description This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the Japanese portion of CC-100 with a maximum sequence length of 512. ## How to use You can use this model for masked language modeling as follows: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp") model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp") sentence = '早稲田大学で自然言語処理を[MASK]する。' encoding = tokenizer(sentence, return_tensors='pt') ... ``` You can fine-tune this model on downstream tasks. ## Tokenization `BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` still does not support fast tokenization. You can still do the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-large-japanese-seq512](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512). Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into subword tokens by [sentencepiece](https://github.com/google/sentencepiece). ## Vocabulary The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece). ## Training procedure This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100 from the checkpoint of [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese). It took a week using eight NVIDIA A100 GPUs. The following hyperparameters were used during pretraining: - learning_rate: 6e-5 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 4120 (max_seq_length=128), 4032 (max_seq_length=512) - max_seq_length: 512 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6 - lr_scheduler_type: linear - training_steps: 670000 (max_seq_length=128) + 70000 (max_seq_length=512) - warmup_steps: 10000 - mixed_precision_training: Native AMP
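The truncated snippet above can equivalently be driven through the fill-mask pipeline; a hedged sketch follows (Juman++ must be installed locally for the tokenizer's automatic word segmentation to work):

```python
# Hedged fill-mask sketch for the card's example sentence.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp",
)
for pred in fill_mask("早稲田大学で自然言語処理を[MASK]する。"):
    print(pred["token_str"], round(pred["score"], 3))
```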
ad8e1b35b922e58246be07ff9496d771
malay-patel/bert-finetuned-squad-nq
malay-patel
roberta
9
5
transformers
0
question-answering
false
true
false
apache-2.0
null
null
null
2
0
2
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,716
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # malay-patel/bert-finetuned-squad-nq This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5461 - Train End Logits Accuracy: 0.6253 - Train Start Logits Accuracy: 0.6120 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 861, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:-----:| | 1.5548 | 0.6236 | 0.6172 | 0 | | 1.5423 | 0.6286 | 0.6192 | 1 | | 1.5461 | 0.6253 | 0.6120 | 2 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.7.1 - Tokenizers 0.13.2
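Since this repo ships TensorFlow weights, here is a hedged inference sketch with the question-answering pipeline pinned to the TF backend; the question/context pair is invented:

```python
# Hedged extractive-QA sketch using the TensorFlow weights.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="malay-patel/bert-finetuned-squad-nq",
    framework="tf",
)
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```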
15d230072a6ee14ba8fc6e8010c34f86
varun1/bert-finetuned-squad
varun1
bert
8
5
transformers
0
question-answering
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,264
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # varun1/bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2322 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5546, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2322 | 0 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.7.1 - Tokenizers 0.13.2
608cbce46cd757fd1e833c0bc7477797
emmyapi/distilbart-podimo-data-eval-1
emmyapi
bart
13
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,206
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbart-podimo-data-eval-1 This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3983 - Rouge1: 34.6132 - Rouge2: 7.9113 - Rougel: 17.9418 - Rougelsum: 31.5251 - Gen Len: 141.5587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 4.1934 | 0.98 | 44 | 3.7592 | 32.8148 | 6.457 | 16.8696 | 29.6986 | 141.4441 | | 3.6362 | 1.98 | 88 | 3.5809 | 33.0442 | 6.851 | 17.1323 | 30.1382 | 141.324 | | 3.3554 | 2.98 | 132 | 3.4835 | 33.66 | 7.1375 | 17.5152 | 30.5783 | 141.2793 | | 3.1566 | 3.98 | 176 | 3.4301 | 34.524 | 7.757 | 17.995 | 31.5808 | 141.7151 | | 3.0107 | 4.98 | 220 | 3.4099 | 34.3459 | 7.7512 | 18.0605 | 31.4531 | 141.4106 | | 2.901 | 5.98 | 264 | 3.4073 | 35.028 | 7.9099 | 17.9907 | 31.8304 | 141.5419 | | 2.8246 | 6.98 | 308 | 3.3983 | 34.1937 | 7.8606 | 17.7858 | 31.1331 | 141.5279 | | 2.7306 | 7.98 | 352 | 3.3983 | 34.6132 | 7.9113 | 17.9418 | 31.5251 | 141.5587 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.11.0 - Datasets 2.2.1 - Tokenizers 0.12.1
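A hedged usage sketch (not from the card): the checkpoint is a seq2seq BART variant, so it plugs into the summarization pipeline; the generation settings shown are illustrative, not the ones behind the reported ROUGE numbers:

```python
# Hedged summarization sketch; episode_text is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="emmyapi/distilbart-podimo-data-eval-1")
episode_text = "..."  # long podcast episode description goes here
print(summarizer(episode_text, max_length=142, min_length=56)[0]["summary_text"])
```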
6d574fb8269e13f378a6ceb3fe55391c
jonatasgrosman/exp_w2v2t_et_vp-it_s222
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['et']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'et']
false
true
true
469
false
# exp_w2v2t_et_vp-it_s222 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
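A hedged transcription sketch with the HuggingSound tool the card mentions (API as documented in its README; the audio paths are placeholders):

```python
# Transcribing Estonian audio with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_vp-it_s222")
transcriptions = model.transcribe(["clip1.mp3", "clip2.wav"])  # 16 kHz input
print(transcriptions[0]["transcription"])
```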
6cf6a09cc2c7b788f7e403da08016244
nlp04/kobart_64_5e-5_datav2_min30_lp5.0_temperature1.0
nlp04
bart
15
2
transformers
0
text2text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
994
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobart_64_5e-5_datav2_min30_lp5.0_temperature1.0 This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
36c43d92bf85519f51bd92926446f376
Apocalypse-19/Vishu-the-Cat
Apocalypse-19
null
15
476
diffusers
65
text-to-image
true
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['pytorch', 'diffusers', 'text-to-image', 'dreambooth-hackathon', 'animal']
false
true
true
1,686
false
# Dreambooth Model for Animals trained on a custom dataset This is a Stable Diffusion model fine-tuned on the animal concept with DreamBooth. It can be used by modifying the `instance_prompt`: **A photo of vishu cat** This model was created as part of the DreamBooth Hackathon 🔥. ## Description Model finetuned on pictures of our cat named Vishu, made for the DreamBooth Hackathon and finetuned on Stable Diffusion 2.1 base. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('Apocalypse-19/Vishu-the-Cat') image = pipeline('A photo of vishu cat').images[0] image ``` ## Examples Some examples of images generated with their prompts are (Guidance scale = 7.5 and Number of Inference steps = 50 for all): Prompt = A photo of vishu cat as a genshin impact character ![a photo of vishu cat as a genshin impact character, high res, infstep=50, gs=7.5.png](https://s3.amazonaws.com/moonup/production/uploads/1673376172445-6366451164bcbbd03e2fcd19.png) Prompt = A photo of vishu cat shaking hands with Donald Trump ![a photo of vishu cat shaking hands with Donald Trump, infstep=50, gs=7.5, no neg prompts.png](https://s3.amazonaws.com/moonup/production/uploads/1673376265681-6366451164bcbbd03e2fcd19.png) Prompt = A photo of vishu cat as a Disney Princess ![vishu cat as a disney princess, infstep=50, gs=7.5, seed=1024.png](https://s3.amazonaws.com/moonup/production/uploads/1673376287080-6366451164bcbbd03e2fcd19.png) Prompt = A photo of vishu cat cocking a gun ![a photo of vishu cat cocking a gun, infstep=50, gs=7.5, seed=1024.png](https://s3.amazonaws.com/moonup/production/uploads/1673376294767-6366451164bcbbd03e2fcd19.png)
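The filenames above record guidance scale 7.5, 50 inference steps, and in some cases seed 1024; a hedged sketch of reproducing one example with those settings:

```python
# Hedged seeded-generation sketch matching the settings noted in the examples.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("Apocalypse-19/Vishu-the-Cat")
generator = torch.Generator().manual_seed(1024)  # seed taken from the filenames
image = pipeline(
    "A photo of vishu cat as a Disney Princess",
    guidance_scale=7.5,
    num_inference_steps=50,
    generator=generator,
).images[0]
image.save("vishu_princess.png")
```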
9de048516e63822cc85c6d3f45954c33
aliosm/ComVE-distilgpt2
aliosm
gpt2
9
9
transformers
0
text-generation
true
false
true
mit
['en']
['ComVE']
null
0
0
0
0
0
0
0
['exbert', 'commonsense', 'semeval2020', 'comve']
false
true
true
2,464
false
# ComVE-distilgpt2 ## Model description Finetuned model on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective. The model is able to generate a reason why a given natural language statement is against commonsense. ## Intended uses & limitations You can use the raw model for text generation to generate reasons why natural language statements are against commonsense. #### How to use You can use this model directly to generate reasons why the given statement is against commonsense using the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script. *Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that makes the model repeat the last generated token again and again. #### Limitations and bias The model is usually biased toward negating the input sentence instead of producing a factual reason. ## Training data The model is initialized from the [distilgpt2](https://github.com/huggingface/transformers/blob/master/model_cards/distilgpt2-README.md) model and finetuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K statements that are against commonsense, each paired with three reference reasons. ## Training procedure Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator, then the model is finetuned using the CLM objective. The model was trained on an Nvidia Tesla P100 GPU on the Google Colab platform with a 5e-5 learning rate, 15 epochs, a maximum sequence length of 128 and a batch size of 64. <center> <img src="https://i.imgur.com/xKbrwBC.png"> </center> ## Eval results The model achieved 13.7582/13.8026 BLEU scores on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing datasets. ### BibTeX entry and citation info ```bibtex @article{fadel2020justers, title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation}, author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik}, year={2020} } ``` <a href="https://huggingface.co/exbert/?model=aliosm/ComVE-distilgpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
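To make the training format concrete, a hedged generation sketch using the `<|continue|>` separator described above. Note that it uses the current `transformers` API rather than the `2.4.1` + `generate.sh` route the card recommends, so the repeated-token issue it warns about may apply; the example statement is an illustrative ComVE-style one, not taken from the dataset:

```python
# Hedged sketch of prompting with the '<|continue|>' training separator.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aliosm/ComVE-distilgpt2")
model = AutoModelForCausalLM.from_pretrained("aliosm/ComVE-distilgpt2")

prompt = "He put an elephant into the fridge.<|continue|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(input_ids, max_length=64, do_sample=False)
print(tokenizer.decode(out[0][input_ids.size(1):], skip_special_tokens=True))
```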
a5932900b085fd715c6b61c5fd884ce0
PrimeQA/open-nq-colbert-xlmr-large
PrimeQA
bert
9
4
transformers
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,217
false
# Model Description This is a retriever model based on ColBERT v2 with the [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) language model.<br> This model was trained with the OpenNQ data.<br> The architecture of the model and its hyperparameters are described in the paper 'Relevance-guided Supervision for OpenQA with ColBERT'. ## Intended uses & limitations This model uses the xlm-roberta-large LM. Biases associated with the pre-trained language model we used may be present in this ColBERT v2 model. ## Usage This model can be used with [PrimeQA](https://github.com/primeqa/primeqa)'s [ColBERT](https://github.com/primeqa/primeqa/blob/main/primeqa/ir/README.md) engine. ## BibTeX entry and citation info ```bibtex @article{Khattab2021RelevanceguidedSF, title={Relevance-guided Supervision for OpenQA with ColBERT}, author={O. Khattab and Christopher Potts and Matei A. Zaharia}, journal={Transactions of the Association for Computational Linguistics}, year={2021}, } ``` ```bibtex @article{Lee2019LatentRF, title={Latent Retrieval for Weakly Supervised Open Domain Question Answering}, author={Kenton Lee and Ming-Wei Chang and Kristina Toutanova}, journal={ACL}, year={2019} } ```
abf3b23363a8ed6cd3f233d1c008ac2c
ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
ali2066
bert
13
10
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,739
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2811 - Precision: 0.3231 - Recall: 0.5151 - F1: 0.3971 - Accuracy: 0.8913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 | | No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 | | No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 | | No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 | | No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
18f951dcffd8d88164676d434f01cb77
Helsinki-NLP/opus-mt-bi-sv
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-bi-sv * source languages: bi * target languages: sv * OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bi.sv | 22.7 | 0.403 |
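A hedged inference sketch (not part of the original card); the Bislama input is a placeholder phrase:

```python
# Bislama-to-Swedish translation via the transformers pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-bi-sv")
print(translator("Halo, olsem wanem?")[0]["translation_text"])
```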
401e0928a24a4aac0bcc8a1fa07e30ff
flax-community/gpt-neo-125M-apps
flax-community
gpt_neo
12
51
transformers
0
text-generation
true
false
true
mit
['en', 'python']
['apps']
null
0
0
0
0
0
0
0
['gpt_neo', 'code_synthesis']
false
true
true
4,833
false
# GPT-Neo-125M-APPS > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** ## Model Description GPT-Neo-125M-APPS is a GPT-Neo-125M model finetuned on the APPS dataset. This model is specialized to solve programming tasks. ## Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. ## Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using the AdamW optimizer and a linear-decay learning rate schedule with 800 warmup steps. To reproduce the training, one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-125M-apps \ --model_name_or_path EleutherAI/gpt-neo-125M \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="16" \ --per_device_eval_batch_size="16" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 2 \ ``` ## Intended Use and Limitations The model is finetuned to solve programming problems given a text description and optional starter code. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-apps") tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-apps") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` ### Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**. 1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not necessarily so. Not properly evaluating the generated code may have negative consequences, such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model. 2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software. 3. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset. GPT-CC is a finetuned GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details. ## Eval results Coming soon...
e9c5391fdaa32f762ffb4aa25b87eb67
hyorea1/KoT5-test-add-data-from5ep
hyorea1
t5
11
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,307
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KoT5-test-add-data-from5ep This model is a fine-tuned version of [hyorea1/KoT5-test](https://huggingface.co/hyorea1/KoT5-test) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1737 - Rouge1: 11.8294 - Rouge2: 3.2314 - Rougel: 11.7891 - Rougelsum: 11.8237 - Gen Len: 35.2824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 100 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 1.9029 | 0.16 | 400 | 1.1695 | 12.8243 | 3.2659 | 12.7542 | 12.8276 | 35.5743 | | 1.7971 | 0.32 | 800 | 1.1646 | 12.259 | 3.0668 | 12.1254 | 12.1927 | 35.2353 | | 1.4396 | 0.48 | 1200 | 1.1681 | 12.1151 | 3.1908 | 11.9507 | 12.0305 | 35.3125 | | 1.0945 | 0.64 | 1600 | 1.1703 | 12.0576 | 2.9688 | 11.9292 | 11.9792 | 35.0926 | | 1.1924 | 0.8 | 2000 | 1.1667 | 11.7835 | 2.9605 | 11.6755 | 11.7318 | 35.3596 | | 1.3711 | 0.97 | 2400 | 1.1668 | 11.9873 | 3.1107 | 11.9369 | 12.0207 | 34.5309 | | 1.6031 | 1.13 | 2800 | 1.1673 | 11.6049 | 3.1121 | 11.5527 | 11.5976 | 34.6551 | | 1.5254 | 1.29 | 3200 | 1.1693 | 11.6803 | 2.8527 | 11.6116 | 11.6829 | 34.8066 | | 1.641 | 1.45 | 3600 | 1.1737 | 11.8294 | 3.2314 | 11.7891 | 11.8237 | 35.2824 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
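A hedged usage sketch (not from the card): the checkpoint is a T5 seq2seq model, so it can be called through the text2text-generation pipeline; the Korean input is a placeholder:

```python
# Hedged text2text inference sketch.
from transformers import pipeline

generator = pipeline("text2text-generation", model="hyorea1/KoT5-test-add-data-from5ep")
document = "요약할 긴 한국어 문서를 여기에 넣습니다."  # placeholder document
print(generator(document, max_length=64)[0]["generated_text"])
```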
a0a7eb325240da065672cefa1b3e7f84
aemami1/distilbert-base-uncased-finetuned-wnli
aemami1
distilbert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,475
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-wnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6950 - Accuracy: 0.5493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.6929 | 0.5211 | | No log | 2.0 | 80 | 0.6951 | 0.4789 | | No log | 3.0 | 120 | 0.6950 | 0.5493 | | No log | 4.0 | 160 | 0.6966 | 0.5352 | | No log | 5.0 | 200 | 0.6966 | 0.5352 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
2ce6fe602248fceb3fc3bd1b7abca19e
jonatasgrosman/exp_w2v2t_ar_unispeech-sat_s504
jonatasgrosman
unispeech-sat
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ar']
false
true
true
463
false
# exp_w2v2t_ar_unispeech-sat_s504 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
a5a794d3ea4908b0511aabc494f0482c
Elron/deberta-v3-large-sentiment
Elron
deberta-v2
16
4
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
10,747
false
# deberta-v3-large-sentiment This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset. ## Model description Test set results: | Model | Emotion | Hate | Irony | Offensive | Sentiment | | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | | deberta-v3-large | **86.3** | **61.3** | **87.1** | **86.4** | **73.9** | | BERTweet | 79.3 | - | 82.1 | 79.5 | 73.4 | | RoB-RT | 79.5 | 52.3 | 61.7 | 80.5 | 69.3 | [source:papers_with_code](https://paperswithcode.com/sota/sentiment-analysis-on-tweeteval) ## Intended uses & limitations Classifying attributes of interest in Twitter-like data. ## Training and evaluation data [tweet_eval](https://huggingface.co/datasets/tweet_eval) dataset. ## Training procedure Fine-tuned and evaluated with [run_glue.py]() ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0614 | 0.07 | 100 | 1.0196 | 0.4345 | | 0.8601 | 0.14 | 200 | 0.7561 | 0.6460 | | 0.734 | 0.21 | 300 | 0.6796 | 0.6955 | | 0.6753 | 0.28 | 400 | 0.6521 | 0.7000 | | 0.6408 | 0.35 | 500 | 0.6119 | 0.7440 | | 0.5991 | 0.42 | 600 | 0.6034 | 0.7370 | | 0.6069 | 0.49 | 700 | 0.5976 | 0.7375 | | 0.6122 | 0.56 | 800 | 0.5871 | 0.7425 | | 0.5908 | 0.63 | 900 | 0.5935 | 0.7445 | | 0.5884 | 0.7 | 1000 | 0.5792 | 0.7520 | | 0.5839 | 0.77 | 1100 | 0.5780 | 0.7555 | | 0.5772 | 0.84 | 1200 | 0.5727 | 0.7570 | | 0.5895 | 0.91 | 1300 | 0.5601 | 0.7550 | | 0.5757 | 0.98 | 1400 | 0.5613 | 0.7525 | | 0.5121 | 1.05 | 1500 | 0.5867 | 0.7600 | | 0.5254 | 1.12 | 1600 | 0.5595 | 0.7630 | | 0.5074 | 1.19 | 1700 | 0.5594 | 0.7585 | | 0.4947 | 1.26 | 1800 | 0.5697 | 0.7575 | | 0.5019 | 1.33 | 1900 | 0.5665 | 0.7580 | | 0.5005 | 1.4 | 2000 | 0.5484 | 0.7655 | | 0.5125 | 1.47 | 2100 | 0.5626 | 0.7605 | | 0.5241 | 1.54 | 2200 | 0.5561 | 0.7560 | | 0.5198 | 1.61 | 2300 | 0.5602 | 0.7600 | | 0.5124 | 1.68 | 2400 | 0.5654 | 0.7490 | | 0.5096 | 1.75 | 2500 | 0.5803 | 0.7515 | | 0.4885 | 1.82 | 2600 | 0.5889 | 0.75 | | 0.5111 | 1.89 | 2700 | 0.5508 | 0.7665 | | 0.4868 | 1.96 | 2800 | 0.5621 | 0.7635 | | 0.4599 | 2.04 | 2900 | 0.5995 | 0.7615 | | 0.4147 | 2.11 | 3000 | 0.6202 | 0.7530 | | 0.4233 | 2.18 | 3100 | 0.5875 | 0.7625 | | 0.4324 | 2.25 | 3200 | 0.5794 | 0.7610 | | 0.4141 | 2.32 | 3300 | 0.5902 | 0.7460 | | 0.4306 | 2.39 | 3400 | 0.6053 | 0.7545 | | 0.4266 | 2.46 | 3500 | 0.5979 | 0.7570 | | 0.4227 | 2.53 | 3600 | 0.5920 | 0.7650 | | 0.4226 | 2.6 | 3700 | 0.6166 | 0.7455 | | 0.3978 | 2.67 | 3800 | 0.6126 | 0.7560 | | 0.3954 | 2.74 | 3900 | 0.6152 | 0.7550 | | 0.4209 | 2.81 | 4000 | 0.5980 | 0.75 | | 0.3982 | 2.88 | 4100 | 0.6096 | 0.7490 | | 0.4016 | 2.95 | 4200 | 0.6541 | 0.7425 | | 0.3966 | 3.02 | 4300 | 0.6377 | 0.7545 | | 0.3074 | 3.09 | 4400 | 0.6860 | 0.75 | | 0.3551 | 3.16 | 4500 | 0.6160 | 0.7550 | | 0.3323 | 3.23 | 4600 | 0.6714 | 0.7520 | | 0.3171 | 3.3 | 4700 | 0.6538 | 0.7535 | | 0.3403 | 3.37 | 4800 | 0.6774 | 0.7465 | | 0.3396 | 3.44 | 4900 | 0.6726 | 0.7465 | | 0.3259 | 3.51 | 5000 | 
0.6465 | 0.7480 | | 0.3392 | 3.58 | 5100 | 0.6860 | 0.7460 | | 0.3251 | 3.65 | 5200 | 0.6697 | 0.7495 | | 0.3253 | 3.72 | 5300 | 0.6770 | 0.7430 | | 0.3455 | 3.79 | 5400 | 0.7177 | 0.7360 | | 0.3323 | 3.86 | 5500 | 0.6943 | 0.7400 | | 0.3335 | 3.93 | 5600 | 0.6507 | 0.7555 | | 0.3368 | 4.0 | 5700 | 0.6580 | 0.7485 | | 0.2479 | 4.07 | 5800 | 0.7667 | 0.7430 | | 0.2613 | 4.14 | 5900 | 0.7513 | 0.7505 | | 0.2557 | 4.21 | 6000 | 0.7927 | 0.7485 | | 0.243 | 4.28 | 6100 | 0.7792 | 0.7450 | | 0.2473 | 4.35 | 6200 | 0.8107 | 0.7355 | | 0.2447 | 4.42 | 6300 | 0.7851 | 0.7370 | | 0.2515 | 4.49 | 6400 | 0.7529 | 0.7465 | | 0.274 | 4.56 | 6500 | 0.7390 | 0.7465 | | 0.2674 | 4.63 | 6600 | 0.7658 | 0.7460 | | 0.2416 | 4.7 | 6700 | 0.7915 | 0.7485 | | 0.2432 | 4.77 | 6800 | 0.7989 | 0.7435 | | 0.2595 | 4.84 | 6900 | 0.7850 | 0.7380 | | 0.2736 | 4.91 | 7000 | 0.7577 | 0.7395 | | 0.2783 | 4.98 | 7100 | 0.7650 | 0.7405 | | 0.2304 | 5.05 | 7200 | 0.8542 | 0.7385 | | 0.1937 | 5.12 | 7300 | 0.8390 | 0.7345 | | 0.1878 | 5.19 | 7400 | 0.9150 | 0.7330 | | 0.1921 | 5.26 | 7500 | 0.8792 | 0.7405 | | 0.1916 | 5.33 | 7600 | 0.8892 | 0.7410 | | 0.2011 | 5.4 | 7700 | 0.9012 | 0.7325 | | 0.211 | 5.47 | 7800 | 0.8608 | 0.7420 | | 0.2194 | 5.54 | 7900 | 0.8852 | 0.7320 | | 0.205 | 5.61 | 8000 | 0.8803 | 0.7385 | | 0.1981 | 5.68 | 8100 | 0.8681 | 0.7330 | | 0.1908 | 5.75 | 8200 | 0.9020 | 0.7435 | | 0.1942 | 5.82 | 8300 | 0.8780 | 0.7410 | | 0.1958 | 5.89 | 8400 | 0.8937 | 0.7345 | | 0.1883 | 5.96 | 8500 | 0.9121 | 0.7360 | | 0.1819 | 6.04 | 8600 | 0.9409 | 0.7430 | | 0.145 | 6.11 | 8700 | 1.1390 | 0.7265 | | 0.1696 | 6.18 | 8800 | 0.9189 | 0.7430 | | 0.1488 | 6.25 | 8900 | 0.9718 | 0.7400 | | 0.1637 | 6.32 | 9000 | 0.9702 | 0.7450 | | 0.1547 | 6.39 | 9100 | 1.0033 | 0.7410 | | 0.1605 | 6.46 | 9200 | 0.9973 | 0.7355 | | 0.1552 | 6.53 | 9300 | 1.0491 | 0.7290 | | 0.1731 | 6.6 | 9400 | 1.0271 | 0.7335 | | 0.1738 | 6.67 | 9500 | 0.9575 | 0.7430 | | 0.1669 | 6.74 | 9600 | 0.9614 | 0.7350 | | 0.1347 | 6.81 | 9700 | 1.0263 | 0.7365 | | 0.1593 | 6.88 | 9800 | 1.0173 | 0.7360 | | 0.1549 | 6.95 | 9900 | 1.0398 | 0.7350 | | 0.1675 | 7.02 | 10000 | 0.9975 | 0.7380 | | 0.1182 | 7.09 | 10100 | 1.1059 | 0.7350 | | 0.1351 | 7.16 | 10200 | 1.0933 | 0.7400 | | 0.1496 | 7.23 | 10300 | 1.0731 | 0.7355 | | 0.1197 | 7.3 | 10400 | 1.1089 | 0.7360 | | 0.1111 | 7.37 | 10500 | 1.1381 | 0.7405 | | 0.1494 | 7.44 | 10600 | 1.0252 | 0.7425 | | 0.1235 | 7.51 | 10700 | 1.0906 | 0.7360 | | 0.133 | 7.58 | 10800 | 1.1796 | 0.7375 | | 0.1248 | 7.65 | 10900 | 1.1332 | 0.7420 | | 0.1268 | 7.72 | 11000 | 1.1304 | 0.7415 | | 0.1368 | 7.79 | 11100 | 1.1345 | 0.7380 | | 0.1228 | 7.86 | 11200 | 1.2018 | 0.7320 | | 0.1281 | 7.93 | 11300 | 1.1884 | 0.7350 | | 0.1449 | 8.0 | 11400 | 1.1571 | 0.7345 | | 0.1025 | 8.07 | 11500 | 1.1538 | 0.7345 | | 0.1199 | 8.14 | 11600 | 1.2113 | 0.7390 | | 0.1016 | 8.21 | 11700 | 1.2882 | 0.7370 | | 0.114 | 8.28 | 11800 | 1.2872 | 0.7390 | | 0.1019 | 8.35 | 11900 | 1.2876 | 0.7380 | | 0.1142 | 8.42 | 12000 | 1.2791 | 0.7385 | | 0.1135 | 8.49 | 12100 | 1.2883 | 0.7380 | | 0.1139 | 8.56 | 12200 | 1.2829 | 0.7360 | | 0.1107 | 8.63 | 12300 | 1.2698 | 0.7365 | | 0.1183 | 8.7 | 12400 | 1.2660 | 0.7345 | | 0.1064 | 8.77 | 12500 | 1.2889 | 0.7365 | | 0.0895 | 8.84 | 12600 | 1.3480 | 0.7330 | | 0.1244 | 8.91 | 12700 | 1.2872 | 0.7325 | | 0.1209 | 8.98 | 12800 | 1.2681 | 0.7375 | | 0.1144 | 9.05 | 12900 | 1.2711 | 0.7370 | | 0.1034 | 9.12 | 13000 | 1.2801 | 0.7360 | | 0.113 | 9.19 | 13100 | 1.2801 | 0.7350 | | 0.0994 | 9.26 | 13200 | 1.2920 | 
0.7360 | | 0.0966 | 9.33 | 13300 | 1.2761 | 0.7335 | | 0.0939 | 9.4 | 13400 | 1.2909 | 0.7365 | | 0.0975 | 9.47 | 13500 | 1.2953 | 0.7360 | | 0.0842 | 9.54 | 13600 | 1.3179 | 0.7335 | | 0.0871 | 9.61 | 13700 | 1.3149 | 0.7385 | | 0.1162 | 9.68 | 13800 | 1.3124 | 0.7350 | | 0.085 | 9.75 | 13900 | 1.3207 | 0.7355 | | 0.0966 | 9.82 | 14000 | 1.3248 | 0.7335 | | 0.1064 | 9.89 | 14100 | 1.3261 | 0.7335 | | 0.1046 | 9.96 | 14200 | 1.3255 | 0.7360 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.9.0 - Datasets 2.2.2 - Tokenizers 0.11.6
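As a usage illustration (not part of the card), a tweet can be scored with the text-classification pipeline; tweet_eval's sentiment task has three classes (negative/neutral/positive), though the exact label strings depend on this repo's config:

```python
# Hedged sentiment-scoring sketch.
from transformers import pipeline

classifier = pipeline("text-classification", model="Elron/deberta-v3-large-sentiment")
print(classifier("I love the new update, everything feels faster!"))
```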
70213760c0d9050af1b75b5637db4d73
ozioh/trainer_log
ozioh
bert
8
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
7,509
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trainer_log This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4907 - Accuracy: 0.8742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.047 | 0.04 | 5 | 0.9927 | 0.5753 | | 0.938 | 0.08 | 10 | 0.9320 | 0.5753 | | 0.8959 | 0.12 | 15 | 0.8764 | 0.5773 | | 0.8764 | 0.16 | 20 | 0.8308 | 0.6639 | | 0.7968 | 0.2 | 25 | 0.8045 | 0.6577 | | 0.8644 | 0.25 | 30 | 0.7779 | 0.6639 | | 0.7454 | 0.29 | 35 | 0.7561 | 0.6412 | | 0.7008 | 0.33 | 40 | 0.7157 | 0.6845 | | 0.7627 | 0.37 | 45 | 0.7027 | 0.6907 | | 0.7568 | 0.41 | 50 | 0.7270 | 0.6763 | | 0.7042 | 0.45 | 55 | 0.6770 | 0.7031 | | 0.6683 | 0.49 | 60 | 0.6364 | 0.7134 | | 0.6312 | 0.53 | 65 | 0.6151 | 0.7278 | | 0.5789 | 0.57 | 70 | 0.6003 | 0.7443 | | 0.5964 | 0.61 | 75 | 0.5665 | 0.7835 | | 0.5178 | 0.66 | 80 | 0.5506 | 0.8 | | 0.5698 | 0.7 | 85 | 0.5240 | 0.8 | | 0.5407 | 0.74 | 90 | 0.5223 | 0.7814 | | 0.6141 | 0.78 | 95 | 0.4689 | 0.8268 | | 0.4998 | 0.82 | 100 | 0.4491 | 0.8227 | | 0.4853 | 0.86 | 105 | 0.4448 | 0.8268 | | 0.4561 | 0.9 | 110 | 0.4646 | 0.8309 | | 0.5058 | 0.94 | 115 | 0.4317 | 0.8495 | | 0.4229 | 0.98 | 120 | 0.4014 | 0.8515 | | 0.2808 | 1.02 | 125 | 0.3834 | 0.8619 | | 0.3721 | 1.07 | 130 | 0.3829 | 0.8619 | | 0.3432 | 1.11 | 135 | 0.4212 | 0.8598 | | 0.3616 | 1.15 | 140 | 0.3930 | 0.8680 | | 0.3912 | 1.19 | 145 | 0.3793 | 0.8639 | | 0.4141 | 1.23 | 150 | 0.3646 | 0.8619 | | 0.2726 | 1.27 | 155 | 0.3609 | 0.8701 | | 0.2021 | 1.31 | 160 | 0.3640 | 0.8680 | | 0.3468 | 1.35 | 165 | 0.3655 | 0.8701 | | 0.2729 | 1.39 | 170 | 0.4054 | 0.8495 | | 0.3885 | 1.43 | 175 | 0.3559 | 0.8639 | | 0.446 | 1.48 | 180 | 0.3390 | 0.8680 | | 0.3337 | 1.52 | 185 | 0.3505 | 0.8660 | | 0.3507 | 1.56 | 190 | 0.3337 | 0.8804 | | 0.3864 | 1.6 | 195 | 0.3476 | 0.8660 | | 0.3495 | 1.64 | 200 | 0.3574 | 0.8577 | | 0.3388 | 1.68 | 205 | 0.3426 | 0.8701 | | 0.358 | 1.72 | 210 | 0.3439 | 0.8804 | | 0.1761 | 1.76 | 215 | 0.3461 | 0.8722 | | 0.3089 | 1.8 | 220 | 0.3638 | 0.8639 | | 0.279 | 1.84 | 225 | 0.3527 | 0.8742 | | 0.3468 | 1.89 | 230 | 0.3497 | 0.8619 | | 0.2969 | 1.93 | 235 | 0.3572 | 0.8598 | | 0.2719 | 1.97 | 240 | 0.3391 | 0.8804 | | 0.1936 | 2.01 | 245 | 0.3415 | 0.8619 | | 0.2475 | 2.05 | 250 | 0.3477 | 0.8784 | | 0.1759 | 2.09 | 255 | 0.3718 | 0.8660 | | 0.2443 | 2.13 | 260 | 0.3758 | 0.8619 | | 0.2189 | 2.17 | 265 | 0.3670 | 0.8639 | | 0.1505 | 2.21 | 270 | 0.3758 | 0.8722 | | 0.2283 | 2.25 | 275 | 0.3723 | 0.8722 | | 0.155 | 2.3 | 280 | 0.4442 | 0.8330 | | 0.317 | 2.34 | 285 | 0.3700 | 0.8701 | | 0.1566 | 2.38 | 290 | 0.4218 | 0.8619 | | 0.2294 | 2.42 | 295 | 0.3820 | 0.8660 | | 0.1567 | 2.46 | 300 | 0.3891 | 0.8660 | | 0.1875 | 2.5 | 305 | 0.3973 | 0.8722 | | 0.2741 | 2.54 | 310 | 0.4042 
| 0.8742 | | 0.2363 | 2.58 | 315 | 0.3777 | 0.8660 | | 0.1964 | 2.62 | 320 | 0.3891 | 0.8639 | | 0.156 | 2.66 | 325 | 0.3998 | 0.8639 | | 0.1422 | 2.7 | 330 | 0.4022 | 0.8722 | | 0.2141 | 2.75 | 335 | 0.4239 | 0.8701 | | 0.1616 | 2.79 | 340 | 0.4094 | 0.8722 | | 0.1032 | 2.83 | 345 | 0.4263 | 0.8784 | | 0.2313 | 2.87 | 350 | 0.4579 | 0.8598 | | 0.0843 | 2.91 | 355 | 0.3989 | 0.8742 | | 0.2567 | 2.95 | 360 | 0.4051 | 0.8660 | | 0.1749 | 2.99 | 365 | 0.4136 | 0.8660 | | 0.1116 | 3.03 | 370 | 0.4312 | 0.8619 | | 0.1058 | 3.07 | 375 | 0.4007 | 0.8701 | | 0.1085 | 3.11 | 380 | 0.4174 | 0.8660 | | 0.0578 | 3.16 | 385 | 0.4163 | 0.8763 | | 0.1381 | 3.2 | 390 | 0.4578 | 0.8660 | | 0.1137 | 3.24 | 395 | 0.4259 | 0.8660 | | 0.2068 | 3.28 | 400 | 0.3976 | 0.8701 | | 0.0792 | 3.32 | 405 | 0.3824 | 0.8763 | | 0.1711 | 3.36 | 410 | 0.3793 | 0.8742 | | 0.0686 | 3.4 | 415 | 0.4013 | 0.8742 | | 0.1102 | 3.44 | 420 | 0.4113 | 0.8639 | | 0.1102 | 3.48 | 425 | 0.4276 | 0.8619 | | 0.0674 | 3.52 | 430 | 0.4222 | 0.8804 | | 0.0453 | 3.57 | 435 | 0.4326 | 0.8722 | | 0.0704 | 3.61 | 440 | 0.4684 | 0.8722 | | 0.1151 | 3.65 | 445 | 0.4640 | 0.8701 | | 0.1225 | 3.69 | 450 | 0.4408 | 0.8763 | | 0.0391 | 3.73 | 455 | 0.4520 | 0.8639 | | 0.0566 | 3.77 | 460 | 0.4558 | 0.8680 | | 0.1222 | 3.81 | 465 | 0.4599 | 0.8660 | | 0.1035 | 3.85 | 470 | 0.4630 | 0.8763 | | 0.1845 | 3.89 | 475 | 0.4796 | 0.8680 | | 0.087 | 3.93 | 480 | 0.4697 | 0.8742 | | 0.1599 | 3.98 | 485 | 0.4663 | 0.8784 | | 0.0632 | 4.02 | 490 | 0.5139 | 0.8536 | | 0.1218 | 4.06 | 495 | 0.4920 | 0.8722 | | 0.0916 | 4.1 | 500 | 0.4846 | 0.8763 | | 0.0208 | 4.14 | 505 | 0.5269 | 0.8722 | | 0.0803 | 4.18 | 510 | 0.5154 | 0.8784 | | 0.1318 | 4.22 | 515 | 0.4907 | 0.8742 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
4e10514d670b474372a39a90b5c0ef86
LawalAfeez/englishreview-ds
LawalAfeez
distilbert
8
2
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
962
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # englishreview-ds This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
c4aabf75d4b7a39dc22bbdb92e83ab90
aidiary/xlm-roberta-base-finetuned-panx-de
aidiary
xlm-roberta
10
7
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1375 - F1: 0.8587 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2584 | 1.0 | 525 | 0.1682 | 0.8242 | | 0.1299 | 2.0 | 1050 | 0.1354 | 0.8447 | | 0.0822 | 3.0 | 1575 | 0.1375 | 0.8587 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
5772fe4bb6ad21e600878ae5ed6b54fc
Pro0100Hy6/test_trainer
Pro0100Hy6
bert
6
4
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,203
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7773 - Accuracy: 0.6375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7753 | 1.0 | 400 | 0.7773 | 0.6375 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
84618b32b573b4b8e7e3246b2c4336ed
hackathon-pln-es/Detect-Acoso-Twitter-Es
hackathon-pln-es
roberta
22
14
transformers
4
text-classification
true
false
false
apache-2.0
['es']
['hackathon-pln-es/Dataset-Acoso-Twitter-Es']
null
0
0
0
0
0
0
0
['generated_from_trainer', 'es', 'text-classification', 'acoso', 'twitter', 'cyberbullying']
true
true
true
1,725
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Harassment detection on Spanish Twitter (Detección de acoso en Twitter) This model is a fine-tuned version of [mrm8488/distilroberta-finetuned-tweets-hate-speech](https://huggingface.co/mrm8488/distilroberta-finetuned-tweets-hate-speech) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1628 - Accuracy: 0.9167 # UNL: Universidad Nacional de Loja ## Team members: - Anderson Quizhpe <br> - Luis Negrón <br> - David Pacheco <br> - Bryan Requenes <br> - Paul Pasaca ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6732 | 1.0 | 27 | 0.3797 | 0.875 | | 0.5537 | 2.0 | 54 | 0.3242 | 0.9167 | | 0.5218 | 3.0 | 81 | 0.2879 | 0.9167 | | 0.509 | 4.0 | 108 | 0.2606 | 0.9167 | | 0.4196 | 5.0 | 135 | 0.1628 | 0.9167 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
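A hedged inference sketch (not from the card); the Spanish example tweet is invented and the label names come from this repo's config:

```python
# Hedged harassment-detection sketch for Spanish tweets.
from transformers import pipeline

detector = pipeline("text-classification", model="hackathon-pln-es/Detect-Acoso-Twitter-Es")
print(detector("Nadie te quiere aquí, vete."))
```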
9437aeb48b9c9577f1124cf6d438053a
icon-it-tdtu/mt-vi-en-optimum
icon-it-tdtu
marian
9
26
transformers
1
translation
false
false
false
apache-2.0
['vi', 'en']
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
578
false
# MarianMT exported to the ONNX format

## Install Optimum

```bash
pip install optimum
```

## Usage example

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("icon-it-tdtu/mt-vi-en-optimum")
model = ORTModelForSeq2SeqLM.from_pretrained("icon-it-tdtu/mt-vi-en-optimum")

text = "Tôi là một sinh viên."
inputs = tokenizer(text, return_tensors='pt')
outputs = model.generate(**inputs)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
# I am a student.
```
761f01bba573f8b6be7d0bd7f842583a
vikram15/bert-finetuned-ner
vikram15
bert
12
3
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,518
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Precision: 0.9310
- Recall: 0.9488
- F1: 0.9398
- Accuracy: 0.9862

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0911        | 1.0   | 1756 | 0.0702          | 0.9197    | 0.9345 | 0.9270 | 0.9826   |
| 0.0336        | 2.0   | 3512 | 0.0623          | 0.9294    | 0.9480 | 0.9386 | 0.9864   |
| 0.0174        | 3.0   | 5268 | 0.0630          | 0.9310    | 0.9488 | 0.9398 | 0.9862   |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
217d03d0685a2d5d9f7f8df89b6cb83a
BiggieW/classification_chnsenticorp_aug
BiggieW
bert
14
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,352
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# classification_chnsenticorp_aug

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3776
- Accuracy: 0.85

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4438        | 1.0   | 20   | 0.5145          | 0.75     |
| 0.0666        | 2.0   | 40   | 0.4066          | 0.9      |
| 0.0208        | 3.0   | 60   | 0.3776          | 0.85     |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
0ee43656a30ddde0333ea1d3441bf132
SauravMaheshkar/clr-finetuned-bert-large-uncased
SauravMaheshkar
bert
7
8
transformers
0
fill-mask
true
false
false
cc0-1.0
null
['Commonlit-Readibility']
null
0
0
0
0
0
0
0
['kaggle']
false
true
true
1,457
false
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true)

# FineTuning

| **Architecture** | **Weights** | **Training Loss** | **Validation Loss** |
|:-----------------------:|:---------------:|:----------------:|:----------------------:|
| roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** |
| bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 |
| albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 |
| xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 |
| bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 |
| albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 |
| roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
5a5156ceaae31d747cd244c8ad44a10d
henryscheible/eval_masked_102_mrpc
henryscheible
null
13
0
null
0
null
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,049
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eval_masked_102_mrpc

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5646
- Accuracy: 0.8113
- F1: 0.8702
- Combined Score: 0.8407

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

### Framework versions

- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
b26d25897135fa275130a44529560131
sutd-ai/distilbert-base-uncased-finetuned-squad
sutd-ai
distilbert
12
2
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,287
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5027

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2343        | 1.0   | 8235  | 1.3121          |
| 0.9657        | 2.0   | 16470 | 1.2259          |
| 0.7693        | 3.0   | 24705 | 1.5027          |

### Framework versions

- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
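A minimal question-answering sketch using the `transformers` pipeline (the question and context are invented examples):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="sutd-ai/distilbert-base-uncased-finetuned-squad")

# returns the answer span extracted from the context, with a confidence score
result = qa(
    question="Where do giant pandas live?",
    context="The giant panda is a bear species endemic to China.",
)
print(result["answer"], result["score"])
```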
312602b105ed1f1165f6f1c0a538e6bc
kaipo-chang/distilbert-base-uncased-finetuned-squad
kaipo-chang
distilbert
12
6
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
928
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Framework versions

- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
970f29cd9d8e89c58c4055f74b31acec
i-am-holmes/vit-base-patch16-224-finetuned-flower
i-am-holmes
vit
7
10
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
964
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned-flower

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
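A minimal inference sketch (the image path below is a placeholder; the pipeline also accepts URLs and PIL images):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="i-am-holmes/vit-base-patch16-224-finetuned-flower",
)

# placeholder path; prints the top predicted flower classes with scores
print(classifier("flower.jpg"))
```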
8f7103656e8c157905f7feebd5956e16
Prajeevan/malavika2
Prajeevan
null
36
5
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
1,924
false
### malavika2 Dreambooth model trained by Prajeevan with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model

You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: malavika2 (use that in your prompt)

![malavika2 0](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%281%29.jpg)
![malavika2 1](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%282%29.jpg)
![malavika2 2](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%283%29.jpg)
![malavika2 3](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%284%29.jpg)
![malavika2 4](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%285%29.jpg)
![malavika2 5](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%286%29.jpg)
![malavika2 6](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%287%29.jpg)
![malavika2 7](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%288%29.jpg)
![malavika2 8](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%289%29.jpg)
![malavika2 9](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%2810%29.jpg)
![malavika2 10](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%2811%29.jpg)
![malavika2 11](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%2812%29.jpg)
![malavika2 12](https://huggingface.co/Prajeevan/malavika2/resolve/main/concept_images/malavika2_%2813%29.jpg)
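A minimal local-inference sketch with `diffusers` (assuming a CUDA GPU is available; the prompt below is only an illustration):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Prajeevan/malavika2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "malavika2" is the concept token this checkpoint was trained on
image = pipe("a portrait photo of malavika2, golden hour lighting").images[0]
image.save("malavika2.png")
```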
898eabf5f7823aac3ecf2b87804c3191
linydub/bart-large-samsum
linydub
bart
18
1,924
transformers
9
summarization
true
false
false
apache-2.0
['en']
['samsum']
null
0
0
0
0
0
0
0
['summarization', 'azureml', 'azure', 'codecarbon', 'bart']
true
true
true
4,296
false
## `bart-large-samsum`

This model was trained using Microsoft's [`Azure Machine Learning Service`](https://azure.microsoft.com/en-us/services/machine-learning). It was fine-tuned on the [`samsum`](https://huggingface.co/datasets/samsum) corpus from the [`facebook/bart-large`](https://huggingface.co/facebook/bart-large) checkpoint.

## Usage (Inference)

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="linydub/bart-large-samsum")

input_text = '''
  Henry: Hey, is Nate coming over to watch the movie tonight?
  Kevin: Yea, he said he'll be arriving a bit later at around 7 since he gets off of work at 6. Have you taken out the garbage yet?
  Henry: Oh I forgot. I'll do that once I'm finished with my assignment for my math class.
  Kevin: Yea, you should take it out as soon as possible. And also, Nate is bringing his girlfriend.
  Henry: Nice, I'm really looking forward to seeing them again.
'''
summarizer(input_text)
```

## Fine-tune on AzureML

[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Flinydub%2Fazureml-greenai-txtsum%2Fmain%2F.cloud%2Ftemplate-hub%2Flinydub%2Farm-bart-large-samsum.json) [![Visualize](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/visualizebutton.svg?sanitize=true)](http://armviz.io/#/?load=https://raw.githubusercontent.com/linydub/azureml-greenai-txtsum/main/.cloud/template-hub/linydub/arm-bart-large-samsum.json)

More information about the fine-tuning process (including samples and benchmarks):
**[Preview]** https://github.com/linydub/azureml-greenai-txtsum

## Resource Usage

These results were retrieved from [`Azure Monitor Metrics`](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics). All experiments were run on AzureML low-priority compute clusters.

| Key | Value |
| --- | ----- |
| Region | US West 2 |
| AzureML Compute SKU | STANDARD_ND40RS_V2 |
| Compute SKU GPU Device | 8 x NVIDIA V100 32GB (NVLink) |
| Compute Node Count | 1 |
| Run Duration | 6m 48s |
| Compute Cost (Dedicated/LowPriority) | $2.50 / $0.50 USD |
| Average CPU Utilization | 47.9% |
| Average GPU Utilization | 69.8% |
| Average GPU Memory Usage | 25.71 GB |
| Total GPU Energy Usage | 370.84 kJ |

*Compute cost ($) is estimated from the run duration, number of compute nodes utilized, and SKU's price per hour. Updated SKU pricing can be found [here](https://azure.microsoft.com/en-us/pricing/details/machine-learning).

### Carbon Emissions

These results were obtained using [`CodeCarbon`](https://github.com/mlco2/codecarbon). The carbon emissions are estimated from training runtime only (excl. setup and evaluation runtimes).

| Key | Value |
| --- | ----- |
| timestamp | 2021-09-16T23:54:25 |
| duration | 263.2430217266083 |
| emissions | 0.029715544634717518 |
| energy_consumed | 0.09985062041235725 |
| country_name | USA |
| region | Washington |
| cloud_provider | azure |
| cloud_region | westus2 |

## Hyperparameters

- max_source_length: 512
- max_target_length: 90
- fp16: True
- seed: 1
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 1
- learning_rate: 5e-5
- num_train_epochs: 3.0
- weight_decay: 0.1

## Results

| ROUGE | Score |
| ----- | ----- |
| eval_rouge1 | 55.0234 |
| eval_rouge2 | 29.6005 |
| eval_rougeL | 44.914 |
| eval_rougeLsum | 50.464 |
| predict_rouge1 | 53.4345 |
| predict_rouge2 | 28.7445 |
| predict_rougeL | 44.1848 |
| predict_rougeLsum | 49.1874 |

| Metric | Value |
| ------ | ----- |
| epoch | 3.0 |
| eval_gen_len | 30.6027 |
| eval_loss | 1.4327096939086914 |
| eval_runtime | 22.9127 |
| eval_samples | 818 |
| eval_samples_per_second | 35.701 |
| eval_steps_per_second | 0.306 |
| predict_gen_len | 30.4835 |
| predict_loss | 1.4501988887786865 |
| predict_runtime | 26.0269 |
| predict_samples | 819 |
| predict_samples_per_second | 31.467 |
| predict_steps_per_second | 0.269 |
| train_loss | 1.2014821151207233 |
| train_runtime | 263.3678 |
| train_samples | 14732 |
| train_samples_per_second | 167.811 |
| train_steps_per_second | 1.321 |
| total_steps | 348 |
| total_flops | 4.26008990669865e+16 |
bfd68ba83456c9a3f08edaaca797c9a7
regisss/distilbert_xnli
regisss
distilbert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['xnli']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
950
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_xnli

This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the xnli dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0

### Training results

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.12.1
fa265a9cbc165cd3eec6a7ffec4ef501
birgermoell/wav2vec2-liepa-1-percent
birgermoell
wav2vec2
11
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['lt']
null
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
true
true
true
5,153
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-liepa-1-percent

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - LT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5774
- Wer: 0.5079

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 0.23  | 100  | 3.3596          | 1.0    |
| No log        | 0.46  | 200  | 2.9280          | 1.0    |
| No log        | 0.69  | 300  | 1.5091          | 0.9650 |
| No log        | 0.93  | 400  | 0.9943          | 0.9177 |
| 3.1184        | 1.16  | 500  | 0.7590          | 0.7793 |
| 3.1184        | 1.39  | 600  | 0.7336          | 0.7408 |
| 3.1184        | 1.62  | 700  | 0.7040          | 0.7618 |
| 3.1184        | 1.85  | 800  | 0.6815          | 0.7233 |
| 3.1184        | 2.08  | 900  | 0.6457          | 0.6865 |
| 0.7917        | 2.31  | 1000 | 0.5705          | 0.6813 |
| 0.7917        | 2.55  | 1100 | 0.5708          | 0.6620 |
| 0.7917        | 2.78  | 1200 | 0.5888          | 0.6462 |
| 0.7917        | 3.01  | 1300 | 0.6509          | 0.6970 |
| 0.7917        | 3.24  | 1400 | 0.5871          | 0.6462 |
| 0.5909        | 3.47  | 1500 | 0.6199          | 0.6813 |
| 0.5909        | 3.7   | 1600 | 0.6230          | 0.5919 |
| 0.5909        | 3.94  | 1700 | 0.5721          | 0.6427 |
| 0.5909        | 4.17  | 1800 | 0.5331          | 0.5867 |
| 0.5909        | 4.4   | 1900 | 0.5561          | 0.6007 |
| 0.4607        | 4.63  | 2000 | 0.5414          | 0.5849 |
| 0.4607        | 4.86  | 2100 | 0.5390          | 0.5587 |
| 0.4607        | 5.09  | 2200 | 0.5313          | 0.5569 |
| 0.4607        | 5.32  | 2300 | 0.5893          | 0.5797 |
| 0.4607        | 5.56  | 2400 | 0.5507          | 0.5954 |
| 0.3933        | 5.79  | 2500 | 0.5521          | 0.6025 |
| 0.3933        | 6.02  | 2600 | 0.5663          | 0.5989 |
| 0.3933        | 6.25  | 2700 | 0.5636          | 0.5832 |
| 0.3933        | 6.48  | 2800 | 0.5464          | 0.5919 |
| 0.3933        | 6.71  | 2900 | 0.5623          | 0.5832 |
| 0.3367        | 6.94  | 3000 | 0.5324          | 0.5692 |
| 0.3367        | 7.18  | 3100 | 0.5907          | 0.5394 |
| 0.3367        | 7.41  | 3200 | 0.5653          | 0.5814 |
| 0.3367        | 7.64  | 3300 | 0.5707          | 0.5814 |
| 0.3367        | 7.87  | 3400 | 0.5754          | 0.5429 |
| 0.2856        | 8.1   | 3500 | 0.5953          | 0.5569 |
| 0.2856        | 8.33  | 3600 | 0.6275          | 0.5394 |
| 0.2856        | 8.56  | 3700 | 0.6253          | 0.5569 |
| 0.2856        | 8.8   | 3800 | 0.5930          | 0.5429 |
| 0.2856        | 9.03  | 3900 | 0.6082          | 0.5219 |
| 0.2522        | 9.26  | 4000 | 0.6026          | 0.5447 |
| 0.2522        | 9.49  | 4100 | 0.6052          | 0.5271 |
| 0.2522        | 9.72  | 4200 | 0.5871          | 0.5219 |
| 0.2522        | 9.95  | 4300 | 0.5870          | 0.5236 |
| 0.2522        | 10.19 | 4400 | 0.5881          | 0.5131 |
| 0.2167        | 10.42 | 4500 | 0.6122          | 0.5289 |
| 0.2167        | 10.65 | 4600 | 0.6128          | 0.5166 |
| 0.2167        | 10.88 | 4700 | 0.6135          | 0.5377 |
| 0.2167        | 11.11 | 4800 | 0.6055          | 0.5184 |
| 0.2167        | 11.34 | 4900 | 0.6725          | 0.5569 |
| 0.1965        | 11.57 | 5000 | 0.6482          | 0.5429 |
| 0.1965        | 11.81 | 5100 | 0.6037          | 0.5096 |
| 0.1965        | 12.04 | 5200 | 0.5931          | 0.5131 |
| 0.1965        | 12.27 | 5300 | 0.5853          | 0.5114 |
| 0.1965        | 12.5  | 5400 | 0.5798          | 0.5219 |
| 0.172         | 12.73 | 5500 | 0.5775          | 0.5009 |
| 0.172         | 12.96 | 5600 | 0.5782          | 0.5044 |
| 0.172         | 13.19 | 5700 | 0.5804          | 0.5184 |
| 0.172         | 13.43 | 5800 | 0.5977          | 0.5219 |
| 0.172         | 13.66 | 5900 | 0.6069          | 0.5236 |
| 0.1622        | 13.89 | 6000 | 0.5850          | 0.5131 |
| 0.1622        | 14.12 | 6100 | 0.5758          | 0.5096 |
| 0.1622        | 14.35 | 6200 | 0.5752          | 0.5009 |
| 0.1622        | 14.58 | 6300 | 0.5727          | 0.5184 |
| 0.1622        | 14.81 | 6400 | 0.5795          | 0.5044 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
d350a3136c6db49cfc6108fc00e0d2a1
hfl/chinese-roberta-wwm-ext-large
hfl
bert
11
181,315
transformers
26
fill-mask
true
true
true
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
['bert']
false
true
true
2,007
false
# Please use 'Bert' related functions to load this model!

## Chinese BERT with Whole Word Masking

For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.

**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu

This repository is developed based on: https://github.com/google-research/bert

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find the technical report or resource useful, please cite the following technical report in your paper.

- Primary: https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```

- Secondary: https://arxiv.org/abs/1906.08101

```
@article{chinese-bert-wwm,
    title={Pre-Training with Whole Word Masking for Chinese BERT},
    author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
    journal={arXiv preprint arXiv:1906.08101},
    year={2019}
}
```
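Following the note above about loading this checkpoint with BERT-family classes, a minimal masked-LM sketch (the example sentence is illustrative only):

```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertForMaskedLM.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("今天天气很[MASK]。"))  # top predictions for the masked token
```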
8952aed5c4f82b0a89c3dfccb964397e
nouman-10/roberta_base_model_fine_tuned
nouman-10
roberta
13
8
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,196
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta_base_model_fine_tuned

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2488
- Accuracy: 0.9018

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4049        | 1.0   | 875  | 0.2488          | 0.9018   |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
36f65e1fa79504d467ab42ae70f2697c
muhtasham/bert-small-finetuned-finer
muhtasham
bert
9
5
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,285
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-small-finetuned-finer

This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8994        | 1.0   | 2433 | 1.7597          |
| 1.7226        | 2.0   | 4866 | 1.6462          |
| 1.6752        | 3.0   | 7299 | 1.6137          |

### Framework versions

- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
8cb9c1a3e2530adf4d72d93c12e5509e
ckiplab/bert-base-chinese-qa
ckiplab
bert
7
45
transformers
0
question-answering
true
false
false
gpl-3.0
['zh']
null
null
0
0
0
0
0
0
0
['pytorch', 'question-answering', 'bert', 'zh']
false
true
true
959
false
# CKIP BERT Base Chinese

This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).

## Homepage

- https://github.com/ckiplab/ckip-transformers

## Contributors

- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)

## Usage

Please use BertTokenizerFast as the tokenizer instead of AutoTokenizer.

```
from transformers import (
    BertTokenizerFast,
    AutoModel,
)

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-base-chinese-qa')
```

For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
69db767d1a30a7b764e663896afee4df
muhtasham/small-mlm-glue-mrpc-target-glue-mnli
muhtasham
bert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,814
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# small-mlm-glue-mrpc-target-glue-mnli

This model is a fine-tuned version of [muhtasham/small-mlm-glue-mrpc](https://huggingface.co/muhtasham/small-mlm-glue-mrpc) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6541
- Accuracy: 0.7253

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9151        | 0.04  | 500  | 0.8235          | 0.6375   |
| 0.8111        | 0.08  | 1000 | 0.7776          | 0.6659   |
| 0.7745        | 0.12  | 1500 | 0.7510          | 0.6748   |
| 0.7502        | 0.16  | 2000 | 0.7329          | 0.6886   |
| 0.7431        | 0.2   | 2500 | 0.7189          | 0.6921   |
| 0.7325        | 0.24  | 3000 | 0.7032          | 0.6991   |
| 0.7139        | 0.29  | 3500 | 0.6793          | 0.7129   |
| 0.7031        | 0.33  | 4000 | 0.6678          | 0.7215   |
| 0.6778        | 0.37  | 4500 | 0.6761          | 0.7236   |
| 0.6811        | 0.41  | 5000 | 0.6541          | 0.7253   |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
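Since MNLI is a sentence-pair task, a minimal sketch would feed a premise/hypothesis pair (the sentences are invented; the mapping of the three logits to entailment/neutral/contradiction should be checked against the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "muhtasham/small-mlm-glue-mrpc-target-glue-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# encode premise and hypothesis as a single pair
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the three MNLI classes
```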
5d46ba970c3e05252984b916af9359e2
garyw/clinical-embeddings-600d-ft-cr
garyw
null
9
0
null
0
null
false
false
false
gpl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,548
false
Pre-trained word embeddings using the text of published clinical case reports. These embeddings use 600 dimensions and were trained using the fastText algorithm on published clinical case reports found in the [PMC Open Access Subset](https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/). See the paper here: https://pubmed.ncbi.nlm.nih.gov/34920127/

Citation:

```
@article{flamholz2022word,
  title={Word embeddings trained on published case reports are lightweight, effective for clinical tasks, and free of protected health information},
  author={Flamholz, Zachary N and Crane-Droesch, Andrew and Ungar, Lyle H and Weissman, Gary E},
  journal={Journal of Biomedical Informatics},
  volume={125},
  pages={103971},
  year={2022},
  publisher={Elsevier}
}
```

## Quick start

Word embeddings are compatible with the [`gensim` Python package](https://radimrehurek.com/gensim/) format. First download the files from this archive. Then load the embeddings into Python.

```python
from gensim.models import FastText, Word2Vec, KeyedVectors  # KeyedVectors are used to load the GloVe models

# Load the model
model = FastText.load('ft_oa_corp_600d.bin')

# Return 600-dimensional vector representations of each word
model.wv.word_vec('diabetes')
model.wv.word_vec('cardiac_arrest')
model.wv.word_vec('lymphangioleiomyomatosis')

# Try out cosine similarity
model.wv.similarity('copd', 'chronic_obstructive_pulmonary_disease')
model.wv.similarity('myocardial_infarction', 'heart_attack')
model.wv.similarity('lymphangioleiomyomatosis', 'lam')
```
c06075b6654357c9e6aca6fb7d993652
DOOGLAK/Article_50v7_NER_Model_3Epochs_UNAUGMENTED
DOOGLAK
bert
13
6
transformers
0
token-classification
true
false
false
apache-2.0
null
['article50v7_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,559
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Article_50v7_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v7_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7894
- Precision: 0.3333
- Recall: 0.0002
- F1: 0.0005
- Accuracy: 0.7783

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 6    | 1.0271          | 0.1183    | 0.0102 | 0.0188 | 0.7768   |
| No log        | 2.0   | 12   | 0.8250          | 0.4       | 0.0005 | 0.0010 | 0.7783   |
| No log        | 3.0   | 18   | 0.7894          | 0.3333    | 0.0002 | 0.0005 | 0.7783   |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
0b3430bb07e23fcee4a41e9a427bbb12
mp6kv/IQA_classification
mp6kv
roberta
15
3
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,419
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# IQA_classification

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0718
- Accuracy: 0.4862
- Precision: 0.3398
- Recall: 0.4862
- F1: 0.3270

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.3973        | 1.0   | 28   | 1.1588          | 0.4771   | 0.2276    | 0.4771 | 0.3082 |
| 1.1575        | 2.0   | 56   | 1.0718          | 0.4862   | 0.3398    | 0.4862 | 0.3270 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
478170b013c2873995b89a6dd0432eec
Satyamatury/wav2vec2-large-xls-r-300m-hindi-colab
Satyamatury
wav2vec2
18
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,330
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-hindi-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7529
- Wer: 0.9130

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2923        | 44.42 | 400  | 1.7529          | 0.9130 |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
933695b0fe83d179c8089ca63ca8d1c0
kadirnar/strongsort
kadirnar
null
2
0
null
0
object-detection
false
false
false
gpl-3.0
null
null
null
0
0
0
0
0
0
0
['object-detection', 'computer-vision', 'sort', 'tracker', 'strongsort']
false
true
true
729
false
### Model Description

[StrongSort](https://arxiv.org/abs/2202.13514): Make DeepSORT Great Again

<img src="https://raw.githubusercontent.com/dyhBUPT/StrongSORT/master/assets/MOTA-IDF1-HOTA.png" width="1000"/>

### Installation

```
pip install strongsort
```

### Tracker

```python
from strong_sort import StrongSORT

tracker = StrongSORT(model_weights='model.pt', device='cuda')

pred = model(img)  # detections from an upstream detector
for i, det in enumerate(pred):
    # update the tracker with this frame's detections and the original image
    pred[i] = tracker.update(det, im0s)
```

### BibTeX Entry and Citation Info

```
@article{du2022strongsort,
  title={Strongsort: Make deepsort great again},
  author={Du, Yunhao and Song, Yang and Yang, Bo and Zhao, Yanyun},
  journal={arXiv preprint arXiv:2202.13514},
  year={2022}
}
```
d9c19179833fbb544c174d60a643ceb7
CLTL/icf-levels-fac
CLTL
roberta
11
10
transformers
1
text-classification
true
false
false
mit
['nl']
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,447
false
# Regression Model for Walking Functioning Levels (ICF d450)

## Description

A fine-tuned regression model that assigns a functioning level to Dutch sentences describing walking functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about walking functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.

## Functioning levels

Level | Meaning
---|---
5 | Patient can walk independently anywhere: level surface, uneven surface, slopes, stairs.
4 | Patient can walk independently on level surface but requires help on stairs, inclines, uneven surface; or, patient can walk independently, but the walking is not fully normal.
3 | Patient requires verbal supervision for walking, without physical contact.
2 | Patient needs continuous or intermittent support of one person to help with balance and coordination.
1 | Patient needs firm continuous support from one person who helps carrying weight and with balance.
0 | Patient cannot walk or needs help from two or more people; or, patient walks on a treadmill.

The predictions generated by the model might sometimes be outside of the scale (e.g. 5.2); this is normal in a regression model.

## Intended uses and limitations

- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.

## How to use

To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:

```
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-fac',
    use_cuda=False,
)

example = 'kan nog goed traplopen, maar flink ingeleverd aan conditie na Corona'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```

The prediction on the example is:

```
4.2
```

The raw outputs look like this:

```
[[4.20903111]]
```

## Training data

- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure

The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8

## Evaluation results

The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).

| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.70 | 0.66
mean squared error | 0.91 | 0.93
root mean squared error | 0.95 | 0.96

## Authors and references

### Authors

Jenia Kim, Piek Vossen

### References

TBD
ad550cdd009e5862afef2d85d33a6bba
asi/albert-act-tiny
asi
albert_act
9
4
transformers
1
null
true
true
false
apache-2.0
['en']
['wikipedia', 'bookcorpus']
null
0
0
0
0
0
0
0
[]
true
true
true
2,627
false
# Adaptive Depth Transformers

Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input.

## Model architecture

We augment a multi-layer transformer encoder with a halting mechanism, which dynamically adjusts the number of layers for each token. We directly adapted this mechanism from Graves ([2016](#graves-2016)). At each iteration, we compute a probability for each token to stop updating its state.

## Model use

The architecture is not yet directly included in the Transformers library. The code used for pre-training is available in the following [github repository](https://github.com/AntoineSimoulin/adaptive-depth-transformers). So you should install the code implementation first:

```bash
!pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
```

Then you can use the model directly.

```python
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-base')
model = AlbertActModel.from_pretrained('asi/albert-act-base')
_ = model.eval()

inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
# tensor([[[[15., 9., 10., 7., 3., 8., 5., 7., 12., 10., 6., 8., 8., 9., 5., 8.]]]])
```

## Citations

### BibTeX entry and citation info

If you use our iterative transformer model for your scientific publication or your industrial applications, please cite the following [paper](https://aclanthology.org/2021.acl-srw.23/):

```bibtex
@inproceedings{simoulin-crabbe-2021-many,
    title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
    author = "Simoulin, Antoine and Crabb{\'e}, Benoit",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-srw.23",
    doi = "10.18653/v1/2021.acl-srw.23",
    pages = "221--228",
}
```

### References

><div id="graves-2016">Alex Graves. 2016. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983.</div>
fdb0f9dbe1fe677dcde6cc50003501fd
IShallRiseAgain/DCAU
IShallRiseAgain
null
7
0
null
20
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
1
1
0
['stable-diffusion', 'text-to-image']
false
true
true
513
false
**DCAU Diffusion**

Prompt is currently Batman_the_animated_series. In the future it will include all DCAU shows.

**Existing Characters:**
![Existing Characters](https://huggingface.co/IShallRiseAgain/DCAU/resolve/main/charactbanner.png)

**Characters not in original dataset:**
![New Characters](https://huggingface.co/IShallRiseAgain/DCAU/resolve/main/customcharacterbanner.png)

**Realistic Style:**
![Characters in realistic styles](https://huggingface.co/IShallRiseAgain/DCAU/resolve/main/realisticbanner.png)
fc52382070e8e15ed517fac5675a0bcd
ligerre/xlm-roberta-base-finetuned-panx-de
ligerre
xlm-roberta
12
6
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578        | 1.0   | 525  | 0.1562          | 0.8273 |
| 0.1297        | 2.0   | 1050 | 0.1330          | 0.8474 |
| 0.0809        | 3.0   | 1575 | 0.1343          | 0.8637 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
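A minimal inference sketch with the token-classification pipeline (the German sentence is an invented example; `aggregation_strategy="simple"` merges subword pieces into whole entities):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ligerre/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

# prints the detected entities with their spans and scores
print(ner("Angela Merkel besuchte das Werk von Siemens in München."))
```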
4818f1091a3254a857a113953a557b19
stanfordnlp/CoreNLP
stanfordnlp
null
3
0
null
7
null
false
false
false
gpl-2.0
['en']
null
null
0
0
0
0
1
1
0
['corenlp']
false
true
true
660
false
# Core NLP model for CoreNLP

CoreNLP is your one stop shop for natural language processing in Java! CoreNLP enables users to derive linguistic annotations for text, including token and sentence boundaries, parts of speech, named entities, numeric and time values, dependency and constituency parses, coreference, sentiment, quote attributions, and relations.

Find more about it in [our website](https://stanfordnlp.github.io/CoreNLP) and our [GitHub repository](https://github.com/stanfordnlp/CoreNLP).

This card and repo were automatically prepared with `hugging_corenlp.py` in the `stanfordnlp/huggingface-models` repo.

Last updated 2023-01-21 01:34:10.792
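One way to drive CoreNLP from Python is through the `stanza` client (a sketch, assuming Java and a local CoreNLP installation, e.g. via `stanza.install_corenlp()`):

```python
from stanza.server import CoreNLPClient

# starts a local CoreNLP server, annotates the text, and shuts down on exit
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "ner"], memory="4G") as client:
    ann = client.annotate("Stanford University is located in California.")
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.pos, token.ner)
```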
8b6420189afade7ffcf8260e888926e9
shripadbhat/whisper-large-v2-sr
shripadbhat
whisper
17
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sr']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,365
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Large v2 Serbian

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2036
- Wer: 11.8980

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2639        | 0.48  | 100  | 0.2438          | 14.0834 |
| 0.1965        | 0.96  | 200  | 0.2036          | 11.8980 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
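A minimal transcription sketch (the audio path is a placeholder; the pipeline decodes the file with ffmpeg and resamples it to 16 kHz):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shripadbhat/whisper-large-v2-sr")

# placeholder file; returns {"text": "..."} with the Serbian transcript
print(asr("sample.wav"))
```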
6e77512cc5c967656dbd7671654409bf
google/tapas-large-finetuned-tabfact
google
tapas
8
11
transformers
0
text-classification
true
true
false
apache-2.0
['en']
['tab_fact']
null
0
0
0
0
0
0
0
['tapas', 'sequence-classification']
false
true
true
4,767
false
# TAPAS large model fine-tuned on Tabular Fact Checking (TabFact)

This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_large_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_large`

Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by the Hugging Face team and contributors.

## Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly training this randomly initialized classification head with the base model on TabFact.

## Intended uses & limitations

You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website.

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence [SEP] Flattened table [SEP]
```

### Fine-tuning

The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512. In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).

### BibTeX entry and citation info

```bibtex
@misc{herzig2020tapas,
    title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
    author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
    year={2020},
    eprint={2004.02349},
    archivePrefix={arXiv},
    primaryClass={cs.IR}
}
```

```bibtex
@misc{eisenschlos2020understanding,
    title={Understanding tables with intermediate pre-training},
    author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
    year={2020},
    eprint={2010.00571},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

```bibtex
@inproceedings{2019TabFactA,
    title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
    author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
    booktitle = {International Conference on Learning Representations (ICLR)},
    address = {Addis Ababa, Ethiopia},
    month = {April},
    year = {2020}
}
```
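Since the card defers code examples to the documentation, here is a small hedged sketch of table entailment with this checkpoint (the table and claim are invented; cell values must be strings, and the mapping of the two output logits to refuted/supported should be checked against the model config):

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForSequenceClassification

name = "google/tapas-large-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForSequenceClassification.from_pretrained(name)

# all table cells are passed as strings
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2100000", "3600000"]})
inputs = tokenizer(table=table, queries=["Berlin has more inhabitants than Paris"], return_tensors="pt")

logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # probability the claim is refuted vs. supported
```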
f5cab53ff0dadf9ee0aa5d89cc4120c6
csikasote/xls-r-300m-bemba-15hrs
csikasote
wav2vec2
17
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,071
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xls-r-300m-bemba-15hrs

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Wer: 0.3481

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5142        | 0.71  | 400  | 0.5585          | 0.7501 |
| 0.6351        | 1.43  | 800  | 0.3185          | 0.5058 |
| 0.4892        | 2.15  | 1200 | 0.2813          | 0.4655 |
| 0.4021        | 2.86  | 1600 | 0.2539          | 0.4159 |
| 0.3505        | 3.58  | 2000 | 0.2411          | 0.4000 |
| 0.3045        | 4.29  | 2400 | 0.2512          | 0.3951 |
| 0.274         | 5.01  | 2800 | 0.2402          | 0.3922 |
| 0.2335        | 5.72  | 3200 | 0.2403          | 0.3764 |
| 0.2032        | 6.44  | 3600 | 0.2383          | 0.3657 |
| 0.1783        | 7.16  | 4000 | 0.2603          | 0.3518 |
| 0.1487        | 7.87  | 4400 | 0.2479          | 0.3577 |
| 0.1281        | 8.59  | 4800 | 0.2638          | 0.3518 |
| 0.113         | 9.3   | 5200 | 0.2754          | 0.3481 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
2bdcfa4312132e14348b656eeb53065a
lunesco/bert-german-ner
lunesco
bert
13
55
transformers
1
token-classification
true
false
false
mit
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,003
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-german-ner

This model is a fine-tuned version of [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3196
- Precision: 0.8334
- Recall: 0.8620
- F1: 0.8474
- Accuracy: 0.9292

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 300  | 0.3617          | 0.7310    | 0.7733 | 0.7516 | 0.8908   |
| 0.5428        | 2.0   | 600  | 0.2897          | 0.7789    | 0.8395 | 0.8081 | 0.9132   |
| 0.5428        | 3.0   | 900  | 0.2805          | 0.8147    | 0.8465 | 0.8303 | 0.9221   |
| 0.2019        | 4.0   | 1200 | 0.2816          | 0.8259    | 0.8498 | 0.8377 | 0.9260   |
| 0.1215        | 5.0   | 1500 | 0.2942          | 0.8332    | 0.8599 | 0.8463 | 0.9285   |
| 0.1215        | 6.0   | 1800 | 0.3053          | 0.8293    | 0.8619 | 0.8452 | 0.9287   |
| 0.0814        | 7.0   | 2100 | 0.3190          | 0.8249    | 0.8634 | 0.8437 | 0.9267   |
| 0.0814        | 8.0   | 2400 | 0.3196          | 0.8334    | 0.8620 | 0.8474 | 0.9292   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
eb8110e96a1f6d5d176f9af67f20e725
kadirnar/OcSort
kadirnar
null
2
0
null
0
object-detection
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
['object-detection', 'computer-vision', 'sort', 'tracker', 'ocsort']
false
true
true
982
false
### Model Description

Observation-Centric SORT ([OC-SORT](https://arxiv.org/abs/2203.14360)) is a pure motion-model-based multi-object tracker. It aims to improve tracking robustness in crowded scenes and when objects are in non-linear motion. It is designed by recognizing and fixing limitations in the Kalman filter and SORT. It is flexible to integrate with different detectors and matching modules, such as appearance similarity. It remains Simple, Online and Real-time.

<img src="https://raw.githubusercontent.com/noahcao/OC_SORT/master/assets/teaser.png" width="600"/>

### Installation

```
pip install ocsort
```

### Tracker

```python
from ocsort.ocsort import OCSort

tracker = OCSort(args)

for image in images:
    dets = detector(image)
    online_targets = tracker.update(dets)
```

### BibTeX Entry and Citation Info

```
@article{cao2022observation,
  title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
  author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
  journal={arXiv preprint arXiv:2203.14360},
  year={2022}
}
```
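### Self-contained sketch

The snippet above leaves `args`, `detector`, and `images` undefined. Below is an untested, self-contained sketch; the `det_thresh` argument follows the reference OC_SORT implementation and the `[x1, y1, x2, y2, score]` detection format is an assumption — check the pip package for the exact constructor and `update` signatures.

```python
import numpy as np
from ocsort.ocsort import OCSort

# Assumed constructor argument; the packaged version may differ.
tracker = OCSort(det_thresh=0.3)

def detector(image):
    # Stand-in detector: one detection per frame as [x1, y1, x2, y2, score].
    return np.array([[10.0, 20.0, 110.0, 220.0, 0.9]])

images = [np.zeros((480, 640, 3), dtype=np.uint8)]  # dummy frames

for image in images:
    dets = detector(image)
    online_targets = tracker.update(dets)  # tracked boxes with track ids
```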
33b4de78ea91b14730becae7b28c9f20
merve/bart-example
merve
bart
10
8
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,332
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-example

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7877
- Validation Loss: 2.4972
- Epoch: 4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.3670 | 3.2462 | 0 |
| 3.5143 | 2.7551 | 1 |
| 3.0299 | 2.5620 | 2 |
| 2.9364 | 2.7830 | 3 |
| 2.7877 | 2.4972 | 4 |

### Framework versions

- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
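### Example usage (sketch)

Since this is a TensorFlow checkpoint, a minimal, untested generation sketch might look like this; the input sentence is illustrative.

```python
from transformers import AutoTokenizer, TFBartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("merve/bart-example")
model = TFBartForConditionalGeneration.from_pretrained("merve/bart-example")

inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```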
d67c634082ee4fa52b79102fd02a4b47
skpawar1305/wav2vec2-base-finetuned-ks
skpawar1305
wav2vec2
10
3
transformers
0
audio-classification
true
false
false
apache-2.0
null
['superb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,559
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-ks

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0903
- Accuracy: 0.9834

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7264 | 1.0 | 399 | 0.6319 | 0.9351 |
| 0.2877 | 2.0 | 798 | 0.1846 | 0.9748 |
| 0.175 | 3.0 | 1197 | 0.1195 | 0.9796 |
| 0.1672 | 4.0 | 1596 | 0.0903 | 0.9834 |
| 0.1235 | 5.0 | 1995 | 0.0854 | 0.9825 |

### Framework versions

- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
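### Example usage (sketch)

A minimal, untested keyword-spotting sketch; the clip path is a placeholder and 16 kHz mono input is assumed, as for the wav2vec2 base model.

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="skpawar1305/wav2vec2-base-finetuned-ks")
print(classifier("clip.wav"))  # list of {label, score} predictions
```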
278bdfcbde5c07e5d595102e6a3788f1
jonatasgrosman/exp_w2v2t_id_vp-it_s692
jonatasgrosman
wav2vec2
10
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['id']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'id']
false
true
true
469
false
# exp_w2v2t_id_vp-it_s692

Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
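A minimal, untested sketch using the HuggingSound tool mentioned above (the audio paths are placeholders; input must be sampled at 16 kHz):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_id_vp-it_s692")
transcriptions = model.transcribe(["path/to/audio_1.wav", "path/to/audio_2.wav"])
print(transcriptions[0]["transcription"])
```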
88ae63ac6bc0176a15056879fcf1dca5
naverpapago/garnet
naverpapago
null
3
0
pytorch
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['Scene Text Removal', 'Image to Image']
false
true
true
2,047
false
### GaRNet

This is a text-removal model that was introduced in the paper below and first released at [this page](https://github.com/naver/garnet). \
[The Surprisingly Straightforward Scene Text Removal Method With Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis](https://arxiv.org/abs/2210.07489). \
Hyeonsu Lee, Chankyu Choi \
Naver Corp. \
In ECCV 2022.

### Model description

GaRNet is a generator that creates a non-text image from a given image and its corresponding text-box mask. It consists of a convolutional encoder and decoder. The encoder consists of residual blocks with an attention module called Gated Attention. The Gated Attention module has two spatial-attention branches. Each attention branch finds text strokes or their surrounding regions. The module adjusts the weight of these two domains by trainable parameters.

The model was trained in a PatchGAN manner with Region-of-Interest Generation. \
The discriminator consists of a convolutional encoder. Given an image, it determines whether each patch, which indicates text-box regions, is real or fake. All loss functions treat non-textbox regions as 'don't care'.

### Intended uses & limitations

This model can be used for areas that require erasing text from an image, such as concealing private information or text editing.\
You can use the raw model or the pre-trained model.\
Note that the pre-trained model was trained on both the Synthetic and SCUT-EnsText datasets, and the SCUT-EnsText dataset can only be used for non-commercial research purposes.

### How to use

You can use the inference code in [this page](https://github.com/naver/garnet).

### BibTeX entry and citation info

```
@inproceedings{lee2022surprisingly,
  title={The Surprisingly Straightforward Scene Text Removal Method with Gated Attention and Region of Interest Generation: A Comprehensive Prominent Model Analysis},
  author={Lee, Hyeonsu and Choi, Chankyu},
  booktitle={European Conference on Computer Vision},
  pages={457--472},
  year={2022},
  organization={Springer}
}
```
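### Illustrative sketch of the Gated Attention idea

The description above does not pin down exact layer shapes. As a purely illustrative PyTorch sketch — not the authors' implementation; every layer choice here is an assumption for exposition — a gated module with two spatial-attention branches and a trainable mixing weight could look like this:

```python
import torch
import torch.nn as nn

class GatedSpatialAttention(nn.Module):
    """Illustrative only: two spatial-attention branches whose balance
    is controlled by a trainable gate, as the card describes."""

    def __init__(self, channels: int):
        super().__init__()
        # One branch attends to text-stroke regions, the other to surroundings.
        self.stroke_attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.surround_attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        # Trainable parameter adjusting the weight of the two domains.
        self.gate = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)
        return x * (g * self.stroke_attn(x) + (1 - g) * self.surround_attn(x))
```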
9172defa0a5587975ac93bb43669a4e9
remzicam/xs_blenderbot_onnx
remzicam
null
6
0
null
0
null
false
false
false
other
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,659
false
# xs_blenderbot_onnx (only 168 mb)

onnx quantized version of facebook/blenderbot_small-90M model (350 mb)

Faster cpu inference

## INTRO

Before usage:

- download blender_model.py script from files in this repo
- pip install onnxruntime

you can use the model with the huggingface generate function with all its parameters

# Usage

With text generation pipeline

```python
>>>from blender_model import TextGenerationPipeline

>>>max_answer_length = 100
>>>response_generator_pipe = TextGenerationPipeline(max_length=max_answer_length)

>>>utterance = "Hello, how are you?"
>>>response_generator_pipe(utterance)

i am well. how are you? what do you like to do in your free time?
```

Or you can call the model

```python
>>>from blender_model import OnnxBlender
>>>from transformers import BlenderbotSmallTokenizer

>>>original_repo_id = "facebook/blenderbot_small-90M"
>>>repo_id = "remzicam/xs_blenderbot_onnx"
>>>model_file_names = [
    "blenderbot_small-90M-encoder-quantized.onnx",
    "blenderbot_small-90M-decoder-quantized.onnx",
    "blenderbot_small-90M-init-decoder-quantized.onnx",
]
>>>model = OnnxBlender(original_repo_id, repo_id, model_file_names)
>>>tokenizer = BlenderbotSmallTokenizer.from_pretrained(original_repo_id)

>>>utterance = "Hello, how are you?"
>>>inputs = tokenizer(utterance, return_tensors="pt")
>>>outputs = model.generate(**inputs, max_length=max_answer_length)
>>>response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>>print(response)

i am well. how are you? what do you like to do in your free time?
```

# Credits

To create the model, I adopted codes from the https://github.com/siddharth-sharma7/fast-Bart repository.
32f57e1008fcfeb2d94051494f3770a2
stanfordnlp/stanza-hi
stanfordnlp
null
10
142
stanza
0
token-classification
false
false
false
apache-2.0
['hi']
null
null
0
0
0
0
0
0
0
['stanza', 'token-classification']
false
true
true
578
false
# Stanza model for Hindi (hi)

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.

Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).

This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo

Last updated 2022-10-26 21:23:50.098
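A minimal, untested usage sketch with the standard Stanza calls for Hindi (the example sentence is made up):

```python
import stanza

# Download the Hindi package once, then build the default pipeline.
stanza.download("hi")
nlp = stanza.Pipeline("hi")

doc = nlp("मैं दिल्ली में रहता हूँ।")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos)
```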
dfb78bce10fbe3ce4e78bdd89ab3168d
wietsedv/xlm-roberta-base-ft-udpos28-fo
wietsedv
xlm-roberta
8
13
transformers
0
token-classification
true
false
false
apache-2.0
['fo']
['universal_dependencies']
null
0
0
0
0
0
0
0
['part-of-speech', 'token-classification']
true
true
true
567
false
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Faroese

This model is part of our paper called:

- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages

Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fo")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fo")
```
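Building on the snippet above, tagging a sentence can also go through the token-classification pipeline. An untested sketch; the Faroese example sentence is illustrative:

```python
from transformers import pipeline

pos = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-fo")
for token in pos("Hetta er ein setningur."):
    print(token["word"], token["entity"])
```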
c12ecee01145b258a9ebb99f451ca0db
eunbeee/ainize-kobart-news-eb-finetuned-papers
eunbeee
bart
11
1
transformers
0
text2text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,863
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# ainize-kobart-news-eb-finetuned-papers

This model is a fine-tuned version of [ainize/kobart-news](https://huggingface.co/ainize/kobart-news) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3066
- Rouge1: 14.5433
- Rouge2: 5.2238
- Rougel: 14.4731
- Rougelsum: 14.5183
- Gen Len: 19.9934

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.1918 | 1.0 | 7200 | 0.2403 | 14.6883 | 5.2427 | 14.6306 | 14.6489 | 19.9938 |
| 0.1332 | 2.0 | 14400 | 0.2391 | 14.5165 | 5.2443 | 14.493 | 14.4908 | 19.9972 |
| 0.0966 | 3.0 | 21600 | 0.2539 | 14.758 | 5.4976 | 14.6906 | 14.7188 | 19.9941 |
| 0.0736 | 4.0 | 28800 | 0.2782 | 14.6267 | 5.3371 | 14.5578 | 14.6014 | 19.9934 |
| 0.0547 | 5.0 | 36000 | 0.3066 | 14.5433 | 5.2238 | 14.4731 | 14.5183 | 19.9934 |

### Framework versions

- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
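### Example usage (sketch)

A minimal, untested summarization sketch; the Korean input text is a placeholder, and `max_length` is chosen to roughly match the generation length reported above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("eunbeee/ainize-kobart-news-eb-finetuned-papers")
model = AutoModelForSeq2SeqLM.from_pretrained("eunbeee/ainize-kobart-news-eb-finetuned-papers")

inputs = tokenizer("요약할 논문 본문을 여기에 넣습니다.", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```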
9c5c7e35f66cae3677037e0accfc27b0
Brainergy/zzuurryy
Brainergy
null
16
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
419
false
### zzuurryy Dreambooth model trained by Brainergy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
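A minimal, untested diffusers sketch for trying the concept locally; the prompt token is assumed to be the concept name, which this card does not confirm.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Brainergy/zzuurryy", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of zzuurryy").images[0]  # instance token assumed
image.save("zzuurryy.png")
```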
9eb86a4587b74b85f6ee056a6354a809
pollcat/pollcat-mnli
pollcat
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,201
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pollcat-mnli

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8610
- Accuracy: 0.7271

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0633 | 1.0 | 1563 | 1.8610 | 0.7271 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
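### Example usage (sketch)

MNLI is a sentence-pair task, so inference needs a premise/hypothesis pair. An untested sketch using the dict-input form of the text-classification pipeline; the sentences are made up.

```python
from transformers import pipeline

nli = pipeline("text-classification", model="pollcat/pollcat-mnli")
print(nli({"text": "A man is playing guitar.", "text_pair": "A person is making music."}))
```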
6c05e4b58907598130d517d812e1e279
muhtasham/small-mlm-glue-rte-target-glue-stsb
muhtasham
bert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
1
1
0
['generated_from_trainer']
true
true
true
1,962
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# small-mlm-glue-rte-target-glue-stsb

This model is a fine-tuned version of [muhtasham/small-mlm-glue-rte](https://huggingface.co/muhtasham/small-mlm-glue-rte) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Pearson: 0.8754
- Spearmanr: 0.8723

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.8054 | 2.78 | 500 | 0.6118 | 0.8682 | 0.8680 |
| 0.2875 | 5.56 | 1000 | 0.5788 | 0.8693 | 0.8682 |
| 0.1718 | 8.33 | 1500 | 0.6133 | 0.8673 | 0.8639 |
| 0.1251 | 11.11 | 2000 | 0.6103 | 0.8716 | 0.8681 |
| 0.0999 | 13.89 | 2500 | 0.5665 | 0.8734 | 0.8707 |
| 0.0825 | 16.67 | 3000 | 0.6035 | 0.8736 | 0.8700 |
| 0.07 | 19.44 | 3500 | 0.5605 | 0.8752 | 0.8716 |
| 0.0611 | 22.22 | 4000 | 0.5661 | 0.8768 | 0.8730 |
| 0.0565 | 25.0 | 4500 | 0.5557 | 0.8739 | 0.8705 |
| 0.0523 | 27.78 | 5000 | 0.5419 | 0.8754 | 0.8723 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
2019ece308a283bf629b6e508f1864a3
gokuls/distilbert_add_GLUE_Experiment_logit_kd_cola_96
gokuls
distilbert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,398
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_add_GLUE_Experiment_logit_kd_cola_96

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6839
- Matthews Correlation: 0.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.9137 | 1.0 | 34 | 0.7665 | 0.0 |
| 0.8592 | 2.0 | 68 | 0.7303 | 0.0 |
| 0.8268 | 3.0 | 102 | 0.7043 | 0.0 |
| 0.8074 | 4.0 | 136 | 0.6901 | 0.0 |
| 0.8005 | 5.0 | 170 | 0.6853 | 0.0 |
| 0.7969 | 6.0 | 204 | 0.6842 | 0.0 |
| 0.797 | 7.0 | 238 | 0.6840 | 0.0 |
| 0.7981 | 8.0 | 272 | 0.6840 | 0.0 |
| 0.7971 | 9.0 | 306 | 0.6840 | 0.0 |
| 0.7967 | 10.0 | 340 | 0.6839 | 0.0 |
| 0.7978 | 11.0 | 374 | 0.6839 | 0.0 |
| 0.7979 | 12.0 | 408 | 0.6839 | 0.0 |
| 0.7973 | 13.0 | 442 | 0.6839 | 0.0 |
| 0.7979 | 14.0 | 476 | 0.6840 | 0.0 |
| 0.7972 | 15.0 | 510 | 0.6839 | 0.0 |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
39d646c9be5bfb8c052a57d45cb573b3
neelrr/xlm-roberta-base-finetuned-panx-hi
neelrr
xlm-roberta
14
5
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,314
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-hi

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- F1: 0.8614

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.62 | 1.0 | 209 | 0.3914 | 0.7622 |
| 0.2603 | 2.0 | 418 | 0.2665 | 0.8211 |
| 0.1653 | 3.0 | 627 | 0.2211 | 0.8614 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
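### Example usage (sketch)

A minimal, untested Hindi NER sketch through the token-classification pipeline; the example sentence is made up.

```python
from transformers import pipeline

ner = pipeline("token-classification", model="neelrr/xlm-roberta-base-finetuned-panx-hi", aggregation_strategy="simple")
print(ner("नरेंद्र मोदी दिल्ली में रहते हैं।"))
```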
9078ae3320c69d13658f8cb8d0b5df20
tkazusa/lilt-en-funsd
tkazusa
lilt
23
5
transformers
0
token-classification
true
false
false
mit
null
['funsd-layoutlmv3']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
6,837
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6459
- Answer: {'precision': 0.8831942789034565, 'recall': 0.9069767441860465, 'f1': 0.894927536231884, 'number': 817}
- Header: {'precision': 0.6213592233009708, 'recall': 0.5378151260504201, 'f1': 0.5765765765765765, 'number': 119}
- Question: {'precision': 0.8998178506375227, 'recall': 0.9173630454967502, 'f1': 0.9085057471264367, 'number': 1077}
- Overall Precision: 0.8789
- Overall Recall: 0.8907
- Overall F1: 0.8848
- Overall Accuracy: 0.8068

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4201 | 10.53 | 200 | 0.8003 | {'precision': 0.8321995464852607, 'recall': 0.8984088127294981, 'f1': 0.8640376692171865, 'number': 817} | {'precision': 0.5714285714285714, 'recall': 0.5714285714285714, 'f1': 0.5714285714285714, 'number': 119} | {'precision': 0.8651079136690647, 'recall': 0.89322191272052, 'f1': 0.8789401553220649, 'number': 1077} | 0.8348 | 0.8763 | 0.8551 | 0.8104 |
| 0.0376 | 21.05 | 400 | 1.3158 | {'precision': 0.8395904436860068, 'recall': 0.9033047735618115, 'f1': 0.8702830188679245, 'number': 817} | {'precision': 0.4785714285714286, 'recall': 0.5630252100840336, 'f1': 0.5173745173745175, 'number': 119} | {'precision': 0.8887814313346228, 'recall': 0.8532961931290622, 'f1': 0.8706774040738986, 'number': 1077} | 0.8397 | 0.8564 | 0.8480 | 0.7934 |
| 0.0119 | 31.58 | 600 | 1.4791 | {'precision': 0.8752941176470588, 'recall': 0.9106487148102815, 'f1': 0.8926214757048591, 'number': 817} | {'precision': 0.5401459854014599, 'recall': 0.6218487394957983, 'f1': 0.578125, 'number': 119} | {'precision': 0.8818681318681318, 'recall': 0.8941504178272981, 'f1': 0.8879668049792531, 'number': 1077} | 0.8567 | 0.8847 | 0.8705 | 0.7961 |
| 0.0061 | 42.11 | 800 | 1.5605 | {'precision': 0.8617886178861789, 'recall': 0.9082007343941249, 'f1': 0.8843861740166865, 'number': 817} | {'precision': 0.5963302752293578, 'recall': 0.5462184873949579, 'f1': 0.5701754385964912, 'number': 119} | {'precision': 0.8747763864042933, 'recall': 0.9080779944289693, 'f1': 0.8911161731207289, 'number': 1077} | 0.8549 | 0.8867 | 0.8705 | 0.7965 |
| 0.0026 | 52.63 | 1000 | 1.5172 | {'precision': 0.8596491228070176, 'recall': 0.8996328029375765, 'f1': 0.8791866028708135, 'number': 817} | {'precision': 0.7176470588235294, 'recall': 0.5126050420168067, 'f1': 0.5980392156862744, 'number': 119} | {'precision': 0.8737864077669902, 'recall': 0.9192200557103064, 'f1': 0.8959276018099548, 'number': 1077} | 0.8616 | 0.8872 | 0.8742 | 0.8014 |
| 0.0019 | 63.16 | 1200 | 1.6132 | {'precision': 0.8735224586288416, 'recall': 0.9045287637698899, 'f1': 0.888755261575466, 'number': 817} | {'precision': 0.6460176991150443, 'recall': 0.6134453781512605, 'f1': 0.6293103448275863, 'number': 119} | {'precision': 0.881508078994614, 'recall': 0.9117920148560817, 'f1': 0.8963943404837974, 'number': 1077} | 0.8654 | 0.8912 | 0.8781 | 0.8040 |
| 0.0012 | 73.68 | 1400 | 1.6459 | {'precision': 0.8831942789034565, 'recall': 0.9069767441860465, 'f1': 0.894927536231884, 'number': 817} | {'precision': 0.6213592233009708, 'recall': 0.5378151260504201, 'f1': 0.5765765765765765, 'number': 119} | {'precision': 0.8998178506375227, 'recall': 0.9173630454967502, 'f1': 0.9085057471264367, 'number': 1077} | 0.8789 | 0.8907 | 0.8848 | 0.8068 |
| 0.0005 | 84.21 | 1600 | 1.5619 | {'precision': 0.8602771362586605, 'recall': 0.9118727050183598, 'f1': 0.8853238265002972, 'number': 817} | {'precision': 0.6631578947368421, 'recall': 0.5294117647058824, 'f1': 0.5887850467289719, 'number': 119} | {'precision': 0.8944494995450409, 'recall': 0.9127205199628597, 'f1': 0.9034926470588234, 'number': 1077} | 0.8694 | 0.8897 | 0.8795 | 0.8155 |
| 0.0003 | 94.74 | 1800 | 1.6571 | {'precision': 0.8649592549476135, 'recall': 0.9094247246022031, 'f1': 0.886634844868735, 'number': 817} | {'precision': 0.6391752577319587, 'recall': 0.5210084033613446, 'f1': 0.5740740740740741, 'number': 119} | {'precision': 0.8971792538671519, 'recall': 0.9155060352831941, 'f1': 0.90625, 'number': 1077} | 0.8715 | 0.8897 | 0.8805 | 0.8098 |
| 0.0003 | 105.26 | 2000 | 1.6731 | {'precision': 0.8672875436554133, 'recall': 0.9118727050183598, 'f1': 0.8890214797136038, 'number': 817} | {'precision': 0.62, 'recall': 0.5210084033613446, 'f1': 0.5662100456621004, 'number': 119} | {'precision': 0.9008264462809917, 'recall': 0.9108635097493036, 'f1': 0.9058171745152355, 'number': 1077} | 0.8730 | 0.8882 | 0.8806 | 0.8071 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
e9d2743b5c345eef3ff203e1b70c5633
mahmoudNG/emotion_model
mahmoudNG
distilbert
12
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['tweet_eval']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,456
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# emotion_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3046
- Accuracy: 0.7938

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 204 | 1.1915 | 0.7854 |
| No log | 2.0 | 408 | 1.1624 | 0.7889 |
| 0.0451 | 3.0 | 612 | 1.1865 | 0.7952 |
| 0.0451 | 4.0 | 816 | 1.2653 | 0.7945 |
| 0.0154 | 5.0 | 1020 | 1.3046 | 0.7938 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
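### Example usage (sketch)

A minimal, untested sketch for classifying the emotion of a tweet; the input text is made up.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="mahmoudNG/emotion_model")
print(clf("I can't believe we won the game tonight!"))
```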
9ab4028a3f4659cff2324c9281b7ceb5
renesteeman/whisper-tiny-dutch-25
renesteeman
whisper
15
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['nl']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,535
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Tiny Dutch 25

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7024
- Wer: 42.0655

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5563 | 0.78 | 500 | 0.7838 | 47.5002 |
| 0.3949 | 1.56 | 1000 | 0.7301 | 43.9570 |
| 0.2666 | 2.34 | 1500 | 0.7103 | 42.8426 |
| 0.2307 | 3.12 | 2000 | 0.7024 | 42.0655 |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
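### Example usage (sketch)

A minimal, untested transcription sketch; the audio path is a placeholder and 16 kHz input is assumed, as for the Whisper base models.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="renesteeman/whisper-tiny-dutch-25")
print(asr("dutch_sample.wav")["text"])
```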
0f9c3d51254413bd399ac4ea494d620e
Iwillbeback/ddpm-butterflies-128
Iwillbeback
null
13
3
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,233
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-butterflies-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/Iwillbeback/ddpm-butterflies-128/tensorboard?#scalars)
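A minimal, untested sketch that could fill the TODO above, using the standard diffusers unconditional-generation call:

```python
from diffusers import DDPMPipeline

# Load the trained pipeline and sample one butterfly image.
pipeline = DDPMPipeline.from_pretrained("Iwillbeback/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```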
6028af1ee38d3ee2aa427044ab3ff2aa
sd-concepts-library/scarlet-witch
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,025
false
### Scarlet witch on Stable Diffusion

This is the `<sw-mom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<sw-mom> 0](https://huggingface.co/sd-concepts-library/scarlet-witch/resolve/main/concept_images/0.jpeg)
![<sw-mom> 1](https://huggingface.co/sd-concepts-library/scarlet-witch/resolve/main/concept_images/1.jpeg)
![<sw-mom> 2](https://huggingface.co/sd-concepts-library/scarlet-witch/resolve/main/concept_images/3.jpeg)
![<sw-mom> 3](https://huggingface.co/sd-concepts-library/scarlet-witch/resolve/main/concept_images/2.jpeg)
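Besides the notebooks, the embedding can also be loaded locally. A minimal, untested sketch (requires a recent diffusers release with `load_textual_inversion`; the base checkpoint choice is an assumption, not stated by this card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/scarlet-witch")

image = pipe("a portrait of <sw-mom> in a forest").images[0]
image.save("sw-mom.png")
```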
c8e82f5cd06ced7dccc287ad48672d03
ishaankul67/Adult_contemporary_music-clustered
ishaankul67
distilbert
8
0
transformers
0
question-answering
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,878
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# ishaankul67/Adult_contemporary_music-clustered

This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3734
- Train End Logits Accuracy: 0.9167
- Train Start Logits Accuracy: 0.8889
- Validation Loss: 0.1582
- Validation End Logits Accuracy: 0.8571
- Validation Start Logits Accuracy: 1.0
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3734 | 0.9167 | 0.8889 | 0.1582 | 0.8571 | 1.0 | 0 |

### Framework versions

- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
22642839852fa35cc24ab6b13a4b0880
andrewljohnson/segformer-b5-finetuned-magic-cards-230117-3
andrewljohnson
segformer
7
4
transformers
0
image-segmentation
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['vision', 'image-segmentation', 'generated_from_trainer']
true
true
true
4,081
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# segformer-b5-finetuned-magic-cards-230117-3

This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the andrewljohnson/magic_cards dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0691
- Mean Iou: 0.6585
- Mean Accuracy: 0.9878
- Overall Accuracy: 0.9912
- Accuracy Unlabeled: nan
- Accuracy Front: 0.9978
- Accuracy Back: 0.9777
- Iou Unlabeled: 0.0
- Iou Front: 0.9978
- Iou Back: 0.9777

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Front | Accuracy Back | Iou Unlabeled | Iou Front | Iou Back |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:-------------:|:-------------:|:---------:|:--------:|
| 1.2232 | 0.37 | 20 | 0.4691 | 0.6041 | 0.9201 | 0.9218 | nan | 0.9252 | 0.9150 | 0.0 | 0.9252 | 0.8870 |
| 0.2718 | 0.74 | 40 | 0.1983 | 0.6509 | 0.9764 | 0.9785 | nan | 0.9826 | 0.9702 | 0.0 | 0.9826 | 0.9702 |
| 0.255 | 1.11 | 60 | 0.0939 | 0.6524 | 0.9785 | 0.9794 | nan | 0.9812 | 0.9758 | 0.0 | 0.9812 | 0.9758 |
| 0.1103 | 1.48 | 80 | 0.0682 | 0.6536 | 0.9804 | 0.9813 | nan | 0.9830 | 0.9779 | 0.0 | 0.9830 | 0.9779 |
| 0.1373 | 1.85 | 100 | 0.1260 | 0.6631 | 0.9946 | 0.9961 | nan | 0.9989 | 0.9903 | 0.0 | 0.9989 | 0.9903 |
| 0.0566 | 2.22 | 120 | 0.1558 | 0.6578 | 0.9868 | 0.9912 | nan | 0.9999 | 0.9736 | 0.0 | 0.9999 | 0.9736 |
| 0.1535 | 2.59 | 140 | 0.1330 | 0.6558 | 0.9838 | 0.9883 | nan | 0.9973 | 0.9703 | 0.0 | 0.9973 | 0.9703 |
| 0.0586 | 2.96 | 160 | 0.2317 | 0.6599 | 0.9899 | 0.9933 | nan | 1.0000 | 0.9798 | 0.0 | 1.0000 | 0.9798 |
| 0.0727 | 3.33 | 180 | 0.1018 | 0.6586 | 0.9880 | 0.9919 | nan | 0.9995 | 0.9764 | 0.0 | 0.9995 | 0.9764 |
| 0.3588 | 3.7 | 200 | 0.1151 | 0.6608 | 0.9912 | 0.9939 | nan | 0.9993 | 0.9831 | 0.0 | 0.9993 | 0.9831 |
| 0.0463 | 4.07 | 220 | 0.0538 | 0.6610 | 0.9915 | 0.9934 | nan | 0.9969 | 0.9862 | 0.0 | 0.9969 | 0.9862 |
| 0.046 | 4.44 | 240 | 0.1201 | 0.6581 | 0.9871 | 0.9912 | nan | 0.9991 | 0.9751 | 0.0 | 0.9991 | 0.9751 |
| 0.0468 | 4.81 | 260 | 0.0691 | 0.6585 | 0.9878 | 0.9912 | nan | 0.9978 | 0.9777 | 0.0 | 0.9978 | 0.9777 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.0.dev0
645cc79cf2c5b4d172b34944c9cf7809
emilios/whisper-sm-farsipal-e5
emilios
whisper
29
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['el']
['mozilla-foundation/common_voice_11_0,google/fleurs']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
2,572
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper small Greek Farsipal and El Greco

This model is a fine-tuned version of [emilios/whisper-sm-el-farsipal-e4](https://huggingface.co/emilios/whisper-sm-el-farsipal-e4) on the mozilla-foundation/common_voice_11_0,google/fleurs el,el_gr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4871
- Wer: 17.1991

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1259 | 2.49 | 1000 | 0.4834 | 18.3692 |
| 0.1002 | 4.49 | 2000 | 0.4604 | 17.8027 |
| 0.1096 | 6.98 | 3000 | 0.4553 | 17.8770 |
| 0.0885 | 9.46 | 4000 | 0.4551 | 17.9606 |
| 0.0675 | 11.95 | 5000 | 0.4631 | 17.9049 |
| 0.0675 | 14.44 | 6000 | 0.4619 | 17.9049 |
| 0.0645 | 16.93 | 7000 | 0.4678 | 17.6727 |
| 0.0535 | 19.41 | 8000 | 0.4685 | 17.6634 |
| 0.039 | 21.49 | 9000 | 0.4746 | 17.6727 |
| 0.0447 | 23.98 | 10000 | 0.4761 | 17.6634 |
| 0.0393 | 26.46 | 11000 | 0.4792 | 17.7656 |
| 0.0308 | 28.95 | 12000 | 0.4851 | 17.8678 |
| 0.0301 | 31.44 | 13000 | 0.4846 | 17.4499 |
| 0.031 | 33.93 | 14000 | 0.4849 | 17.8306 |
| 0.0263 | 36.41 | 15000 | 0.4880 | 17.6170 |
| 0.0256 | 38.9 | 16000 | 0.4871 | 17.1991 |
| 0.0236 | 41.39 | 17000 | 0.4883 | 17.2641 |
| 0.0195 | 43.88 | 18000 | 0.4880 | 17.5706 |
| 0.0193 | 46.36 | 19000 | 0.4993 | 17.7285 |
| 0.0161 | 48.85 | 20000 | 0.4968 | 17.8306 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221216+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
9a8e15177cba9242c5f6eee76019bcf2
gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier
gabrielgmendonca
bert
8
2
transformers
0
fill-mask
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,678
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier

This model is a fine-tuned version of [gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier](https://huggingface.co/gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8630
- Validation Loss: 1.7215
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3430, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.8630 | 1.7215 | 0 |

### Framework versions

- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
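### Example usage (sketch)

A minimal, untested fill-mask sketch; since only TensorFlow weights are available, the TF framework is requested explicitly, and the Portuguese example sentence is made up.

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier",
    framework="tf",
)
print(unmasker("A vida é um [MASK] de aprendizado."))
```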
d48219f6baa210c22f9a4673bd68638b
premsuresh/bart-finetuned-mathqa-prem
premsuresh
bart
18
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
961
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-finetuned-mathqa-prem

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100

### Training results

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
09f38f17966ec9c9630eeea10f1ed99a
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9
DrishtiSharma
wav2vec2
10
13
transformers
0
audio-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
7,466
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-sentiment-mesd-v9

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3500
- Accuracy: 0.9154

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7825 | 0.1846 |
| 1.9553 | 1.86 | 6 | 1.7212 | 0.4308 |
| 1.9553 | 2.86 | 9 | 1.6164 | 0.3769 |
| 2.002 | 3.86 | 12 | 1.4904 | 0.3769 |
| 1.6191 | 4.86 | 15 | 1.4426 | 0.4385 |
| 1.6191 | 5.86 | 18 | 1.3516 | 0.5231 |
| 1.6209 | 6.86 | 21 | 1.2176 | 0.5538 |
| 1.6209 | 7.86 | 24 | 1.1683 | 0.5692 |
| 1.371 | 8.86 | 27 | 1.0885 | 0.5923 |
| 1.1568 | 9.86 | 30 | 1.0152 | 0.6385 |
| 1.1568 | 10.86 | 33 | 0.9289 | 0.6385 |
| 1.1023 | 11.86 | 36 | 0.9141 | 0.6308 |
| 1.1023 | 12.86 | 39 | 0.8526 | 0.6462 |
| 0.9448 | 13.86 | 42 | 0.8420 | 0.6769 |
| 0.7972 | 14.86 | 45 | 0.7976 | 0.6692 |
| 0.7972 | 15.86 | 48 | 0.8192 | 0.7308 |
| 0.7793 | 16.86 | 51 | 0.7108 | 0.7615 |
| 0.7793 | 17.86 | 54 | 0.6712 | 0.7769 |
| 0.6468 | 18.86 | 57 | 0.6684 | 0.7923 |
| 0.5083 | 19.86 | 60 | 0.6922 | 0.7385 |
| 0.5083 | 20.86 | 63 | 0.6148 | 0.7923 |
| 0.4988 | 21.86 | 66 | 0.5846 | 0.7923 |
| 0.4988 | 22.86 | 69 | 0.6050 | 0.8154 |
| 0.4123 | 23.86 | 72 | 0.5506 | 0.7846 |
| 0.3511 | 24.86 | 75 | 0.6095 | 0.7846 |
| 0.3511 | 25.86 | 78 | 0.5916 | 0.8154 |
| 0.3268 | 26.86 | 81 | 0.5912 | 0.8077 |
| 0.3268 | 27.86 | 84 | 0.5142 | 0.8538 |
| 0.3036 | 28.86 | 87 | 0.5492 | 0.8077 |
| 0.3066 | 29.86 | 90 | 0.6007 | 0.8231 |
| 0.3066 | 30.86 | 93 | 0.5748 | 0.8231 |
| 0.2538 | 31.86 | 96 | 0.6027 | 0.7692 |
| 0.2538 | 32.86 | 99 | 0.6979 | 0.7462 |
| 0.2281 | 33.86 | 102 | 0.7002 | 0.7615 |
| 0.2183 | 34.86 | 105 | 0.6650 | 0.7769 |
| 0.2183 | 35.86 | 108 | 0.5192 | 0.8462 |
| 0.2202 | 36.86 | 111 | 0.5389 | 0.8308 |
| 0.2202 | 37.86 | 114 | 0.5050 | 0.8385 |
| 0.1906 | 38.86 | 117 | 0.5722 | 0.7769 |
| 0.154 | 39.86 | 120 | 0.5239 | 0.8308 |
| 0.154 | 40.86 | 123 | 0.4448 | 0.8615 |
| 0.1474 | 41.86 | 126 | 0.4623 | 0.8615 |
| 0.1474 | 42.86 | 129 | 0.4282 | 0.8615 |
| 0.1345 | 43.86 | 132 | 0.5087 | 0.8615 |
| 0.1567 | 44.86 | 135 | 0.4859 | 0.8385 |
| 0.1567 | 45.86 | 138 | 0.6603 | 0.8077 |
| 0.1731 | 46.86 | 141 | 0.5379 | 0.8385 |
| 0.1731 | 47.86 | 144 | 0.8666 | 0.7538 |
| 0.1606 | 48.86 | 147 | 0.7518 | 0.8 |
| 0.1484 | 49.86 | 150 | 0.5986 | 0.8385 |
| 0.1484 | 50.86 | 153 | 0.6368 | 0.8231 |
| 0.2256 | 51.86 | 156 | 0.4639 | 0.8692 |
| 0.2256 | 52.86 | 159 | 0.5533 | 0.8462 |
| 0.1178 | 53.86 | 162 | 0.5038 | 0.8615 |
| 0.0815 | 54.86 | 165 | 0.5052 | 0.8692 |
| 0.0815 | 55.86 | 168 | 0.4337 | 0.8846 |
| 0.0998 | 56.86 | 171 | 0.4422 | 0.8769 |
| 0.0998 | 57.86 | 174 | 0.4317 | 0.8692 |
| 0.0855 | 58.86 | 177 | 0.4025 | 0.8923 |
| 0.0962 | 59.86 | 180 | 0.4605 | 0.8769 |
| 0.0962 | 60.86 | 183 | 0.4356 | 0.8769 |
| 0.0763 | 61.86 | 186 | 0.4614 | 0.8769 |
| 0.0763 | 62.86 | 189 | 0.4382 | 0.8846 |
| 0.0902 | 63.86 | 192 | 0.4701 | 0.8692 |
| 0.0654 | 64.86 | 195 | 0.4922 | 0.8692 |
| 0.0654 | 65.86 | 198 | 0.5413 | 0.8538 |
| 0.0651 | 66.86 | 201 | 0.5759 | 0.8615 |
| 0.0651 | 67.86 | 204 | 0.4238 | 0.9 |
| 0.0822 | 68.86 | 207 | 0.3500 | 0.9154 |
| 0.0625 | 69.86 | 210 | 0.3878 | 0.8923 |
| 0.0625 | 70.86 | 213 | 0.4952 | 0.8615 |
| 0.0548 | 71.86 | 216 | 0.4544 | 0.8615 |
| 0.0548 | 72.86 | 219 | 0.5497 | 0.8769 |
| 0.054 | 73.86 | 222 | 0.4434 | 0.8846 |
| 0.0543 | 74.86 | 225 | 0.4732 | 0.8769 |
| 0.0543 | 75.86 | 228 | 0.4425 | 0.8923 |
| 0.0881 | 76.86 | 231 | 0.4788 | 0.8769 |
| 0.0881 | 77.86 | 234 | 0.5448 | 0.8769 |
| 0.061 | 78.86 | 237 | 0.4221 | 0.9077 |
| 0.0567 | 79.86 | 240 | 0.4404 | 0.8769 |
| 0.0567 | 80.86 | 243 | 0.4099 | 0.9 |
| 0.052 | 81.86 | 246 | 0.5259 | 0.8769 |
| 0.052 | 82.86 | 249 | 0.5874 | 0.8692 |
| 0.0444 | 83.86 | 252 | 0.5555 | 0.8846 |
| 0.0332 | 84.86 | 255 | 0.5156 | 0.8615 |
| 0.0332 | 85.86 | 258 | 0.4564 | 0.8615 |
| 0.0449 | 86.86 | 261 | 0.4826 | 0.8692 |
| 0.0449 | 87.86 | 264 | 0.4726 | 0.8615 |
| 0.0385 | 88.86 | 267 | 0.4206 | 0.8846 |
| 0.0356 | 89.86 | 270 | 0.4050 | 0.8769 |
| 0.0356 | 90.86 | 273 | 0.4161 | 0.8923 |
| 0.0391 | 91.86 | 276 | 0.4100 | 0.9077 |
| 0.0391 | 92.86 | 279 | 0.4047 | 0.9 |
| 0.0249 | 93.86 | 282 | 0.4044 | 0.9 |
| 0.0399 | 94.86 | 285 | 0.3968 | 0.8846 |
| 0.0399 | 95.86 | 288 | 0.3802 | 0.9 |
| 0.031 | 96.86 | 291 | 0.3689 | 0.9 |
| 0.031 | 97.86 | 294 | 0.3616 | 0.9077 |
| 0.036 | 98.86 | 297 | 0.3584 | 0.9077 |
| 0.0386 | 99.86 | 300 | 0.3574 | 0.9077 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
b0939940eb87ab4d5ab89988f1b2934a
tomekkorbak/pensive_keller
tomekkorbak
null
2
0
null
0
null
false
false
false
mit
['en']
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
8,995
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pensive_keller

This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3125
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6

# Full config

{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
                          'tomekkorbak/detoxify-pile-chunk3-50000-100000',
                          'tomekkorbak/detoxify-pile-chunk3-100000-150000',
                          'tomekkorbak/detoxify-pile-chunk3-150000-200000',
                          'tomekkorbak/detoxify-pile-chunk3-200000-250000',
                          'tomekkorbak/detoxify-pile-chunk3-250000-300000',
                          'tomekkorbak/detoxify-pile-chunk3-300000-350000',
                          'tomekkorbak/detoxify-pile-chunk3-350000-400000',
                          'tomekkorbak/detoxify-pile-chunk3-400000-450000',
                          'tomekkorbak/detoxify-pile-chunk3-450000-500000',
                          'tomekkorbak/detoxify-pile-chunk3-500000-550000',
                          'tomekkorbak/detoxify-pile-chunk3-550000-600000',
                          'tomekkorbak/detoxify-pile-chunk3-600000-650000',
                          'tomekkorbak/detoxify-pile-chunk3-650000-700000',
                          'tomekkorbak/detoxify-pile-chunk3-700000-750000',
                          'tomekkorbak/detoxify-pile-chunk3-750000-800000',
                          'tomekkorbak/detoxify-pile-chunk3-800000-850000',
                          'tomekkorbak/detoxify-pile-chunk3-850000-900000',
                          'tomekkorbak/detoxify-pile-chunk3-900000-950000',
                          'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
                          'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
                          'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
                          'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
                          'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
                          'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
                          'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
                          'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
                          'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
                          'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
                          'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
                          'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
                          'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
                          'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
                          'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
                          'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
                          'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
                          'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
                          'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
                          'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
             'is_split_by_sentences': True,
             'skip_tokens': 1661599744},
 'generation': {'every_n_steps': 32,
                'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
                'scenario_configs': [{'generate_kwargs': {'do_sample': True,
                                                          'max_length': 128,
                                                          'min_length': 10,
                                                          'temperature': 0.7,
                                                          'top_k': 0,
                                                          'top_p': 0.9},
                                      'name': 'unconditional',
                                      'num_samples': 2048},
                                     {'generate_kwargs': {'do_sample': True,
                                                          'max_length': 128,
                                                          'min_length': 10,
                                                          'temperature': 0.7,
                                                          'top_k': 0,
                                                          'top_p': 0.9},
                                      'name': 'challenging_rtp',
                                      'num_samples': 2048,
                                      'prompts_path': 'resources/challenging_rtp.jsonl'}],
                'scorer_config': {'device': 'cuda:0'}},
 'kl_gpt3_callback': {'every_n_steps': 32, 'max_tokens': 64, 'num_samples': 4096},
 'model': {'from_scratch': False,
           'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True},
           'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0',
                            'value_head_config': {'is_detached': False}},
           'path_or_name': 'tomekkorbak/goofy_pasteur'},
 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'},
 'tokenizer': {'path_or_name': 'gpt2'},
 'training': {'dataloader_num_workers': 0,
              'effective_batch_size': 512,
              'evaluation_strategy': 'no',
              'fp16': True,
              'hub_model_id': 'pensive_keller',
              'hub_strategy': 'all_checkpoints',
              'learning_rate': 0.001,
              'logging_first_step': True,
              'logging_steps': 1,
              'num_tokens': 3300000000,
              'output_dir': 'training_output104340',
              'per_device_train_batch_size': 16,
              'push_to_hub': True,
              'remove_unused_columns': False,
              'save_steps': 3346,
              'save_strategy': 'steps',
              'seed': 42,
              'tokens_already_seen': 1661599744,
              'warmup_ratio': 0.01,
              'weight_decay': 0.1}}

# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1pk4cf6z
940aaa83cd4927049574f7f08a71d83a
paola-md/recipe-lr0.0001-wd0.02-bs64
paola-md
roberta
6
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,470
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# recipe-lr0.0001-wd0.02-bs64

This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- Rmse: 0.5284
- Mse: 0.2792
- Mae: 0.4268

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse   | Mse    | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2799        | 1.0   | 623  | 0.2789          | 0.5281 | 0.2789 | 0.4218 |
| 0.2786        | 2.0   | 1246 | 0.2792          | 0.5284 | 0.2792 | 0.4268 |
| 0.2785        | 3.0   | 1869 | 0.2792          | 0.5284 | 0.2792 | 0.4268 |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
d87066ec43997abba4556f2e11dbd6b3
jonfd/convbert-small-igc-is
jonfd
convbert
8
3
transformers
0
feature-extraction
true
true
false
cc-by-4.0
['is']
['igc']
null
0
0
0
0
0
0
0
[]
false
true
true
607
false
# Icelandic ConvBERT-Small

This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a Unigram tokenizer with a vocabulary size of 96,000.

# Acknowledgments

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).

This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
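# Usage sketch

The checkpoint can be loaded with the standard `transformers` auto classes for feature extraction. This snippet is not part of the original release; it is a minimal sketch assuming the hosted tokenizer files load through `AutoTokenizer`:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jonfd/convbert-small-igc-is")
model = AutoModel.from_pretrained("jonfd/convbert-small-igc-is")

# Encode an Icelandic sentence and fetch the last-layer hidden states
inputs = tokenizer("Halló heimur!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```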
7abac0804c0c97f9352a866739a6d2ff
MultiBertGunjanPatrick/multiberts-seed-4-900k
MultiBertGunjanPatrick
bert
7
4
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-4']
false
true
true
6,483
false
# MultiBERTs Seed 4 Checkpoint 900k (uncased)

Seed 4 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-900k')
model = BertModel.from_pretrained("multiberts-seed-4-900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following (an illustrative sketch of this scheme is given at the end of this card):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author    = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and
               Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and
               Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title     = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal   = {CoRR},
  volume    = {abs/2106.16163},
  year      = {2021},
  url       = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint    = {2106.16163},
  timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
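### Appendix: masking sketch

As a companion to the "Preprocessing" section above, here is an illustrative sketch of the 80/10/10 masking scheme. This is not the authors' pipeline; it is just a plain-Python rendering of the procedure described in the card:

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style masking: select 15% of tokens; of those, 80% become
    [MASK], 10% become a random token, and 10% are left unchanged."""
    labels = [-100] * len(token_ids)  # -100 marks positions the MLM loss ignores
    masked = list(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok  # the model must predict the original token here
            r = random.random()
            if r < 0.8:
                masked[i] = mask_token_id
            elif r < 0.9:
                masked[i] = random.randrange(vocab_size)
            # else: keep the original token as is
    return masked, labels
```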
3eae80ffc266b168c728658bca2de666
medhabi/distilbert-base-uncased-mlm-ta-local
medhabi
distilbert
10
5
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,312
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-mlm-ta-local

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0658

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4431        | 1.0   | 3125 | 2.1817          |
| 2.2197        | 2.0   | 6250 | 2.0929          |
| 2.1519        | 3.0   | 9375 | 2.0696          |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.11.6
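### Usage sketch

Since this is a masked-language-modelling checkpoint, it can be queried with the `fill-mask` pipeline. A minimal sketch (not part of the original card; the example sentence is arbitrary):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="medhabi/distilbert-base-uncased-mlm-ta-local")

# Print the top predictions for the masked position
for pred in fill("The movie was absolutely [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```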
d4ed788004f1184cb4f9db6888bde7d7
microsoft/GODEL-v1_1-base-seq2seq
microsoft
t5
9
62,587
transformers
25
conversational
true
false
false
mit
null
null
null
0
0
0
0
2
2
0
['conversational']
false
true
true
3,483
false
### Large-Scale Pre-Training for Goal-Directed Dialog (GODEL)

GODEL is a large-scale pre-trained model for goal-directed dialogs. It is parameterized with a Transformer-based encoder-decoder model and trained for response generation grounded in external text, which allows more effective fine-tuning on dialog tasks that require conditioning the response on information that is external to the current conversation (e.g., a retrieved document). The pre-trained model can be efficiently fine-tuned and adapted to accomplish a new dialog task with a handful of task-specific dialogs. The v1.1 model is trained on 551M multi-turn dialogs from Reddit discussion threads and 5M instruction and knowledge-grounded dialogs.

##### Multi-turn generation examples from an interactive environment:

Chitchat example:
> Instruction: given a dialog context, you need to response empathically. <br>
> User: Does money buy happiness? <br>
> Agent: It is a question. Money buys you a lot of things, but not enough to buy happiness. <br>
> User: What is the best way to buy happiness ? <br>
> Agent: Happiness is bought through your experience and not money. <br>

Grounded response generation example:
> Instruction: given a dialog context and related knowledge, you need to response safely based on the knowledge. <br>
> Knowledge: The best Stardew Valley mods PCGamesN_0 / About SMAPI <br>
> User: My favorite game is stardew valley. stardew valley is very fun. <br>
> Agent: I love Stardew Valley mods, like PCGamesN_0 / About SMAPI. <br>

Please find the information about preprocessing, training and full details of GODEL on the [project webpage](https://aka.ms/GODEL).

ArXiv paper: [https://arxiv.org/abs/2206.11309](https://arxiv.org/abs/2206.11309)

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-base-seq2seq")

def generate(instruction, knowledge, dialog):
    if knowledge != '':
        knowledge = '[KNOWLEDGE] ' + knowledge
    dialog = ' EOS '.join(dialog)
    query = f"{instruction} [CONTEXT] {dialog} {knowledge}"
    input_ids = tokenizer(f"{query}", return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=128, min_length=8, top_p=0.9, do_sample=True)
    output = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return output

# Instruction for a chitchat task
instruction = f'Instruction: given a dialog context, you need to response empathically.'
# Leave the knowledge empty
knowledge = ''
dialog = [
    'Does money buy happiness?',
    'It is a question. Money buys you a lot of things, but not enough to buy happiness.',
    'What is the best way to buy happiness ?'
]
response = generate(instruction, knowledge, dialog)
print(response)
```

### Citation

If you use this code and data in your research, please cite our arXiv paper:

```
@misc{peng2022godel,
  author = {Peng, Baolin and Galley, Michel and He, Pengcheng and Brockett, Chris and Liden, Lars and Nouri, Elnaz and Yu, Zhou and Dolan, Bill and Gao, Jianfeng},
  title = {GODEL: Large-Scale Pre-training for Goal-Directed Dialog},
  howpublished = {arXiv},
  year = {2022},
  month = {June},
  url = {https://www.microsoft.com/en-us/research/publication/godel-large-scale-pre-training-for-goal-directed-dialog/},
}
```
8a69dabbcde05feae031b0c45f18973a
jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
503
false
# exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
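A minimal transcription sketch with the `transformers` ASR pipeline (not part of the original card; `sample.wav` is a placeholder path, and decoding audio files requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s756",
)

# The pipeline decodes and resamples the file to the model's 16kHz rate
print(asr("sample.wav")["text"])
```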
b0ced49778303150369c458bd0fe5279
anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol
anki08
t5
14
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,100
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol

This model is a fine-tuned version of [anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol](https://huggingface.co/anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1468
- Bleu: 30.3266
- Gen Len: 18.8824

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 17   | 0.1486          | 30.3537 | 18.8824 |
| No log        | 2.0   | 34   | 0.1474          | 30.2522 | 18.8824 |
| No log        | 3.0   | 51   | 0.1465          | 30.2522 | 18.8824 |
| No log        | 4.0   | 68   | 0.1461          | 30.2522 | 18.8824 |
| No log        | 5.0   | 85   | 0.1469          | 30.2522 | 18.8824 |
| No log        | 6.0   | 102  | 0.1457          | 29.8889 | 18.8824 |
| No log        | 7.0   | 119  | 0.1470          | 30.3537 | 18.8824 |
| No log        | 8.0   | 136  | 0.1469          | 30.3537 | 18.8824 |
| No log        | 9.0   | 153  | 0.1469          | 30.3266 | 18.8824 |
| No log        | 10.0  | 170  | 0.1468          | 30.3266 | 18.8824 |

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
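A minimal inference sketch for this seq2seq checkpoint (not part of the original card; the card does not document a prompt format, so the plain-text input below is an assumption):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "anki08/t5-small-finetuned-text2log-finetuned-nl-to-fol-finetuned-nl-to-fol-finetuned-nl-to-fol"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Translate a natural-language sentence into the model's logical form
inputs = tokenizer("every dog chases some cat", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```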
263171df3605cdbf87710be2a86d4d15
jonatasgrosman/exp_w2v2t_ar_hubert_s947
jonatasgrosman
hubert
10
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ar']
false
true
true
452
false
# exp_w2v2t_ar_hubert_s947 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
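Since the checkpoint was produced with HuggingSound, it can also be used through that library directly. A minimal sketch (the audio paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_hubert_s947")

# transcribe() takes a list of audio file paths and returns one dict per file
transcriptions = model.transcribe(["sample1.mp3", "sample2.wav"])
print(transcriptions[0]["transcription"])
```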
755583d5929156a9dc42f047ca66df08
GamingDaveUK/HorrorByDave
GamingDaveUK
null
23
0
null
2
null
false
false
false
wtfpl
null
null
null
4
0
1
3
0
0
0
[]
false
true
true
1,263
false
One of the first embeddings I have created; it adds a horror atmosphere and monsters to an image.

Download it into the embeddings folder and use it with "by HorrorByDave" (or whatever you have renamed the embedding).

Samples (if Hugging Face keeps the PNG data, you can recover the prompt by loading a sample into PNG Info):

![Sample 1](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(1\).png)
![Sample 2](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(2\).png)
![Sample 3](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(3\).png)
![Sample 4](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(4\).png)
![Sample 5](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(5\).png)
![Sample 6](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(6\).png)
![Sample 7](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(7\).png)
![Sample 8](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(8\).png)
![Sample 9](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(9\).png)
![Sample 10](https://huggingface.co/GamingDaveUK/HorrorByDave/resolve/main/Sample%20\(10\).png)
6bd0e94ba5544624ee9f85c1c38e06f6
Helsinki-NLP/opus-mt-fr-lu
Helsinki-NLP
marian
10
9
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-fr-lu

* source languages: fr
* target languages: lu
* OPUS readme: [fr-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.eval.txt)

## Benchmarks

| testset     | BLEU | chr-F |
|-------------|------|-------|
| JW300.fr.lu | 25.5 | 0.471 |
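## Usage sketch

The converted checkpoint can be used with the standard MarianMT classes. This snippet is a minimal sketch, not part of the original release notes:

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-fr-lu"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Translate a French sentence into Luba-Katanga
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```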
f1734e6ff501dfa08d5303fd1e48f9d2
juro95/fourth_iteration_model
juro95
xlm-roberta
5
1
transformers
0
token-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,162
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# fourth_iteration_model

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction of this optimizer setup is sketched after this card):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 65805, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

### Framework versions

- Transformers 4.25.1
- TensorFlow 2.6.5
- Datasets 2.3.2
- Tokenizers 0.13.2
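The optimizer configuration above corresponds roughly to the following `transformers` Keras helper call. This is a reconstruction from the logged config, not the original training script, and it assumes no warmup steps (the config does not record any):

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) PolynomialDecay schedule:
# 65,805 steps from 2e-5 down to 0.0, with a 0.01 weight-decay rate
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=65805,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```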
96dd4846c0e5853d66c7a5820155f5b5
jonatasgrosman/exp_w2v2r_es_vp-100k_gender_male-0_female-10_s33
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['es']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'es']
false
true
true
498
false
# exp_w2v2r_es_vp-100k_gender_male-0_female-10_s33

Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
2d088d06cabd960f66c230c51b240556
lighthouse/mdeberta-v3-base-kor-further
lighthouse
deberta-v2
7
355
transformers
3
null
true
false
false
mit
['multilingual', 'en', 'ko', 'ar', 'bg', 'de', 'el', 'es', 'fr', 'hi', 'ru', 'sw', 'th', 'tr', 'ur', 'vi', 'zh']
null
null
0
0
0
0
2
1
1
['deberta', 'deberta-v3', 'mdeberta', 'korean', 'pretraining']
false
true
true
4,059
false
# mDeBERTa-v3-base-kor-further

> 💡 The project below was carried out by KPMG Lighthouse Korea.
> At KPMG Lighthouse Korea, we build edge-technology NLP/Vision AI models to solve a variety of problems in the financial area.
> https://kpmgkr.notion.site/

## What is DeBERTa?

- [DeBERTa](https://arxiv.org/abs/2006.03654) applies `Disentangled Attention` + `Enhanced Mask Decoder` to learn the positional information of words effectively. With this idea, and unlike the absolute position embeddings used by BERT and RoBERTa, DeBERTa represents the relative position of each word as a learnable vector during training. As a result, it showed solid performance improvements over BERT and RoBERTa.
- [DeBERTa-v3](https://arxiv.org/abs/2111.09543) improved training efficiency with an ELECTRA-style pre-training method that replaces the MLM (Masked Language Model) objective of previous versions with the RTD (Replaced Token Detection) task, and with Gradient-Disentangled Embedding Sharing.
- To train the DeBERTa architecture on rich Korean data, `mDeBERTa-v3-base-kor-further` is a language model that **further pre-trains** Microsoft's `mDeBERTa-v3-base` on about 40GB of Korean data.

## How to Use

- Requirements

```
pip install transformers
pip install sentencepiece
```

- Huggingface Hub

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("mdeberta-v3-base-kor-further")  # DebertaV2Model
tokenizer = AutoTokenizer.from_pretrained("mdeberta-v3-base-kor-further")  # DebertaV2Tokenizer (SentencePiece)
```

## Pre-trained Models

- The model architecture is identical to the `mdeberta-v3-base` released by Microsoft.

| | Vocabulary(K) | Backbone Parameters(M) | Hidden Size | Layers | Note |
| --- | --- | --- | --- | --- | --- |
| mdeberta-v3-base-kor-further (same as mdeberta-v3-base) | 250 | 86 | 768 | 12 | 250K new SPM vocab |

## Further Pretraining Details (MLM Task)

- `mDeBERTa-v3-base-kor-further` was further pre-trained from `microsoft/mDeBERTa-v3-base` on about 40GB of Korean data using the MLM task.

| | Max length | Learning Rate | Batch Size | Train Steps | Warm-up Steps |
| --- | --- | --- | --- | --- | --- |
| mdeberta-v3-base-kor-further | 512 | 2e-5 | 8 | 5M | 50k |

## Datasets

- About 40GB of Korean data was used for the further pre-training: the Modu Corpus (newspapers, spoken and written language), Korean Wikipedia, National Petitions, and more.
  - Train: 10M lines, 5B tokens
  - Valid: 2M lines, 1B tokens
- cf) The original mDeBERTa-v3 was trained, like XLM-R, on the [cc-100 dataset](https://data.statmt.org/cc-100/), whose Korean portion is 54GB.

## Fine-tuning on NLU Tasks - Base Model

| Model | Size | NSMC(acc) | Naver NER(F1) | PAWS (acc) | KorNLI (acc) | KorSTS (spearman) | Question Pair (acc) | KorQuaD (Dev) (EM/F1) | Korean-Hate-Speech (Dev) (F1) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| mdeberta-base | 534M | 90.01 | 87.43 | 85.55 | 80.41 | **82.65** | 94.06 | 65.48 / 89.74 | 62.91 |
| mdeberta-base-kor-further (Ours) | 534M | **90.52** | **87.87** | **85.85** | **80.65** | 81.90 | **94.98** | **66.07 / 90.35** | **68.16** |

## KPMG Lighthouse KR

https://kpmgkr.notion.site/

## Citation

```
@misc{he2021debertav3,
    title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
    author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
    year={2021},
    eprint={2111.09543},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

```
@inproceedings{
    he2021deberta,
    title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
    author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=XPZIaotutsD}
}
```

## Reference

- [mDeBERTa-v3-base-kor-further](https://github.com/kpmg-kr/mDeBERTa-v3-base-kor-further)
- [DeBERTa](https://github.com/microsoft/DeBERTa)
- [Huggingface Transformers](https://github.com/huggingface/transformers)
- [Modu Corpus (모두의 말뭉치)](https://corpus.korean.go.kr/)
- [Korpora: Korean Corpora Archives](https://github.com/ko-nlp/Korpora)
- [sooftware/Korean PLM](https://github.com/sooftware/Korean-PLM)
9c087b788db5dcec6fb5ced8d0aabc6d
yanaiela/roberta-base-epoch_24
yanaiela
roberta
9
3
transformers
0
fill-mask
true
false
false
mit
['en']
['wikipedia', 'bookcorpus']
null
0
0
0
0
0
0
0
['roberta-base', 'roberta-base-epoch_24']
false
true
true
2,102
false
# RoBERTa, Intermediate Checkpoint - Epoch 24

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training) to enable the study of the training dynamics of such models, and other possible use cases.

These models were trained as part of a work that studies how simple statistics from data, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_24.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences from the original model:

* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as these corpora are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch (note that the model name points at this checkpoint, epoch_24):

```python
from transformers import pipeline

model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_24', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
9764a61c906f419e923e7b72e8ef18e6
rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab
rafiulrumy
wav2vec2
12
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,108
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xlsr-hindi-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
6bf8593be786fed980bd1484e2c0efcc
SetFit/deberta-v3-large__sst2__train-8-8
SetFit
deberta-v2
10
5
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,135
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-v3-large__sst2__train-8-8

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7414
- Accuracy: 0.5623

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6597        | 1.0   | 3    | 0.7716          | 0.25     |
| 0.6376        | 2.0   | 6    | 0.7802          | 0.25     |
| 0.5857        | 3.0   | 9    | 0.6625          | 0.75     |
| 0.4024        | 4.0   | 12   | 0.5195          | 0.75     |
| 0.2635        | 5.0   | 15   | 0.4222          | 1.0      |
| 0.1714        | 6.0   | 18   | 0.4410          | 0.5      |
| 0.1267        | 7.0   | 21   | 0.7773          | 0.75     |
| 0.0582        | 8.0   | 24   | 0.9070          | 0.75     |
| 0.0374        | 9.0   | 27   | 0.9539          | 0.75     |
| 0.0204        | 10.0  | 30   | 1.0507          | 0.75     |
| 0.012         | 11.0  | 33   | 1.2802          | 0.5      |
| 0.0086        | 12.0  | 36   | 1.4272          | 0.5      |
| 0.0049        | 13.0  | 39   | 1.4803          | 0.5      |
| 0.0039        | 14.0  | 42   | 1.4912          | 0.5      |
| 0.0031        | 15.0  | 45   | 1.5231          | 0.5      |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
bd544d6560719761ca1159d4d3235134
susumu2357/bert-base-swedish-squad2
susumu2357
bert
9
11
transformers
1
question-answering
true
true
true
apache-2.0
['sv']
['susumu2357/squad_v2_sv']
null
0
0
0
0
0
0
0
['squad']
false
true
true
1,271
false
# Swedish BERT Fine-tuned on SQuAD v2

This model is a fine-tuning checkpoint of Swedish BERT on SQuAD v2.

## Training data

Fine-tuning was done based on the pre-trained model [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased).

Training and dev datasets are our [Swedish translation of SQuAD v2](https://github.com/susumu2357/SQuAD_v2_sv). The dataset is also available on the Hub: [susumu2357/squad_v2_sv](https://huggingface.co/datasets/susumu2357/squad_v2_sv).

## Hyperparameters

```
batch_size = 16
n_epochs = 2
max_seq_len = 386
learning_rate = 3e-5
warmup_steps = 2900  # warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```

## Eval results

```
'exact': 66.72642524202223
'f1': 70.11149581003404
'total': 11156
'HasAns_exact': 55.574745730186144
'HasAns_f1': 62.821693965983044
'HasAns_total': 5211
'NoAns_exact': 76.50126156433979
'NoAns_f1': 76.50126156433979
'NoAns_total': 5945
```

## Limitations and bias

This model may contain biases due to mistranslations of the SQuAD dataset.

## BibTeX entry and citation info

```bibtex
@misc{svSQuADbert,
  author = {Susumu Okazawa},
  title = {Swedish BERT Fine-tuned on Swedish SQuAD 2.0},
  year = {2021},
  howpublished = {\url{https://huggingface.co/susumu2357/bert-base-swedish-squad2}},
}
```
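## Usage sketch

A minimal extractive QA example with the `question-answering` pipeline (not part of the original card; the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="susumu2357/bert-base-swedish-squad2")

result = qa(
    question="Vem var Sveriges statsminister mellan 2014 och 2021?",
    context="Stefan Löfven var Sveriges statsminister mellan 2014 och 2021.",
)
print(result["answer"], result["score"])  # span text and confidence
```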
96d1fcb83567a550e19a8e1949d73931
BaxterAI/SentimentClassifier
BaxterAI
distilbert
60
1
transformers
1
text-classification
true
false
false
apache-2.0
null
['amazon_polarity']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,042
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# SentimentClassifier

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4425
- Accuracy: 0.91
- F1: 0.91

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
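### Usage sketch

A minimal inference example (not part of the original card; the label names depend on the fine-tuning config and may appear as LABEL_0/LABEL_1 for negative/positive):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="BaxterAI/SentimentClassifier")

# Returns a list of {'label': ..., 'score': ...} dicts
print(classifier("This blender works great and was easy to clean."))
```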
ce3f644613e24611548585674bcf8726