repo_id (stringlengths 4-110) | author (stringlengths 2-27, ⌀) | model_type (stringlengths 2-29, ⌀) | files_per_repo (int64 2-15.4k) | downloads_30d (int64 0-19.9M) | library (stringlengths 2-37, ⌀) | likes (int64 0-4.34k) | pipeline (stringlengths 5-30, ⌀) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (stringlengths 2-30) | languages (stringlengths 4-1.63k, ⌀) | datasets (stringlengths 2-2.58k, ⌀) | co2 (stringclasses, 29 values) | prs_count (int64 0-125) | prs_open (int64 0-120) | prs_merged (int64 0-15) | prs_closed (int64 0-28) | discussions_count (int64 0-218) | discussions_open (int64 0-148) | discussions_closed (int64 0-70) | tags (stringlengths 2-513) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64 401-598k) | is_nc (bool, 1 class) | readme (stringlengths 0-598k) | hash (stringlengths 32-32) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
sarahmiller137/distilbert-base-uncased-ft-m3-lc
|
sarahmiller137
|
distilbert
| 8 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
cc
|
['en']
|
['MIMIC-III']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text classification']
| false | true | true | 1,459 | false |
## Model information:
This model is the [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model fine-tuned on radiology report texts from the MIMIC-III database. The task performed was text classification, in order to benchmark this model against a selection of other BERT variants for the classification of MIMIC-III radiology report texts into two classes. Labels of [0, 1] were assigned as follows: radiology reports in MIMIC-III linked to an ICD-9 diagnosis code for lung cancer were labelled 1, and a random sample of reports not linked to any cancer diagnosis code were labelled 0.
## Intended uses:
This model is intended to be used to classify texts to identify the presence of lung cancer. The model will predict labels of [0, 1].
## Limitations:
Note that the dataset and model may not be fully representative or suitable for all needs. It is recommended that the dataset paper and the base model card be reviewed before use:
- [MIMIC-III](https://www.nature.com/articles/sdata201635.pdf)
- [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)
## How to use:
Load the tokenizer and model from the Hub using the following checkpoint:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-m3-lc")
model = AutoModel.from_pretrained("sarahmiller137/distilbert-base-uncased-ft-m3-lc")
```
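For classification inference, the checkpoint can also be loaded with its sequence-classification head; a minimal sketch using the text-classification pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

# Text classification with the fine-tuned checkpoint;
# label 1 = report linked to a lung cancer ICD-9 code, label 0 = not linked
classifier = pipeline(
    "text-classification",
    model="sarahmiller137/distilbert-base-uncased-ft-m3-lc",
)
print(classifier("Example radiology report text."))
```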
|
57d75d1b672ccd42b98de073a93e9c49
|
jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de']
| false | true | true | 498 | false |
# exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
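A minimal transcription sketch using the HuggingSound library (the audio paths are illustrative and should point to 16 kHz recordings):
```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint through HuggingSound
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364")

# Transcribe a list of audio files (sampled at 16 kHz)
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```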
|
d1c78de5a18fa342c65e4652359158ac
|
pfluo/k2fsa-zipformer-chinese-english-mixed
|
pfluo
| null | 25 | 0 | null | 1 | null | false | false | false |
apache-2.0
| null | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,425 | false |
## Chinese-English ASR model using k2-zipformer-streaming
### AIShell-1 and WenetSpeech test set results with modified-beam-search streaming decoding, using epoch-14.pt
| decode_chunk_len | AIShell-1 | TEST_NET | TEST_MEETING |
|------------------|-----------|----------|--------------|
| 32 | 3.19 | 9.58 | 9.51 |
| 64 | 3.04 | 8.97 | 8.83 |
### Training and decoding commands
```
nohup ./pruned_transducer_stateless7_streaming/train.py --world-size 8 --num-epochs 30 --start-epoch 1 --feedforward-dims "1024,1024,1536,1536,1024" --exp-dir pruned_transducer_stateless7_streaming/exp --max-duration 360 > pruned_transducer_stateless7_streaming/exp/nohup.zipformer &
nohup ./pruned_transducer_stateless7_streaming/decode.py --epoch 6 --avg 1 --exp-dir ./pruned_transducer_stateless7_streaming/exp --max-duration 600 --decode-chunk-len 32 --decoding-method modified_beam_search --beam-size 4 > nohup.zipformer.deocode &
```
### Model units: char+BPE (see `data/lang_char_bpe/tokens.txt`)
### Tips
The relevant k2-fsa versions and training parameters are:
```
{'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0,
 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000,
 'env_info': {'k2-version': '1.23.2', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'a74f59dba1863cd9386ba4d8815850421260eee7', 'k2-git-date': 'Fri Dec 2 08:32:22 2022',
  'lhotse-version': '1.5.0.dev+git.8ce38fc.dirty', 'torch-version': '1.11.0+cu113', 'torch-cuda-available': True, 'torch-cuda-version': '11.3', 'python-version': '3.7',
  'icefall-git-branch': 'master', 'icefall-git-sha1': '11b08db-dirty', 'icefall-git-date': 'Thu Jan 12 10:19:21 2023', 'icefall-path': '/opt/conda/lib/python3.7/site-packages',
  'k2-path': '/opt/conda/lib/python3.7/site-packages/k2/__init__.py', 'lhotse-path': '/opt/conda/lib/python3.7/site-packages/lhotse/__init__.py', 'hostname': 'xxx', 'IP address': 'x.x.x.x'},
 'world_size': 8, 'master_port': 12354, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0,
 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp'), 'lang_dir': 'data/lang_char_bpe', 'base_lr': 0.01, 'lr_batches': 5000, 'lr_epochs': 3.5,
 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False,
 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': False,
 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,1536,1536,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192',
 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512,
 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 360,
 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True,
 'num_workers': 8, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'training_subset': '12k_hour', 'blank_id': 0, 'vocab_size': 6254}
```
|
fa5e48f9551d52ca2422b614da4070a6
|
hugogolastico/finetuning-sentiment-model-3000-samples
|
hugogolastico
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,040 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3185
- Accuracy: 0.8667
- F1: 0.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
e56c2df2a6a05469de0931dea9931408
|
sd-concepts-library/guttestreker
|
sd-concepts-library
| null | 18 | 0 | null | 8 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,062 | false |
### guttestreker on Stable Diffusion
This is the `<guttestreker>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
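Outside the notebooks, the concept can also be loaded with 🧨 Diffusers; a minimal sketch, assuming a recent `diffusers` release with `load_textual_inversion` and the `runwayml/stable-diffusion-v1-5` base checkpoint:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint and attach the learned <guttestreker> embedding
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/guttestreker")

# Use the concept token in the prompt
image = pipe("a mountain landscape in the style of <guttestreker>").images[0]
image.save("guttestreker-landscape.png")
```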
Here is the new concept you will be able to use as a `style`:













|
d23ed7998bf843adf276e43e8dc76686
|
MaryaAI/opus-mt-ar-en-finetunedTanzil-v5-ar-to-en
|
MaryaAI
|
marian
| 9 | 5 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,907 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetunedTanzil-v5-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8101
- Validation Loss: 0.9477
- Train Bleu: 9.3241
- Train Gen Len: 88.73
- Train Rouge1: 56.4906
- Train Rouge2: 34.2668
- Train Rougel: 53.2279
- Train Rougelsum: 53.7836
- Epoch: 2
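Since this repository ships TensorFlow weights, inference might look like the following minimal sketch (the Arabic input sentence is illustrative):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

checkpoint = "MaryaAI/opus-mt-ar-en-finetunedTanzil-v5-ar-to-en"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Translate an Arabic sentence into English
inputs = tokenizer("السلام عليكم", return_tensors="tf")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```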
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:------------:|:------------:|:------------:|:---------------:|:-----:|
| 0.8735 | 0.9809 | 11.0863 | 78.68 | 56.4557 | 33.3673 | 53.4828 | 54.1197 | 0 |
| 0.8408 | 0.9647 | 9.8543 | 88.955 | 57.3797 | 34.3539 | 53.8783 | 54.3714 | 1 |
| 0.8101 | 0.9477 | 9.3241 | 88.73 | 56.4906 | 34.2668 | 53.2279 | 53.7836 | 2 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.4.dev0
- Tokenizers 0.10.3
|
33f7f642d95f93003020bd8ab0888dac
|
kingabzpro/wav2vec2-large-xls-r-1b-Indonesian
|
kingabzpro
|
wav2vec2
| 11 | 7 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['id']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'hf-asr-leaderboard', 'robust-speech-event']
| true | true | true | 1,576 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-Indonesian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9550
- Wer: 0.4551
- Cer: 0.1643
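A minimal inference sketch with the 🤗 Transformers ASR pipeline (the audio path is illustrative; inputs should be sampled at 16 kHz):
```python
from transformers import pipeline

# Indonesian speech recognition with the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="kingabzpro/wav2vec2-large-xls-r-1b-Indonesian",
)
print(asr("sample_indonesian_16khz.wav"))
```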
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 3.663 | 7.69 | 200 | 0.7898 | 0.6039 | 0.1848 |
| 0.7424 | 15.38 | 400 | 1.0215 | 0.5615 | 0.1924 |
| 0.4494 | 23.08 | 600 | 1.0901 | 0.5249 | 0.1932 |
| 0.5075 | 30.77 | 800 | 1.1013 | 0.5079 | 0.1935 |
| 0.4671 | 38.46 | 1000 | 1.1034 | 0.4916 | 0.1827 |
| 0.1928 | 46.15 | 1200 | 0.9550 | 0.4551 | 0.1643 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
be9baf1ef1e20b45880f9c5ba79775f9
|
helloway/lenet
|
helloway
| null | 5 | 22 |
mindspore
| 0 |
image-classification
| false | false | false |
apache-2.0
| null |
['mnist']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification']
| false | true | true | 888 | false |
## MindSpore Image Classification models with MNIST on the 🤗Hub!
This repository contains the model from [this notebook on image classification with MNIST dataset using LeNet architecture](https://gitee.com/mindspore/mindspore/blob/r1.2/model_zoo/official/cv/lenet/README.md#).
## LeNet Description
LeNet-5, proposed by Yann LeCun and others in 1998 in the research paper "Gradient-Based Learning Applied to Document Recognition", is one of the earliest convolutional neural network models. It was used for recognizing handwritten and machine-printed characters.
The main reason behind the popularity of this model was its simple and straightforward architecture: a multi-layer convolutional neural network for image classification.

[source](https://www.analyticsvidhya.com/blog/2021/03/the-architecture-of-lenet-5/)
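As an illustration of the architecture described above (a PyTorch sketch for clarity, not the MindSpore implementation stored in this repository):
```python
import torch
from torch import nn

class LeNet5(nn.Module):
    """LeNet-5-style network for 10-class image classification (e.g. MNIST)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.ReLU(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of four 28x28 grayscale images
logits = LeNet5()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```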
|
e1c796dc1c1d4c96216577041344ff70
|
elopezlopez/Bio_ClinicalBERT_fold_6_ternary_v1
|
elopezlopez
|
bert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,668 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_6_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7302
- F1: 0.8128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 292 | 0.5359 | 0.7833 |
| 0.5585 | 2.0 | 584 | 0.5376 | 0.8026 |
| 0.5585 | 3.0 | 876 | 0.6117 | 0.8038 |
| 0.2314 | 4.0 | 1168 | 0.8036 | 0.7974 |
| 0.2314 | 5.0 | 1460 | 0.9467 | 0.8179 |
| 0.1093 | 6.0 | 1752 | 1.2957 | 0.7923 |
| 0.0384 | 7.0 | 2044 | 1.3423 | 0.8026 |
| 0.0384 | 8.0 | 2336 | 1.2644 | 0.8218 |
| 0.021 | 9.0 | 2628 | 1.3093 | 0.8231 |
| 0.021 | 10.0 | 2920 | 1.3282 | 0.8179 |
| 0.0129 | 11.0 | 3212 | 1.3853 | 0.8295 |
| 0.0078 | 12.0 | 3504 | 1.4705 | 0.8154 |
| 0.0078 | 13.0 | 3796 | 1.5063 | 0.8167 |
| 0.0064 | 14.0 | 4088 | 1.5293 | 0.8179 |
| 0.0064 | 15.0 | 4380 | 1.6303 | 0.8128 |
| 0.0085 | 16.0 | 4672 | 1.5945 | 0.8115 |
| 0.0085 | 17.0 | 4964 | 1.6899 | 0.8103 |
| 0.0056 | 18.0 | 5256 | 1.6952 | 0.8064 |
| 0.0055 | 19.0 | 5548 | 1.7550 | 0.7936 |
| 0.0055 | 20.0 | 5840 | 1.6779 | 0.8141 |
| 0.003 | 21.0 | 6132 | 1.7064 | 0.8128 |
| 0.003 | 22.0 | 6424 | 1.7192 | 0.8154 |
| 0.0013 | 23.0 | 6716 | 1.8188 | 0.7974 |
| 0.0014 | 24.0 | 7008 | 1.7273 | 0.8128 |
| 0.0014 | 25.0 | 7300 | 1.7302 | 0.8128 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dd6bff728f506563faa19306a8719591
|
weijiahaha/t5-small-summarization
|
weijiahaha
|
t5
| 28 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 796 | false |
# t5-small-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6477
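A minimal usage sketch with the summarization pipeline (the input text is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="weijiahaha/t5-small-summarization")

article = (
    "The local council approved a new plan on Tuesday to expand the city's "
    "network of cycle lanes, citing rising commuter demand and safety concerns."
)
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```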
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9195 | 1.0 | 718 | 1.6477 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
0289993ea0c0311b4367e58b503c83f5
|
anas-awadalla/bart-base-few-shot-k-128-finetuned-squad-seq2seq-seed-0
|
anas-awadalla
|
bart
| 23 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 963 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-128-finetuned-squad-seq2seq-seed-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
86d6373a9103891a7d9b72b71cbf1bfb
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
|
anas-awadalla
|
bert
| 16 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ba3391316ba44d4532f8249e820c207e
|
fathyshalab/massive_general-roberta-large-v1-5-95
|
fathyshalab
|
roberta
| 14 | 22 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,464 | false |
# fathyshalab/massive_general-roberta-large-v1-5-95
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_general-roberta-large-v1-5-95")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
1acf2c8135ba6251bef14d014b8aea7a
|
redfoo/stable-diffusion-2-inpainting-endpoint-foo
|
redfoo
| null | 21 | 4 |
diffusers
| 0 | null | false | false | false |
openrail++
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-guided-to-image-inpainting', 'endpoints-template']
| false | true | true | 2,570 | false |
# Fork of [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
> Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
> For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
For more information about the model, license and limitations check the original model card at [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting).
---
This repository implements a custom `handler` task for `text-guided-to-image-inpainting` for 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/stable-diffusion-2-inpainting-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.

### Expected request payload
```json
{
  "inputs": "A prompt used for image generation",
  "image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
  "mask_image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
}
```
Below is an example of how to run a request using Python and `requests`.
## Run Request
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = ""
HF_TOKEN = ""
# helper image utils
def encode_image(image_path):
    with open(image_path, "rb") as i:
        b64 = base64.b64encode(i.read())
    return b64.decode("utf-8")

def predict(prompt, image, mask_image):
    image = encode_image(image)
    mask_image = encode_image(mask_image)
    # prepare sample payload
    payload = {"inputs": prompt, "image": image, "mask_image": mask_image}
    # headers
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
        "Accept": "image/png"  # important to get an image back
    }
    response = r.post(ENDPOINT_URL, headers=headers, json=payload)
    img = Image.open(BytesIO(response.content))
    return img

prediction = predict(
    prompt="Face of a bengal cat, high resolution, sitting on a park bench",
    image="dog.png",
    mask_image="mask_dog.png",
)
```
Expected output:

|
b3ff7a97f01a88587144d8f2d271b7ff
|
buruzaemon/test-minilm-finetuned-emotion
|
buruzaemon
|
bert
| 13 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
bsd-3-clause
|
['en']
|
['SetFit/emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['microsoft/MiniLM-L12-H384-uncased']
| false | true | true | 603 | false |
# test-minilm-finetuned-emotion fine-tuned model (uncased)
This model is a fine-tuned extension of the [Microsoft MiniLM distilled model](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased). This is the result of the learning exercise for [Simple Training with the 🤗 Transformers Trainer](https://www.youtube.com/watch?v=u--UVvH-LIQ&t=198s) and also going through Chapter 2, Text Classification in [Natural Language Processing with Transformers](https://transformersbook.com/), Revised Color Edition, May 2022.
This model is uncased: it does not make a difference between english and English.
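A minimal inference sketch with the text-classification pipeline (the emotion label names depend on what was saved with the checkpoint; the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="buruzaemon/test-minilm-finetuned-emotion",
)
print(classifier("I am over the moon about how well the training went!"))
```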
|
b188bd8502ec108a1b1af9c2e5f6bf2b
|
sd-concepts-library/anime-background-style-v2
|
sd-concepts-library
| null | 19 | 0 | null | 12 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,049 | false |
### Anime Background style (v2) on Stable Diffusion
This is the `<anime-background-style-v2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:














Here are images generated with this style:




|
cc8b7190cd8440412cc4d911ddef42f5
|
Helsinki-NLP/opus-mt-eo-ru
|
Helsinki-NLP
|
marian
| 11 | 250 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['eo', 'ru']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,999 | false |
### epo-rus
* source group: Esperanto
* target group: Russian
* OPUS readme: [epo-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.rus | 17.7 | 0.379 |
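A minimal translation sketch with 🤗 Transformers (the Esperanto example sentence is illustrative):
```python
from transformers import pipeline

# Esperanto -> Russian translation
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-eo-ru")
print(translator("La suno brilas hodiaŭ.")[0]["translation_text"])
```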
### System Info:
- hf_name: epo-rus
- source_languages: epo
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ru']
- src_constituents: {'epo'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: rus
- short_pair: eo-ru
- chrF2_score: 0.379
- bleu: 17.7
- brevity_penalty: 0.9179999999999999
- ref_len: 71288.0
- src_name: Esperanto
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ru
- prefer_old: False
- long_pair: epo-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
47e290ba1960ea72a83629964fe041c6
|
jakub014/bert-base-uncased-finetuned-convincingness-IBM
|
jakub014
|
bert
| 15 | 16 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,473 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-convincingness-IBM
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.7511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 270 | 0.5707 | 0.7337 |
| 0.4673 | 2.0 | 540 | 0.6059 | 0.7221 |
| 0.4673 | 3.0 | 810 | 0.6537 | 0.7511 |
| 0.2218 | 4.0 | 1080 | 0.8485 | 0.7467 |
| 0.2218 | 5.0 | 1350 | 0.9221 | 0.7438 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ba7a0355039717b48dd4f425ac222129
|
responsibility-framing/predict-perception-bert-cause-human
|
responsibility-framing
|
bert
| 12 | 19 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 9,145 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bert-cause-human
This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7139
- Rmse: 1.2259
- Rmse Cause::a Causata da un essere umano: 1.2259
- Mae: 1.0480
- Mae Cause::a Causata da un essere umano: 1.0480
- R2: 0.4563
- R2 Cause::a Causata da un essere umano: 0.4563
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3953
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------:|:------:|:---------------------------------------:|:------:|:--------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0874 | 1.0 | 15 | 1.2615 | 1.6296 | 1.6296 | 1.3836 | 1.3836 | 0.0393 | 0.0393 | 0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.9577 | 2.0 | 30 | 1.1988 | 1.5886 | 1.5886 | 1.3017 | 1.3017 | 0.0870 | 0.0870 | 0.4783 | 0.0 | 0.5 | 0.3944 | nan |
| 0.8414 | 3.0 | 45 | 0.9870 | 1.4414 | 1.4414 | 1.1963 | 1.1963 | 0.2483 | 0.2483 | 0.3913 | 0.0 | 0.5 | 0.3048 | nan |
| 0.7291 | 4.0 | 60 | 0.9098 | 1.3839 | 1.3839 | 1.1297 | 1.1297 | 0.3071 | 0.3071 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.5949 | 5.0 | 75 | 0.9207 | 1.3921 | 1.3921 | 1.2079 | 1.2079 | 0.2988 | 0.2988 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.4938 | 6.0 | 90 | 0.8591 | 1.3448 | 1.3448 | 1.1842 | 1.1842 | 0.3458 | 0.3458 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.3611 | 7.0 | 105 | 0.8176 | 1.3119 | 1.3119 | 1.1454 | 1.1454 | 0.3774 | 0.3774 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.2663 | 8.0 | 120 | 0.6879 | 1.2034 | 1.2034 | 1.0300 | 1.0300 | 0.4761 | 0.4761 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.1833 | 9.0 | 135 | 0.7704 | 1.2735 | 1.2735 | 1.1031 | 1.1031 | 0.4133 | 0.4133 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.1704 | 10.0 | 150 | 0.7097 | 1.2222 | 1.2222 | 1.0382 | 1.0382 | 0.4596 | 0.4596 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1219 | 11.0 | 165 | 0.6872 | 1.2027 | 1.2027 | 1.0198 | 1.0198 | 0.4767 | 0.4767 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.1011 | 12.0 | 180 | 0.7201 | 1.2312 | 1.2312 | 1.0466 | 1.0466 | 0.4516 | 0.4516 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0849 | 13.0 | 195 | 0.7267 | 1.2368 | 1.2368 | 1.0454 | 1.0454 | 0.4466 | 0.4466 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0818 | 14.0 | 210 | 0.7361 | 1.2448 | 1.2448 | 1.0565 | 1.0565 | 0.4394 | 0.4394 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0634 | 15.0 | 225 | 0.7158 | 1.2275 | 1.2275 | 1.0384 | 1.0384 | 0.4549 | 0.4549 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.065 | 16.0 | 240 | 0.7394 | 1.2475 | 1.2475 | 1.0659 | 1.0659 | 0.4369 | 0.4369 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0541 | 17.0 | 255 | 0.7642 | 1.2683 | 1.2683 | 1.0496 | 1.0496 | 0.4181 | 0.4181 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0577 | 18.0 | 270 | 0.7137 | 1.2257 | 1.2257 | 1.0303 | 1.0303 | 0.4565 | 0.4565 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0474 | 19.0 | 285 | 0.7393 | 1.2475 | 1.2475 | 1.0447 | 1.0447 | 0.4370 | 0.4370 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0494 | 20.0 | 300 | 0.7157 | 1.2274 | 1.2274 | 1.0453 | 1.0453 | 0.4550 | 0.4550 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0434 | 21.0 | 315 | 0.7248 | 1.2352 | 1.2352 | 1.0462 | 1.0462 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.049 | 22.0 | 330 | 0.7384 | 1.2467 | 1.2467 | 1.0613 | 1.0613 | 0.4377 | 0.4377 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0405 | 23.0 | 345 | 0.7420 | 1.2498 | 1.2498 | 1.0653 | 1.0653 | 0.4349 | 0.4349 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0398 | 24.0 | 360 | 0.7355 | 1.2442 | 1.2442 | 1.0620 | 1.0620 | 0.4399 | 0.4399 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0398 | 25.0 | 375 | 0.7570 | 1.2623 | 1.2623 | 1.0698 | 1.0698 | 0.4235 | 0.4235 | 0.3913 | 0.0 | 0.5 | 0.3306 | nan |
| 0.0345 | 26.0 | 390 | 0.7359 | 1.2446 | 1.2446 | 1.0610 | 1.0610 | 0.4396 | 0.4396 | 0.5652 | 0.0 | 0.5 | 0.3152 | nan |
| 0.0345 | 27.0 | 405 | 0.7417 | 1.2495 | 1.2495 | 1.0660 | 1.0660 | 0.4352 | 0.4352 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
| 0.0386 | 28.0 | 420 | 0.7215 | 1.2323 | 1.2323 | 1.0514 | 1.0514 | 0.4506 | 0.4506 | 0.4783 | 0.0 | 0.5 | 0.3084 | nan |
| 0.0372 | 29.0 | 435 | 0.7140 | 1.2260 | 1.2260 | 1.0477 | 1.0477 | 0.4562 | 0.4562 | 0.5652 | 0.0 | 0.5 | 0.4091 | nan |
| 0.0407 | 30.0 | 450 | 0.7139 | 1.2259 | 1.2259 | 1.0480 | 1.0480 | 0.4563 | 0.4563 | 0.4783 | 0.0 | 0.5 | 0.3953 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
aecf8c99c0779d8a73cb46a501d08085
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mrpc_256
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_mrpc_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5199
- Accuracy: 0.3284
- F1: 0.0616
- Combined Score: 0.1950
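MRPC is a sentence-pair (paraphrase) task, so inference encodes two sentences at once; a minimal sketch, assuming the checkpoint loads with `AutoModelForSequenceClassification` (the sentence pair is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mrpc_256"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Encode the two sentences as a single pair
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Record profits were reported by the company this quarter.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the two MRPC classes
```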
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5375 | 1.0 | 15 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5305 | 2.0 | 30 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5294 | 3.0 | 45 | 0.5293 | 0.3162 | 0.0 | 0.1581 |
| 0.5283 | 4.0 | 60 | 0.5284 | 0.3162 | 0.0 | 0.1581 |
| 0.5258 | 5.0 | 75 | 0.5260 | 0.3162 | 0.0 | 0.1581 |
| 0.519 | 6.0 | 90 | 0.5199 | 0.3284 | 0.0616 | 0.1950 |
| 0.5036 | 7.0 | 105 | 0.5200 | 0.3848 | 0.2462 | 0.3155 |
| 0.4916 | 8.0 | 120 | 0.5226 | 0.4167 | 0.3239 | 0.3703 |
| 0.4725 | 9.0 | 135 | 0.5298 | 0.4289 | 0.3581 | 0.3935 |
| 0.4537 | 10.0 | 150 | 0.5333 | 0.6152 | 0.6736 | 0.6444 |
| 0.4382 | 11.0 | 165 | 0.5450 | 0.6201 | 0.6906 | 0.6554 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
422cbb5fae0fbf277c11f2e38ceb9ef8
|
jonatasgrosman/exp_w2v2t_fa_vp-es_s533
|
jonatasgrosman
|
wav2vec2
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fa']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fa']
| false | true | true | 469 | false |
# exp_w2v2t_fa_vp-es_s533
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
adb8ede84b89eadc3a25d457dbe4e05f
|
distilbert-base-uncased-distilled-squad
| null |
distilbert
| 14 | 27,917 |
transformers
| 20 |
question-answering
| true | true | false |
apache-2.0
|
['en']
|
['squad']
| null | 3 | 0 | 3 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 8,586 | false |
# DistilBERT base uncased distilled SQuAD
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
This model is a fine-tune checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).
- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased)
- **Resources for more information:**
- See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
- See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')
>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """
>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'SQuAD dataset', score: 0.4704, start: 147, end: 160
```
Here is how to use this model in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering
import torch
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-distilled-squad')
model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
answer_start_index = torch.argmax(outputs.start_logits)
answer_end_index = torch.argmax(outputs.end_logits)
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
And in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-uncased-distilled-squad')
>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """
>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'Bob', score: 0.4183, start: 32, end: 35
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model card describes its training data as:
> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
#### Training Procedure
##### Preprocessing
See the [distilbert-base-uncased model card](https://huggingface.co/distilbert-base-uncased) for further details.
##### Pretraining
See the [distilbert-base-uncased model card](https://huggingface.co/distilbert-base-uncased) for further details.
## Evaluation
As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
> This model reaches a F1 score of 86.9 on the [SQuAD v1.1] dev set (for comparison, Bert bert-base-uncased version reaches a F1 score of 88.5).
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
|
4fbbfa7dc732fdd9378b108516909a2a
|
kompactss/JeBERT_je_ko
|
kompactss
|
encoder-decoder
| 7 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
afl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 728 | false |
# 🍊 Jeju Dialect Translation Model 🍊
- Jeju dialect -> Standard Korean
- Made by Team 3 of the Goorm NLP course, 3rd cohort (구름 자연어처리 과정 3기 3조)
- GitHub link: https://github.com/Goormnlpteam3/JeBERT
## 1. Seq2Seq Transformer Model
- encoder : BertConfig
- decoder : BertConfig
- Tokenizer : WordPiece Tokenizer
## 2. Dataset
- Jit Dataset
- AI HUB (including texts with the archaic arae-a vowel, 아래아)
## 3. Hyper Parameters
- Epochs: 10 (best at epoch 8)
- Random Seed : 42
- Learning Rate : 5e-5
- Warm up Ratio : 0.1
- Batch Size : 32
## 4. BLEU Score
- Jit + AI HUB (with arae-a characters) dataset: 79.0
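A minimal inference sketch, assuming the checkpoint loads as a 🤗 Transformers `EncoderDecoderModel` with its saved tokenizer and generation settings (the input placeholder should be replaced with a Jeju-dialect sentence):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

checkpoint = "kompactss/JeBERT_je_ko"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = EncoderDecoderModel.from_pretrained(checkpoint)

# Translate a Jeju-dialect sentence into standard Korean
jeju_sentence = "..."  # replace with a Jeju-dialect sentence
inputs = tokenizer(jeju_sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```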
---
### CREDIT
- 주형준 : wngudwns2798@gmail.com
- 강가람 : 1st9aram@gmail.com
- 고광연 : rhfprl11@gmail.com
- 김수연 : s01090445778@gmail.com
- 이원경 : hjtwin2@gmail.com
- 조성은 : eun102476@gmail.com
|
110d02c079ac9951ca1680e2e6d08038
|
jcmc/aw-gpt
|
jcmc
|
gpt2
| 17 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,087 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aw-gpt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2452
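A minimal text-generation sketch with the 🤗 Transformers pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jcmc/aw-gpt")
print(generator("Hello there,", max_new_tokens=40)[0]["generated_text"])
```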
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.11.6
|
a136a4e6d36421225697d8b3e55790e1
|
jannatul17/squad-bn-qgen-mt5-all-metric
|
jannatul17
|
t5
| 16 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad_bn']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,507 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-bn-qgen-mt5-all-metric
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the squad_bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7273
- Rouge1 Precision: 35.8589
- Rouge1 Recall: 29.7041
- Rouge1 Fmeasure: 31.6373
- Rouge2 Precision: 15.4203
- Rouge2 Recall: 12.5155
- Rouge2 Fmeasure: 13.3978
- Rougel Precision: 34.4684
- Rougel Recall: 28.5887
- Rougel Fmeasure: 30.4627
- Rougelsum Precision: 34.4252
- Rougelsum Recall: 28.5362
- Rougelsum Fmeasure: 30.4053
- Sacrebleu: 6.4143
- Meteor: 0.1416
- Gen Len: 16.7199
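A minimal usage sketch with the text2text-generation pipeline; the exact input format (how the Bengali context and answer are combined) must match the squad_bn preprocessing used during fine-tuning, so the placeholder below is only illustrative:
```python
from transformers import pipeline

qgen = pipeline(
    "text2text-generation",
    model="jannatul17/squad-bn-qgen-mt5-all-metric",
)

# Replace with a context/answer string formatted as in the squad_bn preprocessing
sample_input = "..."
print(qgen(sample_input, max_length=64))
```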
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Sacrebleu | Meteor | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:---------:|:------:|:-------:|
| 0.8449 | 1.0 | 16396 | 0.7340 | 31.6476 | 26.8901 | 28.2871 | 13.621 | 11.3545 | 11.958 | 30.3276 | 25.7754 | 27.1048 | 30.3426 | 25.7489 | 27.0991 | 5.9655 | 0.1336 | 16.8685 |
| 0.7607 | 2.0 | 32792 | 0.7182 | 33.7173 | 28.6115 | 30.1049 | 14.8227 | 12.2059 | 12.9453 | 32.149 | 27.2036 | 28.6617 | 32.2479 | 27.2261 | 28.7272 | 6.6093 | 0.138 | 16.8522 |
| 0.7422 | 3.0 | 49188 | 0.7083 | 34.6128 | 29.0223 | 30.7248 | 14.9888 | 12.3092 | 13.1021 | 33.2507 | 27.8154 | 29.4599 | 33.2848 | 27.812 | 29.5064 | 6.2407 | 0.1416 | 16.5806 |
| 0.705 | 4.0 | 65584 | 0.7035 | 34.156 | 29.0012 | 30.546 | 14.72 | 12.0251 | 12.8161 | 32.7527 | 27.6511 | 29.1955 | 32.7692 | 27.6627 | 29.231 | 6.1784 | 0.1393 | 16.7793 |
| 0.6859 | 5.0 | 81980 | 0.7038 | 35.1405 | 29.6033 | 31.2614 | 15.5108 | 12.6414 | 13.5059 | 33.8335 | 28.4264 | 30.0745 | 33.8782 | 28.4349 | 30.0901 | 6.5896 | 0.144 | 16.6651 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
7e536941c9934f7a3a666b77fa1d9d03
|
jonatasgrosman/exp_w2v2t_th_vp-nl_s253
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['th']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'th']
| false | true | true | 469 | false |
# exp_w2v2t_th_vp-nl_s253
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
60edf672ab5b484ee92b9687e023cd20
|
Loc/lucky-model
|
Loc
|
vit
| 8 | 4 |
transformers
| 0 |
image-classification
| true | true | true |
apache-2.0
| null |
['imagenet-1k', 'imagenet-21k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-classification']
| false | true | true | 5,273 | false |
# Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
## Training data
The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
### Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
```
|
cd26cde0781997e95c1a82884123e308
|
okep/distilbert-base-uncased-finetuned-emotion
|
okep
|
distilbert
| 22 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2269
- Accuracy: 0.9245
- F1: 0.9245
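A minimal inference sketch (not from the original card) using the `pipeline` API; the example sentence is made up, and the exact label names depend on the checkpoint's `id2label` mapping:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="okep/distilbert-base-uncased-finetuned-emotion",
)

# Returns a list of {"label": ..., "score": ...} dicts;
# label names follow the checkpoint's id2label config
print(classifier("I can't wait to see you again, this made my day!"))
```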
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.853 | 1.0 | 250 | 0.3507 | 0.8925 | 0.8883 |
| 0.2667 | 2.0 | 500 | 0.2269 | 0.9245 | 0.9245 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
9a3fb3f3a14e7293e02ac9de8a5d3381
|
SiddharthaM/hasoc19-bert-base-multilingual-cased-HatredStatement-new
|
SiddharthaM
|
bert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,860 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hasoc19-bert-base-multilingual-cased-HatredStatement-new
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6565
- Accuracy: 0.7319
- Precision: 0.7320
- Recall: 0.7319
- F1: 0.7307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.5540 | 0.7110 | 0.7147 | 0.7110 | 0.7067 |
| 0.5551 | 2.0 | 592 | 0.5345 | 0.7224 | 0.7673 | 0.7224 | 0.7038 |
| 0.5551 | 3.0 | 888 | 0.5752 | 0.7272 | 0.7430 | 0.7272 | 0.7183 |
| 0.4252 | 4.0 | 1184 | 0.5697 | 0.7376 | 0.7384 | 0.7376 | 0.7359 |
| 0.4252 | 5.0 | 1480 | 0.6335 | 0.7319 | 0.7388 | 0.7319 | 0.7269 |
| 0.3401 | 6.0 | 1776 | 0.6565 | 0.7319 | 0.7320 | 0.7319 | 0.7307 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
53dcc8376de924bf3e23234845c4a8dc
|
infinitejoy/wav2vec2-large-xls-r-300m-basaa-cv8
|
infinitejoy
|
wav2vec2
| 19 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['bas']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'bas', 'generated_from_trainer', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event']
| true | true | true | 1,702 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-basaa-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4648
- Wer: 0.5472
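A minimal inference sketch; `audio.wav` is a placeholder for a local 16kHz Basaa recording:
```python
from transformers import pipeline

# "audio.wav" is a placeholder for a 16kHz Basaa recording
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-basaa-cv8",
)
print(asr("audio.wav")["text"])
```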
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9421 | 12.82 | 500 | 2.8894 | 1.0 |
| 1.1872 | 25.64 | 1000 | 0.6688 | 0.7460 |
| 0.8894 | 38.46 | 1500 | 0.4868 | 0.6516 |
| 0.769 | 51.28 | 2000 | 0.4960 | 0.6507 |
| 0.6936 | 64.1 | 2500 | 0.4781 | 0.5384 |
| 0.624 | 76.92 | 3000 | 0.4643 | 0.5430 |
| 0.5966 | 89.74 | 3500 | 0.4530 | 0.5591 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
9a6dad3b40cafc923e44ccf964047494
|
azizbarank/mbert-finnic-ner
|
azizbarank
|
bert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,578 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finnic-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Finnish and Estonian parts of the "WikiANN" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
- Precision: 0.9090
- Recall: 0.9156
- F1: 0.9123
- Accuracy: 0.9672
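A minimal inference sketch; the input is an illustrative Finnish sentence, not taken from the training data:
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="azizbarank/mbert-finnic-ner",
    aggregation_strategy="simple",
)
print(ner("Sauli Niinistö vieraili Tallinnassa Virossa."))
```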
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1636 | 1.0 | 2188 | 0.1385 | 0.8906 | 0.9000 | 0.8953 | 0.9601 |
| 0.0991 | 2.0 | 4376 | 0.1346 | 0.9099 | 0.9095 | 0.9097 | 0.9660 |
| 0.0596 | 3.0 | 6564 | 0.1427 | 0.9090 | 0.9156 | 0.9123 | 0.9672 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
05d62f41f4695fae6493d5a796f7f5fa
|
DOOGLAK/Article_50v0_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
|
bert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['article50v0_wikigold_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,557 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_50v0_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v0_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5912
- Precision: 0.0975
- Recall: 0.0183
- F1: 0.0308
- Accuracy: 0.7915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.7204 | 0.0 | 0.0 | 0.0 | 0.7803 |
| No log | 2.0 | 28 | 0.6230 | 0.0743 | 0.0081 | 0.0145 | 0.7869 |
| No log | 3.0 | 42 | 0.5912 | 0.0975 | 0.0183 | 0.0308 | 0.7915 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
31ebe7e55f7b7fae71bdb0d6c1dff37f
|
cakiki/distilbert-base-uncased-finetuned-tweet-sentiment
|
cakiki
|
distilbert
| 9 | 14 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,746 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cakiki/distilbert-base-uncased-finetuned-tweet-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1025
- Train Sparse Categorical Accuracy: 0.9511
- Validation Loss: 0.1455
- Validation Sparse Categorical Accuracy: 0.9365
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.5409 | 0.8158 | 0.2115 | 0.9265 | 0 |
| 0.1442 | 0.9373 | 0.1411 | 0.9380 | 1 |
| 0.1025 | 0.9511 | 0.1455 | 0.9365 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.9.0-rc0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
9efe633a9994db7179576a2eeeb72825
|
muhtasham/bert-small-finetuned-ner-to-multilabel-wnut-17-new
|
muhtasham
|
bert
| 12 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,463 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-ner-to-multilabel-wnut-17-new
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2006 | 1.18 | 500 | 0.2043 |
| 0.1247 | 2.35 | 1000 | 0.1960 |
| 0.0935 | 3.53 | 1500 | 0.1893 |
| 0.0742 | 4.71 | 2000 | 0.2003 |
| 0.0552 | 5.88 | 2500 | 0.2106 |
| 0.0405 | 7.06 | 3000 | 0.2039 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
610cf4033f2664a0eef24aa6aeec233f
|
navteca/nli-deberta-v3-large
|
navteca
|
deberta-v2
| 9 | 33 |
transformers
| 2 |
zero-shot-classification
| true | false | false |
apache-2.0
|
['en']
|
['multi_nli', 'snli']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['microsoft/deberta-v3-large']
| false | true | true | 2,617 | false |
# Cross-Encoder for Natural Language Inference
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. This model is based on [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large)
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 92.20
- Accuracy on MNLI mismatched set: 90.49
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-large')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
#Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-large')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-large')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-large')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
|
464d59d38dfbbc815381d52c62627433
|
yanaiela/roberta-base-epoch_45
|
yanaiela
|
roberta
| 9 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_45']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 45
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained in part of a work that studies how simple statistics from data,
such as co-occurrences affects model predictions, which are described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_45.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) objective.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
ed4d67081d6aeb7a9b8cd78f79df86ba
|
jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-2_sixties-8_s772
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 475 | false |
# exp_w2v2r_es_xls-r_age_teens-2_sixties-8_s772
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
1d1820382b3dff89ecbf4bd0a08a6dfd
|
jonatasgrosman/exp_w2v2t_pl_r-wav2vec2_s996
|
jonatasgrosman
|
wav2vec2
| 10 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pl']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'pl']
| false | true | true | 462 | false |
# exp_w2v2t_pl_r-wav2vec2_s996
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
378607e3d4299d4f7468a4d4b0e2b99f
|
harshit345/xlsr-53-wav2vec-hi
|
harshit345
|
wav2vec2
| 8 | 10 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['Interspeech 2021']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,432 | false |
# Wav2Vec2-Large-XLSR-53-hindi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hindi using the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the hindi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluation: run inference on the preprocessed dataset
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 20.22 %
## Training
The script used for training can be found here: [Hindi ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1nY5WMj1oNlexD_qDeNYL7ZM427A021CV?usp=sharing)
|
773b66c192a4abe871eb4cee2f4a6408
|
FAICAM/wav2vec2-base-timit-demo-google-colab
|
FAICAM
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,998 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5725
- Wer: 0.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.508 | 1.0 | 500 | 1.9315 | 0.9962 |
| 0.8832 | 2.01 | 1000 | 0.5552 | 0.5191 |
| 0.4381 | 3.01 | 1500 | 0.4451 | 0.4574 |
| 0.2983 | 4.02 | 2000 | 0.4096 | 0.4265 |
| 0.2232 | 5.02 | 2500 | 0.4280 | 0.4083 |
| 0.1811 | 6.02 | 3000 | 0.4307 | 0.3942 |
| 0.1548 | 7.03 | 3500 | 0.4453 | 0.3889 |
| 0.1367 | 8.03 | 4000 | 0.5043 | 0.4138 |
| 0.1238 | 9.04 | 4500 | 0.4530 | 0.3807 |
| 0.1072 | 10.04 | 5000 | 0.4435 | 0.3660 |
| 0.0978 | 11.04 | 5500 | 0.4739 | 0.3676 |
| 0.0887 | 12.05 | 6000 | 0.5052 | 0.3761 |
| 0.0813 | 13.05 | 6500 | 0.5098 | 0.3619 |
| 0.0741 | 14.06 | 7000 | 0.4666 | 0.3602 |
| 0.0654 | 15.06 | 7500 | 0.5642 | 0.3657 |
| 0.0589 | 16.06 | 8000 | 0.5489 | 0.3638 |
| 0.0559 | 17.07 | 8500 | 0.5260 | 0.3598 |
| 0.0562 | 18.07 | 9000 | 0.5250 | 0.3640 |
| 0.0448 | 19.08 | 9500 | 0.5215 | 0.3569 |
| 0.0436 | 20.08 | 10000 | 0.5117 | 0.3560 |
| 0.0412 | 21.08 | 10500 | 0.4910 | 0.3570 |
| 0.0336 | 22.09 | 11000 | 0.5221 | 0.3524 |
| 0.031 | 23.09 | 11500 | 0.5278 | 0.3480 |
| 0.0339 | 24.1 | 12000 | 0.5353 | 0.3486 |
| 0.0278 | 25.1 | 12500 | 0.5342 | 0.3462 |
| 0.0251 | 26.1 | 13000 | 0.5399 | 0.3439 |
| 0.0242 | 27.11 | 13500 | 0.5626 | 0.3431 |
| 0.0214 | 28.11 | 14000 | 0.5749 | 0.3408 |
| 0.0216 | 29.12 | 14500 | 0.5725 | 0.3413 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
28fc28ad60466f70cce042995edd5946
|
Manseo/Colorful-v4.5
|
Manseo
| null | 24 | 25 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable diffusion', 'text-to-image', 'diffusers']
| false | true | true | 3,648 | false |
# **Colorful-v4.5**
**Colorful-v4.5** is a model merge between [Anything-v4.5](https://huggingface.co/andite/anything-v4.0), [AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs) and [ProtogenInfinity](https://huggingface.co/darkstorm2150/Protogen_Infinity_Official_Release)
Colorful-v4.5 is named the way it is because it is similar to Anything-v4.5 while improving on the bland color palette that model comes with (at least for me), producing much livelier images. It also improves other things like environments, fingers, facial expressions and, to some extent, clothing (it also fixes the purple blobs 🤫)
*Technically I could name it Anything-v5.0, but that would be rather cheesy.*
*It is highly recommended to run this model locally on your computer, because running it from the web UI API will produce lower-quality images than intended.*
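A minimal local-inference sketch with `diffusers` (not from the original card), assuming the repo ships diffusers-format weights and a CUDA GPU is available; the prompt is adapted from the examples below:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes diffusers-format weights in the repo and a CUDA GPU
pipe = StableDiffusionPipeline.from_pretrained(
    "Manseo/Colorful-v4.5", torch_dtype=torch.float16
).to("cuda")

prompt = "masterpiece, best quality, 1girl, green eyes, black hair, smiling, in park"
image = pipe(prompt, num_inference_steps=20, guidance_scale=8).images[0]
image.save("colorful_sample.png")
```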
# Examples:
# **Example #1:** *10 steps*

```
Prompt: masterpiece, best quality, 1girl, (green eyes), black hair, (black shorts), white shirt, glossy lips, small nose, standing up, in park, trees, full body, smiling
Other Details: Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 774768794, Size: 512x512, Model hash: b5de490700, Model: Colorful-v4.5, Denoising strength: 0.6, Hires upscale: 2, Hires steps: 10, Hires upscaler: SwinIR_4x
Negative Prompt: The negative prompt is very long and specific, so it is listed in the model's repo. (The negative prompt comes from another model called Hentai Diffusion, so it contains NSFW terms. A curated SFW version of the negative prompt is also in the repo for those who want it.)
```
# **Example #2:** *20 steps*

```
Prompt: masterpiece, best quality, lady, in red and black yukata, pink hair, blue eyes, in dojo, smiling, sitting, hands on lap
Other Details: Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 774768794, Size: 512x512, Model hash: b5de490700, Model: Colorful-v4.5, Denoising strength: 0.6, Hires upscale: 2, Hires steps: 20, Hires upscaler: SwinIR_4x
Negative Prompt: Same thing as in Example #1
```
# **Example #3:** *30 steps*

# Anything-v4.5: (for comparison)

```
Prompt: masterpiece, best quality, girl, black hair, blue eyes, black t-shirt, black pants, smiling, standing up, solo, facing viewer, near blossomed tree
Other Details: Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 774768794, Size: 512x512, Model hash: b5de490700, Model: Colorful-v4.5, Denoising strength: 0.6, Hires upscale: 2, Hires steps: 30, Hires upscaler: SwinIR_4x
Negative Prompt: Same thing as in Example #1
```
*Note: I didn't use any VAE for the examples, but I did try the Anything-v4.0 VAE and it barely made a difference.*
|
893f5ade5dbb46a67f00d770de4c82e6
|
YoussefSaad/dresses
|
YoussefSaad
|
vit
| 9 | 8 |
transformers
| 2 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,945 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dresses
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4588
- Accuracy: 0.9014
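A minimal inference sketch; `dress.jpg` is a placeholder path, and the label set depends on the (undocumented) training image folders:
```python
from transformers import pipeline

# "dress.jpg" is a placeholder path to a local image
classifier = pipeline("image-classification", model="YoussefSaad/dresses")
print(classifier("dress.jpg"))
```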
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2458 | 1.23 | 100 | 0.4519 | 0.8633 |
| 0.0937 | 2.47 | 200 | 0.4285 | 0.8754 |
| 0.0802 | 3.7 | 300 | 0.4683 | 0.8754 |
| 0.041 | 4.94 | 400 | 0.4088 | 0.9031 |
| 0.0277 | 6.17 | 500 | 0.3979 | 0.8945 |
| 0.0459 | 7.41 | 600 | 0.4253 | 0.9014 |
| 0.024 | 8.64 | 700 | 0.4680 | 0.8893 |
| 0.0267 | 9.88 | 800 | 0.4575 | 0.8945 |
| 0.019 | 11.11 | 900 | 0.4470 | 0.8893 |
| 0.0235 | 12.35 | 1000 | 0.4380 | 0.9066 |
| 0.0129 | 13.58 | 1100 | 0.4557 | 0.9048 |
| 0.0211 | 14.81 | 1200 | 0.4588 | 0.9014 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
6e5fe63a9c99e72a5a93f9ac23c47865
|
jonatasgrosman/exp_w2v2t_fa_r-wav2vec2_s129
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fa']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fa']
| false | true | true | 462 | false |
# exp_w2v2t_fa_r-wav2vec2_s129
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
8f4d505d44d4e705291f1d40f6306ca4
|
HuyenNguyen/FPT-P3-6000
|
HuyenNguyen
|
whisper
| 15 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,331 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FPT-P3-6000
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4654
- Wer: 24.2203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3748 | 1.14 | 500 | 0.5442 | 26.9412 |
| 0.1952 | 2.28 | 1000 | 0.4654 | 24.2203 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
0cdea0221aa2b07f91a14a4c2cc84ff5
|
HDKCL/izamizam
|
HDKCL
| null | 64 | 55 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 608 | false |
### izamizam Dreambooth model trained by HDKCL with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
30e8f00df716dc63b86b1072bebaf5ba
|
sd-dreambooth-library/bolsonaro
|
sd-dreambooth-library
| null | 3 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 419 | false |
### Bolsonaro on Stable Diffusion via Dreambooth
#### model by rodrigocoelho
- Language: pt-BR, en-US
- Thumbnail: https://s3.amazonaws.com/moonup/production/uploads/1665418455423-632486a8767557375b7078a6.png
- Tags: Bolsonaro
- License: Apache 2.0
- Datasets: Bolsonaro, stable diffusion

|
efa69472ab1d83defc5d339b66833b23
|
w11wo/javanese-distilbert-small-imdb
|
w11wo
|
distilbert
| 8 | 9 |
transformers
| 0 |
fill-mask
| true | true | false |
mit
|
['jv']
|
['w11wo/imdb-javanese']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['javanese-distilbert-small-imdb']
| false | true | true | 3,093 | false |
## Javanese DistilBERT Small IMDB
Javanese DistilBERT Small IMDB is a masked language model based on the [DistilBERT model](https://arxiv.org/abs/1910.01108). It was trained on Javanese IMDB movie reviews.
The model was originally the pretrained [Javanese DistilBERT Small model](https://huggingface.co/w11wo/javanese-distilbert-small) and is later fine-tuned on the Javanese IMDB movie review dataset. It achieved a perplexity of 21.01 on the validation dataset. Many of the techniques used are based on a Hugging Face tutorial [notebook](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) written by [Sylvain Gugger](https://github.com/sgugger).
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with TensorFlow nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
|----------------------------------|----------|----------------------|---------------------------------|
| `javanese-distilbert-small-imdb` | 66M | DistilBERT Small | Javanese IMDB (47.5 MB of text) |
## Evaluation Results
The model was trained for 5 epochs and the following is the final result once the training ended.
| train loss | valid loss | perplexity | total time |
|------------|------------|------------|-------------|
| 3.126 | 3.039 | 21.01 | 5:6:4 |
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "w11wo/javanese-distilbert-small-imdb"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Aku mangan sate ing [MASK] bareng konco-konco")
```
### Feature Extraction in PyTorch
```python
from transformers import DistilBertModel, DistilBertTokenizerFast
pretrained_name = "w11wo/javanese-distilbert-small-imdb"
model = DistilBertModel.from_pretrained(pretrained_name)
tokenizer = DistilBertTokenizerFast.from_pretrained(pretrained_name)
prompt = "Indonesia minangka negara gedhe."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Disclaimer
Do consider the biases present in the IMDB reviews, as they may be carried over into the results of this model.
## Author
Javanese DistilBERT Small was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If you use any of our models in your research, please cite:
```bib
@inproceedings{wongso2021causal,
title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures},
author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin},
booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)},
pages={1--7},
year={2021},
organization={IEEE}
}
```
|
d9ae3f8e873b0659c8a6b63f9a88194c
|
tmsreekanth98/keyphrase-extractions_bart-large
|
tmsreekanth98
|
bart
| 16 | 5 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['kp20k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,797 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# keyphrase-extractions_bart-large
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the kp20k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7257
- Rouge1: 0.4713
- Rouge2: 0.2385
- Rougel: 0.384
- Rougelsum: 0.3841
- Gen Len: 18.3164
- Phrase match: 0.1917
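A minimal inference sketch; the abstract below is made up, and the exact output format (e.g. how multiple keyphrases are separated) depends on how the targets were serialized during fine-tuning, which is not documented here:
```python
from transformers import pipeline

extractor = pipeline(
    "text2text-generation",
    model="tmsreekanth98/keyphrase-extractions_bart-large",
)

# Made-up abstract; the model was fine-tuned on KP20k-style abstracts
abstract = (
    "We present a transformer-based approach to keyphrase extraction "
    "from scientific abstracts and evaluate it on the KP20k benchmark."
)
print(extractor(abstract, max_length=40))
```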
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Phrase match |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------------:|
| 2.5104 | 1.0 | 730 | 1.8021 | 0.464 | 0.2336 | 0.3765 | 0.3766 | 18.9074 | 0.1784 |
| 1.8436 | 2.0 | 1460 | 1.7473 | 0.4709 | 0.2381 | 0.3834 | 0.3836 | 17.8127 | 0.1891 |
| 1.6864 | 3.0 | 2190 | 1.7257 | 0.4713 | 0.2385 | 0.384 | 0.3841 | 18.3164 | 0.1917 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ff5025b7944463c8b61f678039bd4475
|
recklessrecursion/2008_Sichuan_earthquake-clustered
|
recklessrecursion
|
distilbert
| 8 | 22 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,886 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# recklessrecursion/2008_Sichuan_earthquake-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5049
- Train End Logits Accuracy: 0.8507
- Train Start Logits Accuracy: 0.7778
- Validation Loss: 0.3830
- Validation End Logits Accuracy: 0.9474
- Validation Start Logits Accuracy: 0.8947
- Epoch: 0
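A minimal inference sketch; since this checkpoint was trained with Keras, the pipeline is pinned to the TensorFlow backend, and the question/context pair is only an illustrative example:
```python
from transformers import pipeline

# framework="tf" pins the pipeline to the TensorFlow weights of this checkpoint
qa = pipeline(
    "question-answering",
    model="recklessrecursion/2008_Sichuan_earthquake-clustered",
    framework="tf",
)

context = (
    "The 2008 Sichuan earthquake struck Sichuan province of China "
    "on May 12, 2008."
)
print(qa(question="When did the earthquake occur?", context=context))
```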
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5049 | 0.8507 | 0.7778 | 0.3830 | 0.9474 | 0.8947 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
0d4025b92abe72130361bc4e8daa82dd
|
andreaparker/flan-t5-base-samsum
|
andreaparker
|
t5
| 7 | 14 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['samsum']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,109 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3772
- Rouge1: 47.4798
- Rouge2: 23.9756
- Rougel: 40.0392
- Rougelsum: 43.6545
- Gen Len: 17.3162
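A minimal inference sketch; the dialogue is a made-up SAMSum-style example:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="andreaparker/flan-t5-base-samsum")

# Made-up SAMSum-style dialogue
dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes, 7 pm at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```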
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403 | 1.0 | 1842 | 1.3829 | 46.5346 | 23.1326 | 39.4401 | 42.8272 | 17.0977 |
| 1.3534 | 2.0 | 3684 | 1.3732 | 47.0911 | 23.5074 | 39.5951 | 43.2279 | 17.4554 |
| 1.2795 | 3.0 | 5526 | 1.3709 | 46.8895 | 23.3243 | 39.5909 | 43.1286 | 17.2027 |
| 1.2313 | 4.0 | 7368 | 1.3736 | 47.4946 | 23.7802 | 39.9999 | 43.5903 | 17.2198 |
| 1.1934 | 5.0 | 9210 | 1.3772 | 47.4798 | 23.9756 | 40.0392 | 43.6545 | 17.3162 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
### Papers With Code Results
As of 2 February 2023 the Papers with Code page for this task has the following leaderboard.
Our score (Rouge 1 score of 47.4798) puts this model's performance between fourth and fifth place on the leaderboard:

|
fbf4445fa1cb43344fdbf9c593329436
|
RuiqianLi/wav2vec2-large-xls-r-300m-chinese-taiwan-colab
|
RuiqianLi
|
wav2vec2
| 15 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,330 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chinese-taiwan-colab
!!! This model has been trained with a very high learning rate and only a few epochs; please do not use it for speech-to-text.
!!! It is just a test; I will retrain this model with more training time when I have time.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
f14428421b20112375db198617d243c9
|
Rattana/wav2vec2-thai-colab
|
Rattana
|
wav2vec2
| 12 | 10 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 996 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
b8afee4e59344e2746059b8902bdfa27
|
hfl/chinese-bert-wwm-ext
|
hfl
|
bert
| 11 | 533,397 |
transformers
| 50 |
fill-mask
| true | true | true |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,947 | false |
## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
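A minimal masked-word-prediction sketch with this checkpoint (the sentence is only an illustrative example):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="hfl/chinese-bert-wwm-ext")

# Predict the masked character; the sentence is only an illustrative example
print(fill_mask("哈尔滨是[MASK]龙江的省会。"))
```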
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
```
|
93902e433ddb944e6d50651668b355e6
|
moghis/xlm-roberta-base-finetuned-panx-de-data
|
moghis
|
xlm-roberta
| 12 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,340 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-data
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1 Score: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
97e4ba5c64dfddcab33b49594d7feca6
|
roberta-large-openai-detector
| null |
roberta
| 8 | 27,024 |
transformers
| 5 |
text-classification
| true | false | true |
mit
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 1 | 0 | 1 | 0 | 1 | 1 | 0 |
['exbert']
| false | true | true | 9,097 | false |
# RoBERTa Large OpenAI Detector
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as OpenAI released the weights of the [largest GPT-2 model](https://huggingface.co/gpt2-xl), the 1.5B parameter version.
- **Developed by:** OpenAI, see [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector) and [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for full author list
- **Model Type:** Fine-tuned transformer-based language model
- **Language(s):** English
- **License:** MIT
- **Related Models:** [RoBERTa large](https://huggingface.co/roberta-large), [GPT-XL (1.5B parameter version)](https://huggingface.co/gpt2-xl), [GPT-Large (the 774M parameter version)](https://huggingface.co/gpt2-large), [GPT-Medium (the 355M parameter version)](https://huggingface.co/gpt2-medium) and [GPT-2 (the 124M parameter version)](https://huggingface.co/gpt2)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) (see, in particular, the section beginning on page 12 about Automated ML-based detection).
- [GitHub Repo](https://github.com/openai/gpt-2-output-dataset/tree/master/detector)
- [OpenAI Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- [Explore the detector model here](https://huggingface.co/openai-detector)
## Uses
#### Direct Use
The model is a classifier that can be used to detect text generated by GPT-2 models.
#### Downstream Use
The model's developers have stated that they developed and released the model to help with research related to synthetic text generation, so the model could potentially be used for downstream tasks related to synthetic text generation. See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further discussion.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model developers discuss the risk of adversaries using the model to better evade detection in their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), suggesting that using the model for evading detection or for supporting efforts to evade detection would be a misuse of the model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
In their [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), the model developers discuss the risk that the model may be used by bad actors to develop capabilities for evading detection, though one purpose of releasing the model is to help improve detection research.
In a related [blog post](https://openai.com/blog/gpt-2-1-5b-release/), the model developers also discuss the limitations of automated methods for detecting synthetic text and the need to pair automated detection tools with other, non-automated approaches. They write:
> We conducted in-house detection research and developed a detection model that has detection rates of ~95% for detecting 1.5B GPT-2-generated text. We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective.
The model developers also [report](https://openai.com/blog/gpt-2-1-5b-release/) finding that classifying content from larger models is more difficult, suggesting that detection with automated tools like this model will be increasingly difficult as model sizes increase. The authors find that training detector models on the outputs of larger models can improve accuracy and robustness.
#### Bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by RoBERTa large and GPT-2 1.5B (which this model is built/fine-tuned on) can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups (see the [RoBERTa large](https://huggingface.co/roberta-large) and [GPT-2 XL](https://huggingface.co/gpt2-xl) model cards for more information). The developers of this model discuss these issues further in their [paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
## Training
#### Training Data
The model is a sequence classifier based on RoBERTa large (see the [RoBERTa large model card](https://huggingface.co/roberta-large) for more details on the RoBERTa large training data) and then fine-tuned using the outputs of the 1.5B GPT-2 model (available [here](https://github.com/openai/gpt-2-output-dataset)).
#### Training Procedure
The model developers write that:
> We based a sequence classifier on RoBERTaLARGE (355 million parameters) and fine-tuned it to classify the outputs from the 1.5B GPT-2 model versus WebText, the dataset we used to train the GPT-2 model.
They later state:
> To develop a robust detector model that can accurately classify generated texts regardless of the sampling method, we performed an analysis of the model’s transfer performance.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the training procedure.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf).
#### Testing Data, Factors and Metrics
The model is intended to be used for detecting text generated by GPT-2 models, so the model developers test the model on text datasets, measuring accuracy by:
> testing 510-token test examples comprised of 5,000 samples from the WebText dataset and 5,000 samples generated by a GPT-2 model, which were not used during the training.
#### Results
The model developers [find](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf):
> Our classifier is able to detect 1.5 billion parameter GPT-2-generated text with approximately 95% accuracy...The model’s accuracy depends on sampling methods used when generating outputs, like temperature, Top-K, and nucleus sampling ([Holtzman et al., 2019](https://arxiv.org/abs/1904.09751)). Nucleus sampling outputs proved most difficult to correctly classify, but a detector trained using nucleus sampling transfers well across other sampling methods. As seen in Figure 1 [in the paper], we found consistently high accuracy when trained on nucleus sampling.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf), Figure 1 (on page 14) and Figure 2 (on page 16) for full results.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/papers/GPT_2_Report.pdf) for further details on the modeling architecture and training procedure.
## Citation Information
```bibtex
@article{solaiman2019release,
title={Release strategies and the social impacts of language models},
author={Solaiman, Irene and Brundage, Miles and Clark, Jack and Askell, Amanda and Herbert-Voss, Ariel and Wu, Jeff and Radford, Alec and Krueger, Gretchen and Kim, Jong Wook and Kreps, Sarah and others},
journal={arXiv preprint arXiv:1908.09203},
year={2019}
}
```
APA:
- Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., ... & Wang, J. (2019). Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203.
## Model Card Authors
This model card was written by the team at Hugging Face.
## How to Get Started with the Model
More information needed
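A minimal, hedged sketch of one way to load the detector with the 🤗 Transformers text-classification pipeline (the label names returned depend on the id2label mapping shipped with the checkpoint):
```python
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-large-openai-detector")

# score a candidate passage; the returned label and score indicate whether the
# detector considers the text GPT-2-generated
result = detector("The quick brown fox jumps over the lazy dog.")
print(result)
```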
|
073d629786f9574d585a38dfee0a9582
|
Helsinki-NLP/opus-mt-en-dra
|
Helsinki-NLP
|
marian
| 11 | 548 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
|
['en', 'ta', 'kn', 'ml', 'te', 'dra']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,371 | false |
### eng-dra
* source group: English
* target group: Dravidian languages
* OPUS readme: [eng-dra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md)
* model: transformer
* source language(s): eng
* target language(s): kan mal tam tel
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.eval.txt)
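A minimal usage sketch with the MarianMT classes in 🤗 Transformers; the `>>tam<<` token selects Tamil as the target, and any of the valid target IDs listed above (kan, mal, tam, tel) can be substituted:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-dra"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# the sentence-initial >>id<< token picks the target language (here: Tamil)
src_text = [">>tam<< How are you today?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```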
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kan.eng.kan | 4.7 | 0.348 |
| Tatoeba-test.eng-mal.eng.mal | 13.1 | 0.515 |
| Tatoeba-test.eng.multi | 10.7 | 0.463 |
| Tatoeba-test.eng-tam.eng.tam | 9.0 | 0.444 |
| Tatoeba-test.eng-tel.eng.tel | 7.1 | 0.363 |
### System Info:
- hf_name: eng-dra
- source_languages: eng
- target_languages: dra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']
- src_constituents: {'eng'}
- tgt_constituents: {'tam', 'kan', 'mal', 'tel'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: dra
- short_pair: en-dra
- chrF2_score: 0.46299999999999997
- bleu: 10.7
- brevity_penalty: 1.0
- ref_len: 7928.0
- src_name: English
- tgt_name: Dravidian languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: dra
- prefer_old: False
- long_pair: eng-dra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
bef0968740904b33b785ac68037a3717
|
Mizuiro-sakura/deberta-v2-japanese-base-finetuned-commonsenseqa
|
Mizuiro-sakura
|
deberta-v2
| 12 | 19 |
transformers
| 0 |
multiple-choice
| true | false | false |
mit
|
['ja']
|
['wikipedia', 'cc100', 'oscar']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'deberta', 'deberta-v2', 'commonsenseqa', 'commonsense_qa', 'commonsense-qa', 'CommonsenseQA']
| false | true | true | 2,096 | false |
# deberta-v2-base-japanese fine-tuned for CommonsenseQA
This model is deberta-v2-base-japanese fine-tuned on JCommonsenseQA (a multiple-choice question task) from Yahoo Japan's JGLUE benchmark ( https://github.com/yahoojapan/JGLUE ).
You could use this model for CommonsenseQA tasks.
# How to use
Please install transformers, pytorch, sentencepiece, and Juman++.
Running the code below lets the model answer a CommonsenseQA-style question.
```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice
import torch
import numpy as np
# load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-japanese-base-finetuned-commonsenseqa')
model = AutoModelForMultipleChoice.from_pretrained('Mizuiro-sakura/deberta-v2-japanese-base-finetuned-commonsenseqa')
# define the question and the five answer choices
question = '電子機器で使用される最も主要な電子回路基板の事をなんと言う?'
choice1 = '掲示板'
choice2 = 'パソコン'
choice3 = 'マザーボード'
choice4 = 'ハードディスク'
choice5 = 'まな板'
# tokenize (encode) the question paired with each choice
token = tokenizer([question,question,question,question,question],[choice1,choice2,choice3,choice4,choice5],return_tensors='pt',padding=True)
leng=len(token['input_ids'][0])
# reshape into (batch, num_choices, seq_len) arrays expected by the model
X1 = np.empty(shape=(1, 5, leng))
X2 = np.empty(shape=(1, 5, leng))
X1[0, :, :] = token['input_ids']
X2[0, :, :] = token['attention_mask']
# feed the tokens to the model
results = model(torch.tensor(X1).to(torch.int64),torch.tensor(X2).to(torch.int64))
# take the index of the highest logit (the predicted choice)
max_result=torch.argmax(results.logits)
print(max_result)
```
# Model accuracy
79.80339588918764
(for reference, BERT: 72.0, XLM-RoBERTa base: 68.7, LUKE: 80.0)
# What is deberta-v2-base-japanese?
It is a model trained on Japanese Wikipedia (3.2GB), CC-100 (85GB), and OSCAR (54GB).
It was released by the Kurohashi Lab at Kyoto University.
# Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
# Acknowledgments
I would like to thank the Kurohashi Lab at Kyoto University for releasing the base model.
|
c8292058b0082d5d8161d98bec7be357
|
anas-awadalla/t5-base-few-shot-k-512-finetuned-squad-seed-0
|
anas-awadalla
|
t5
| 17 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 957 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
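The card gives no usage example; below is a minimal, hedged sketch using the text2text-generation pipeline. The exact input format used during fine-tuning is not documented here, so the `question: ... context: ...` layout is an assumption based on common SQuAD-style text-to-text setups.
```python
from transformers import pipeline

qa = pipeline(
    "text2text-generation",
    model="anas-awadalla/t5-base-few-shot-k-512-finetuned-squad-seed-0",
)

# the prompt layout below is an assumption; adjust it to match the training script if it differs
prompt = (
    "question: What dataset was the model fine-tuned on? "
    "context: This checkpoint was produced by fine-tuning google/t5-v1_1-base on the SQuAD dataset."
)
print(qa(prompt, max_length=32))
```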
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
34249c010d7cc93e7ca6bfdaf6eedc2e
|
limsc/reqroberta-tapt-epoch33
|
limsc
|
roberta
| 4 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,325 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# reqroberta-tapt-epoch33
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
No evaluation results were recorded for this checkpoint.
## Model description
More information needed
## Intended uses & limitations
More information needed
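No usage example is provided; the sketch below loads the checkpoint for masked-token prediction, assuming the repository ships TensorFlow weights (the card was generated from a Keras callback). The example sentence is only illustrative.
```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

model_name = "limsc/reqroberta-tapt-epoch33"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForMaskedLM.from_pretrained(model_name)

# RoBERTa-style checkpoints use <mask> as the mask token
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer, framework="tf")
print(fill_mask("The system shall <mask> all user passwords at rest."))
```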
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
0300075cd6fc647379c6030d5765f62e
|
Nanatan/distilbert-base-uncased-finetuned-emotion
|
Nanatan
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9215
- F1: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
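No usage example is given; a minimal sketch with the text-classification pipeline follows. The emotion dataset has six classes (sadness, joy, love, anger, fear, surprise), but the labels printed here may appear as LABEL_0 to LABEL_5 if no id2label mapping was saved with the checkpoint.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Nanatan/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't stop smiling, today was absolutely wonderful!"))
```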
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.798 | 1.0 | 250 | 0.3098 | 0.899 | 0.8956 |
| 0.2422 | 2.0 | 500 | 0.2169 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
35da484db079823b9c69ae5030136269
|
Arashasg/WikiBert2WikiBert
|
Arashasg
|
encoder-decoder
| 6 | 0 |
transformers
| 1 |
text2text-generation
| true | false | false |
['apache-2.0']
|
['fa']
|
['pn-summary', 'XL-Sum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Wikipedia', 'Summarizer', 'bert2bert', 'Summarization']
| false | true | true | 1,612 | false |
# WikiBert2WikiBert
Bert language models can be employed for Summarization tasks. WikiBert2WikiBert is an encoder-decoder transformer model that is initialized using the Persian WikiBert Model weights. The WikiBert Model is a Bert language model which is fine-tuned on Persian Wikipedia. After using the WikiBert weights for initialization, the model is trained for five epochs on PN-summary and Persian BBC datasets.
## How to Use:
You can use the code below to get the model's outputs, or simply use the hosted inference widget on the model page.
```python
import torch
from transformers import (
    BertTokenizerFast,
    EncoderDecoderConfig,
    EncoderDecoderModel,
)

model_name = 'Arashasg/WikiBert2WikiBert'
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = BertTokenizerFast.from_pretrained(model_name)
config = EncoderDecoderConfig.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name, config=config).to(device)

def generate_summary(text):
    # encode the article and move the tensors to the same device as the model
    inputs = tokenizer(text, padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)
    outputs = model.generate(input_ids, attention_mask=attention_mask)
    output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return output_str

input_text = 'your input comes here'
summary = generate_summary(input_text)
```
## Evaluation
I set aside 5 percent of the pn-summary dataset for evaluating the model. The ROUGE scores of the model are as follows:
| Rouge-1 | Rouge-2 | Rouge-l |
| ------------- | ------------- | ------------- |
| 38.97% | 18.42% | 34.50% |
|
3245cef6032b3490c0a51adfab16539b
|
Lvxue/distilled-mt5-small-b1.5
|
Lvxue
|
mt5
| 17 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en', 'ro']
|
['wmt16']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,035 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-b1.5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7938
- Bleu: 7.5422
- Gen Len: 44.3267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
35b949c25f291f9284355a63585e11e5
|
jonatasgrosman/exp_w2v2t_ja_wav2vec2_s834
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ja']
| false | true | true | 456 | false |
# exp_w2v2t_ja_wav2vec2_s834
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
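A minimal transcription sketch with the same HuggingSound tool (the audio paths are placeholders; recordings should be sampled at 16kHz):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ja_wav2vec2_s834")
audio_paths = ["/path/to/sample1.wav", "/path/to/sample2.wav"]  # placeholder 16kHz recordings
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```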
|
0bbbd177749afe922a25ab702d0a7a4c
|
jonatasgrosman/exp_w2v2t_de_unispeech_s62
|
jonatasgrosman
|
unispeech
| 10 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de']
| false | true | true | 468 | false |
# exp_w2v2t_de_unispeech_s62
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
afaee7e921eb371e6f17a44eb169613a
|
kadirnar/yolov8m-v8.0
|
kadirnar
| null | 3 | 0 | null | 0 |
object-detection
| false | false | false |
gpl-3.0
| null |
['detection-datasets/coco']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['object-detection', 'computer-vision', 'yolov8', 'yolov5']
| false | true | true | 457 | false |
### Model Description
[Ultralytics](https://github.com/ultralytics/ultralytics/): YOLOv8 in PyTorch > ONNX > CoreML > TFLite
### Installation
```
pip install ultralytics
```
### Yolov8 Inference
```python
from ultralytics import YOLO

conf_threshold = 0.25  # minimum confidence score to keep a detection
iou_threshold = 0.45   # IoU threshold used for non-maximum suppression
image = 'test.jpg'     # path to an input image
image_size = 640       # inference resolution

model = YOLO('kadirnar/yolov8m-v8.0')
model.conf = conf_threshold
model.iou = iou_threshold
prediction = model.predict(image, imgsz=image_size, show=False, save=False)
```
### BibTeX Entry and Citation Info
```
```
|
017ede23e612ad82b4daf3d619955461
|
yanaiela/roberta-base-epoch_58
|
yanaiela
|
roberta
| 9 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_58']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 58
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple statistics from the data,
such as co-occurrences, affect model predictions; this work is described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_58.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```python
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_58', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
4d0543a905779a1b260bd398ca7ab02b
|
lantianai/Stable_Diffusion_Inpainting_Mask_EulerA
|
lantianai
| null | 17 | 9 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
| false | true | true | 14,470 | false |
Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.
The **Stable-Diffusion-Inpainting** checkpoint was initialized with the weights of [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original). It received 595k steps of regular training, then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+”, with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% of cases mask everything.
[](https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
:-------------------------:|:-------------------------:|
## Examples:
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```python
import torch
from diffusers import StableDiffusionInpaintPipeline
pipe = StableDiffusionInpaintPipeline.from_pretrained(
"runwayml/stable-diffusion-inpainting",
revision="fp16",
torch_dtype=torch.float16,
)
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
#image and mask_image should be PIL images.
#The mask structure is white for inpainting and black for keeping as is
image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
image.save("./yellow_cat_on_park_bench.png")
```
**How it works:**
`image` | `mask_image`
:-------------------------:|:-------------------------:|
<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="300"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="300"/>
`prompt` | `Output`
:-------------------------:|:-------------------------:|
<span style="position: relative;bottom: 150px;">Face of a yellow cat, high resolution, sitting on a park bench</span> | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="300"/>
### Original GitHub Repository
1. Download the weights [sd-v1-5-inpainting.ckpt](https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt)
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/runwayml/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
We currently provide six checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt`, `sd-v1-3.ckpt`, `sd-v1-4.ckpt`, `sd-v1-5.ckpt` and `sd-v1-5-inpainting.ckpt`,
which were trained as follows,
- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-4.ckpt`: Resumed from `sd-v1-2.ckpt`. 225,000 steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- `sd-v1-5.ckpt`: Resumed from sd-v1-2.ckpt. 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
- `sd-v1-5-inpaint.ckpt`: Resumed from sd-v1-2.ckpt. 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling. Then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Inpainting Evaluation
To assess the performance of the inpainting model, we used the same evaluation
protocol as in our [LDM paper](https://arxiv.org/abs/2112.10752). Since the
Stable Diffusion Inpainting Model accepts a text input, we simply used a fixed
prompt of `photograph of a beautiful empty scene, highest quality settings`.
| Model | FID | LPIPS |
|-----------------------------|------|------------------|
| Stable Diffusion Inpainting | 1.00 | 0.141 (+- 0.082) |
| Latent Diffusion Inpainting | 1.50 | 0.137 (+- 0.080) |
| CoModGAN | 1.82 | 0.15 |
| LaMa | 2.21 | 0.134 (+- 0.080) |
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
dfee420109622375cef8e550c576a5e3
|
Akshat/xlm-roberta-base-finetuned-panx-de
|
Akshat
|
xlm-roberta
| 24 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- F1: 0.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
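No usage example is given; a minimal sketch of running the checkpoint through the token-classification pipeline follows (the entity labels that come back depend on the id2label mapping stored with this fine-tuned checkpoint, which is not shown in the card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Akshat/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-tokens into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```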
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2542 | 1.0 | 787 | 0.1788 | 0.8083 |
| 0.1307 | 2.0 | 1574 | 0.1371 | 0.8488 |
| 0.0784 | 3.0 | 2361 | 0.1405 | 0.8611 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bec9cc10fa2786cbfc4b0d8364a62f02
|
KoichiYasuoka/deberta-small-coptic-upos
|
KoichiYasuoka
|
deberta-v2
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
|
['cop']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coptic', 'token-classification', 'pos', 'dependency-parsing']
| false | true | true | 891 | false |
# deberta-small-coptic-upos
## Model Description
This is a DeBERTa(V2) model pre-trained with [UD_Coptic](https://universaldependencies.org/cop/) for POS-tagging and dependency-parsing, derived from [deberta-small-coptic](https://huggingface.co/KoichiYasuoka/deberta-small-coptic). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-small-coptic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-small-coptic-upos")
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-small-coptic-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
896cc8b867076087d4b3bdeed82e232b
|
research-backup/t5-small-squadshifts-vanilla-new_wiki-qg
|
research-backup
|
t5
| 34 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_squadshifts']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 4,153 | false |
# Model Card of `research-backup/t5-small-squadshifts-vanilla-new_wiki-qg`
This model is fine-tuned version of [t5-small](https://huggingface.co/t5-small) for question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: new_wiki) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (new_wiki)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-squadshifts-vanilla-new_wiki-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-squadshifts-vanilla-new_wiki-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-squadshifts-vanilla-new_wiki-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json)
| | Score | Type | Dataset |
|:-----------|--------:|:---------|:---------------------------------------------------------------------------|
| BERTScore | 83.08 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 6.9 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 2.75 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 1.38 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 0.81 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 8.26 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 52.25 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 8.85 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: new_wiki
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 32
- lr: 1e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-squadshifts-vanilla-new_wiki-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
8a8783966dbdd5551d6bf08b71060e73
|
apatidar0/bert-finetuned-squad
|
apatidar0
|
bert
| 12 | 9 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 930 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
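No usage example is included; a minimal sketch with the question-answering pipeline follows (the question and context are only illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="apatidar0/bert-finetuned-squad")
result = qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint was produced by fine-tuning bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```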
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
1497fc25ce86602416899db224685490
|
soorya12/t5-small-finetuned-on-cloudsek-data-assignment
|
soorya12
|
t5
| 7 | 3 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,392 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-on-cloudsek-data-assignment
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8961
- Validation Loss: 1.8481
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 6744, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.8958 | 1.8481 | 0 |
| 1.8961 | 1.8481 | 1 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f9c79e88b84205719724b5ef66efff38
|
tomekkorbak/serene_goldberg
|
tomekkorbak
| null | 2 | 0 | null | 0 | null | false | false | false |
mit
|
['en']
|
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,797 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# serene_goldberg
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 25000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
```
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1661599744},
'generation': {'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '81a1701e025d2c65ae6e8c2103df559071523ee0'},
'path_or_name': 'tomekkorbak/goofy_pasteur'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'serene_goldberg',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1661599744,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/sj097lix
|
ade338acb558ac9b7768a3a81c644b7a
|
frtna/jwt300_mt-Italian-to-Spanish_transformers
|
frtna
|
t5
| 10 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['new_dataset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,293 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jwt300_mt-Italian-to-Spanish_transformers
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the new_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4425
- Sacrebleu: 0.9057
- Gen Len: 18.1276
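As a minimal inference sketch (not part of the original card; the Italian example sentence, generation settings, and the absence of a task prefix are assumptions), the checkpoint could be loaded with the `text2text-generation` pipeline:
```python
# Hedged usage sketch: load the fine-tuned T5 checkpoint and translate one sentence.
# Whether this fine-tune expects a T5-style task prefix is not documented here.
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="frtna/jwt300_mt-Italian-to-Spanish_transformers",
)
print(translator("Il gatto dorme sul divano.", max_length=64))
```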
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 2.7545 | 1.0 | 2229 | 2.4425 | 0.9057 | 18.1276 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
8871f6aad5a3721d0751fb52e5489f49
|
xinranyyyy/roberta_checkpoint-finetuned-squad
|
xinranyyyy
|
roberta
| 13 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,285 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_checkpoint-finetuned-squad
This model is a fine-tuned version of [WillHeld/roberta-base-coqa](https://huggingface.co/WillHeld/roberta-base-coqa) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8969
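A minimal usage sketch (not part of the original card; the question/context pair is made up):
```python
# Hedged sketch: extractive question answering with the fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "xinranyyyy/roberta_checkpoint-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Which dataset was the model fine-tuned on?"
context = "The checkpoint was fine-tuned on the SQuAD question answering dataset."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```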
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8504 | 1.0 | 5536 | 0.8424 |
| 0.6219 | 2.0 | 11072 | 0.8360 |
| 0.4807 | 3.0 | 16608 | 0.8969 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
9d63a4dd5eccef854026bd9714f377c6
|
MultiversexPeeps/art-of-wave
|
MultiversexPeeps
| null | 21 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 866 | false |
### Art of Wave Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Information on this model will be here: https://civitai.com/user/duskfallcrew
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
Use the token `wvert1` in your prompt.
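A minimal `diffusers` sketch for local inference (not part of the original card; the scheduler defaults, fp16 precision, and the prompt wording are assumptions):
```python
# Hedged sketch: load the DreamBooth checkpoint and generate an image with the wvert1 token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MultiversexPeeps/art-of-wave", torch_dtype=torch.float16
).to("cuda")

image = pipe("wvert1, a sailboat riding a giant wave at sunset").images[0]
image.save("art_of_wave.png")
```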
|
47c506bef435bfe5bdfef2f9ff38eb83
|
jonatasgrosman/exp_w2v2t_it_unispeech_s156
|
jonatasgrosman
|
unispeech
| 10 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['it']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'it']
| false | true | true | 469 | false |
# exp_w2v2t_it_unispeech_s156
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
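A minimal transcription sketch with the same HuggingSound tool (not part of the original card; the audio paths are placeholders):
```python
# Hedged sketch: transcribe Italian audio files (sampled at 16kHz) with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_it_unispeech_s156")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```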
|
09434c314d446797613cd1a4bc291132
|
BSC-LT/roberta-large-bne-sqac
|
BSC-LT
|
roberta
| 9 | 0 |
transformers
| 3 |
question-answering
| true | false | false |
apache-2.0
|
['es']
|
['BSC-TeMU/SQAC']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['national library of spain', 'spanish', 'bne', 'qa', 'question answering']
| false | true | true | 1,627 | false |
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL AND WILL SOON BE REMOVED:** https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-sqac
# Spanish RoBERTa-large trained on BNE finetuned for Spanish Question Answering Corpus (SQAC) dataset.
RoBERTa-large-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) large model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-large-bne
## Dataset
The dataset used is the [SQAC corpus](https://huggingface.co/datasets/BSC-TeMU/SQAC).
## Evaluation and results
F1 Score: 0.7993 (average of 5 runs).
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-SANIDAD/lm-spanish).
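A minimal usage sketch (not part of the original card; the Spanish question/context pair is made up):
```python
# Hedged sketch: extractive QA in Spanish via the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="BSC-LT/roberta-large-bne-sqac")
result = qa(
    question="¿Dónde se encuentra la Biblioteca Nacional de España?",
    context="La Biblioteca Nacional de España se encuentra en Madrid.",
)
print(result["answer"], result["score"])
```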
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@misc{gutierrezfandino2021spanish,
title={Spanish Language Models},
author={Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquín Silveira-Ocampo and Casimiro Pio Carrino and Aitor Gonzalez-Agirre and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Marta Villegas},
year={2021},
eprint={2107.07253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
2a5316c84b941b77b2a27697575325c7
|
mrm8488/wav2vec2-large-xlsr-53-euskera
|
mrm8488
|
wav2vec2
| 11 | 10 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['eu']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,328 | false |
# Wav2Vec2-Large-XLSR-53-euskera
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Euskera using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Euskera test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
model = Wav2Vec2ForCTC.from_pretrained("mrm8488/wav2vec2-large-xlsr-53-euskera")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.03 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found ???
|
d2047408958137520ac739d879343ee9
|
sgugger/finetuned-bert
|
sgugger
|
bert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,378 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3916
- Accuracy: 0.875
- F1: 0.9125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.581 | 1.0 | 230 | 0.4086 | 0.8260 | 0.8711 |
| 0.366 | 2.0 | 460 | 0.3758 | 0.8480 | 0.8963 |
| 0.2328 | 3.0 | 690 | 0.3916 | 0.875 | 0.9125 |
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.8.1.dev0
- Tokenizers 0.10.1
|
8902b10dd345b87030d2a1d9560b81cd
|
poloclub/RobArch
|
poloclub
| null | 5 | 0 | null | 2 | null | false | false | false |
mit
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['adversarial machine learning']
| false | true | true | 3,920 | false |
## RobArch: Designing Robust Architectures against Adversarial Attacks
*ShengYun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin*
Check https://github.com/ShengYun-Peng/RobArch for the complete code.
### Abstract
Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs). However, compared to the large body of research in optimizing the adversarial training process, there are few investigations into how architecture components affect robustness, and they rarely constrain model capacity. Thus, it is unclear where robustness precisely comes from. In this work, we present the first large-scale systematic study on the robustness of DNN architecture components under fixed parameter budgets. Through our investigation, we distill 18 actionable robust network design guidelines that empower model developers to gain deep insights. We demonstrate these guidelines' effectiveness by introducing the novel Robust Architecture (RobArch) model that instantiates the guidelines to build a family of top-performing models across parameter capacities against strong adversarial attacks. RobArch achieves the new state-of-the-art AutoAttack accuracy on the RobustBench ImageNet leaderboard.
### Prerequisites
1. Register Weights & Biases [account](https://wandb.ai/site)
2. Prepare ImageNet via [Fast AT - Installation step 3 & 4](https://github.com/locuslab/fast_adversarial/tree/master/ImageNet)
> Run step 4 only if you want to use Fast-AT.
3. Set up venv:
```bash
make .venv_done
```
### Training
Fast-AT is much faster than standard PGD AT. For RobArch-S, Fast-AT takes ~1.5 days on 2 Nvidia A100s, while standard PGD AT takes ~5 days on 4 Nvidia A100s.
#### Torchvision models - Fast AT (e.g., ResNet-50)
```bash
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/Torch_ResNet50/.done_test_pgd
```
If you want to test other off-the-shelf models in [torchvision](https://pytorch.org/vision/stable/models.html#classification), add the model name in [MODEL.mk](MODEL.mk), and create a new make target by following other ResNets/WideResNets in [Makefile](Makefile).
#### RobArch - Fast AT (e.g., RobArch-S)
```bash
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/RobArch_S/.done_test_pgd
```
#### RobArch - Standard PGD AT (e.g., RobArch-S)
```bash
# Training
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_train
# Evaluation on PGD
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_test_pgd
# Evaluation on AutoAttack
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_test_aa
# Pretrained models evaluated on AutoAttack
make BASE=<imagenet root dir> WANDB_ACCOUNT=<name> experiments/PGDAT_RobArch_S/.done_test_pretrained
```
### Pretrained models
- ImageNet $\ell_\infty$-norm
| Architecture | #Param | Natural | AutoAttack | PGD10-4 | PGD50-4 | PGD100-4 | PGD100-2 | PGD100-8 |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| [RobArch-S](https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_s.pt) | 26M | 70.17% | 44.14% | 48.19% | 47.78% | 47.77% | 60.06% | 21.77% |
| [RobArch-M](https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_m.pt) | 46M | 71.88% | 46.26% | 49.84% | 49.32% | 49.30% | 61.89% | 23.01% |
| [RobArch-L](https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_l.pt) | 104M | 73.44% | 48.94% | 51.72% | 51.04% | 51.03% | 63.49% | 25.31% |
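As a hedged sketch (not part of the original card), the checkpoints above can be downloaded as plain PyTorch files; building the RobArch architecture itself still requires the code from the GitHub repository linked above, so the constructor below is hypothetical:
```python
# Hedged sketch: download the RobArch-S checkpoint. The layout of the saved file
# (raw state dict vs. wrapped checkpoint) is an assumption, and build_robarch_s()
# is a hypothetical constructor that would come from the authors' GitHub repo.
import torch

url = "https://huggingface.co/poloclub/RobArch/resolve/main/pretrained/robarch_s.pt"
checkpoint = torch.hub.load_state_dict_from_url(url, map_location="cpu")
print(type(checkpoint))

# model = build_robarch_s()
# model.load_state_dict(checkpoint)
```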
### Citation
```bibtex
@misc{peng2023robarch,
title={RobArch: Designing Robust Architectures against Adversarial Attacks},
author={ShengYun Peng and Weilin Xu and Cory Cornelius and Kevin Li and Rahul Duggal and Duen Horng Chau and Jason Martin},
year={2023},
eprint={2301.03110},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
3a9703ff3b32d6c37c93b4712d9aa03a
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128
|
gokuls
|
mobilebert
| 17 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,601 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_wnli_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5913
- Accuracy: 0.1408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3404 | 1.0 | 435 | 0.5913 | 0.1408 |
| 0.3027 | 2.0 | 870 | 0.5985 | 0.1127 |
| 0.2935 | 3.0 | 1305 | 0.6351 | 0.1127 |
| 0.2884 | 4.0 | 1740 | 0.6013 | 0.0986 |
| 0.2838 | 5.0 | 2175 | 0.6154 | 0.0986 |
| 0.2788 | 6.0 | 2610 | 0.6608 | 0.0845 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
8f21f0cbd093e88415a44bde235bd404
|
fathyshalab/massive_play-roberta-large-v1-4-71
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,458 | false |
# fathyshalab/massive_play-roberta-large-v1-4-71
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-4-71")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
990a3fc065f7b9bfb738c742f8d40994
|
jonatasgrosman/exp_w2v2t_ar_wavlm_s95
|
jonatasgrosman
|
wavlm
| 10 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ar']
| false | true | true | 438 | false |
# exp_w2v2t_ar_wavlm_s95
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
8f9f0a51bff280b0a40b5fc87811e51e
|
SayakRana/finetuning-hate-model-electra
|
SayakRana
|
electra
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,107 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-hate-model-electra
This model is a fine-tuned version of [cross-encoder/ms-marco-electra-base](https://huggingface.co/cross-encoder/ms-marco-electra-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1691
- Accuracy: 0.9597
- F1: 0.3448
- Precision: 0.4545
- Recall: 0.2778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
03bb7dd4922dc3ef93f5140120d23b53
|
cointegrated/rubert-tiny
|
cointegrated
|
bert
| 8 | 7,117 |
transformers
| 12 |
feature-extraction
| true | false | false |
mit
|
['ru', 'en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['russian', 'fill-mask', 'pretraining', 'embeddings', 'masked-lm', 'tiny', 'feature-extraction', 'sentence-similarity']
| false | true | true | 2,169 | false |
This is a very small distilled version of the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model for Russian and English (45 MB, 12M parameters). There is also an **updated version of this model**, [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2), with a larger vocabulary and better quality on practically all Russian NLU tasks.
This model is useful if you want to fine-tune it for a relatively simple Russian task (e.g. NER or sentiment classification), and you care more about speed and size than about accuracy. It is approximately x10 smaller and faster than a base-sized BERT. Its `[CLS]` embeddings can be used as a sentence representation aligned between Russian and English.
It was trained on the [Yandex Translate corpus](https://translate.yandex.ru/corpus), [OPUS-100](https://huggingface.co/datasets/opus100) and [Tatoeba](https://huggingface.co/datasets/tatoeba), using MLM loss (distilled from [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)), translation ranking loss, and `[CLS]` embeddings distilled from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), [rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence), Laser and USE.
There is a more detailed [description in Russian](https://habr.com/ru/post/562064/).
Sentence embeddings can be produced as follows:
```python
# pip install transformers sentencepiece
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny")
model = AutoModel.from_pretrained("cointegrated/rubert-tiny")
# model.cuda() # uncomment it if you have a GPU
def embed_bert_cls(text, model, tokenizer):
t = tokenizer(text, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
model_output = model(**{k: v.to(model.device) for k, v in t.items()})
embeddings = model_output.last_hidden_state[:, 0, :]
embeddings = torch.nn.functional.normalize(embeddings)
return embeddings[0].cpu().numpy()
print(embed_bert_cls('привет мир', model, tokenizer).shape)
# (312,)
```
|
0777b4fbb92b37f510d8b2bf91ad370f
|
NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric
|
NahedAbdelgaber
|
distilbert
| 12 | 10 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,625 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# evaluating-student-writing-distibert-ner-with-metric
This model is a fine-tuned version of [NahedAbdelgaber/evaluating-student-writing-distibert-ner](https://huggingface.co/NahedAbdelgaber/evaluating-student-writing-distibert-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7535
- Precision: 0.0614
- Recall: 0.2590
- F1: 0.0993
- Accuracy: 0.6188
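A minimal inference sketch (not part of the original card; the example sentence is made up):
```python
# Hedged sketch: token classification with aggregated entity spans.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NahedAbdelgaber/evaluating-student-writing-distibert-ner-with-metric",
    aggregation_strategy="simple",
)
print(ner("In my opinion, schools should offer more extracurricular activities."))
```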
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7145 | 1.0 | 1755 | 0.7683 | 0.0546 | 0.2194 | 0.0875 | 0.6191 |
| 0.6608 | 2.0 | 3510 | 0.7504 | 0.0570 | 0.2583 | 0.0934 | 0.6136 |
| 0.5912 | 3.0 | 5265 | 0.7535 | 0.0614 | 0.2590 | 0.0993 | 0.6188 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
a6748dc709d36949a695de8c52a8dbb1
|
danieleV9H/hubert-base-libri-clean-ft100h-v3
|
danieleV9H
|
hubert
| 12 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['librispeech_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'hf-asr-leaderboard', 'hf-asr-leaderboard']
| true | true | true | 4,781 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-libri-clean-ft100h-v3
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1120
- Wer: 0.1332
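A minimal inference sketch (not part of the original card; the audio path is a placeholder and the file is assumed to be 16kHz mono):
```python
# Hedged sketch: transcribe a LibriSpeech-style audio file with the ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="danieleV9H/hubert-base-libri-clean-ft100h-v3",
)
print(asr("/path/to/librispeech_sample.flac")["text"])
```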
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.201 | 0.14 | 250 | 3.9799 | 1.0 |
| 2.8893 | 0.28 | 500 | 3.4838 | 1.0 |
| 2.8603 | 0.42 | 750 | 3.3505 | 1.0 |
| 2.7216 | 0.56 | 1000 | 2.1194 | 0.9989 |
| 1.3372 | 0.7 | 1250 | 0.8124 | 0.6574 |
| 0.8238 | 0.84 | 1500 | 0.5712 | 0.5257 |
| 0.6449 | 0.98 | 1750 | 0.4442 | 0.4428 |
| 0.5241 | 1.12 | 2000 | 0.3442 | 0.3672 |
| 0.4458 | 1.26 | 2250 | 0.2850 | 0.3186 |
| 0.3959 | 1.4 | 2500 | 0.2507 | 0.2882 |
| 0.3641 | 1.54 | 2750 | 0.2257 | 0.2637 |
| 0.3307 | 1.68 | 3000 | 0.2044 | 0.2434 |
| 0.2996 | 1.82 | 3250 | 0.1969 | 0.2313 |
| 0.2794 | 1.96 | 3500 | 0.1823 | 0.2193 |
| 0.2596 | 2.1 | 3750 | 0.1717 | 0.2096 |
| 0.2563 | 2.24 | 4000 | 0.1653 | 0.2000 |
| 0.2532 | 2.38 | 4250 | 0.1615 | 0.1971 |
| 0.2376 | 2.52 | 4500 | 0.1559 | 0.1916 |
| 0.2341 | 2.66 | 4750 | 0.1494 | 0.1855 |
| 0.2102 | 2.8 | 5000 | 0.1464 | 0.1781 |
| 0.2222 | 2.94 | 5250 | 0.1399 | 0.1732 |
| 0.2081 | 3.08 | 5500 | 0.1450 | 0.1707 |
| 0.1963 | 3.22 | 5750 | 0.1337 | 0.1655 |
| 0.2107 | 3.36 | 6000 | 0.1344 | 0.1633 |
| 0.1866 | 3.5 | 6250 | 0.1339 | 0.1611 |
| 0.186 | 3.64 | 6500 | 0.1311 | 0.1563 |
| 0.1703 | 3.78 | 6750 | 0.1307 | 0.1537 |
| 0.1819 | 3.92 | 7000 | 0.1277 | 0.1555 |
| 0.176 | 4.06 | 7250 | 0.1280 | 0.1515 |
| 0.1837 | 4.2 | 7500 | 0.1249 | 0.1504 |
| 0.1678 | 4.34 | 7750 | 0.1236 | 0.1480 |
| 0.1624 | 4.48 | 8000 | 0.1194 | 0.1456 |
| 0.1631 | 4.62 | 8250 | 0.1215 | 0.1462 |
| 0.1736 | 4.76 | 8500 | 0.1192 | 0.1451 |
| 0.1752 | 4.9 | 8750 | 0.1206 | 0.1432 |
| 0.1578 | 5.04 | 9000 | 0.1151 | 0.1415 |
| 0.1537 | 5.18 | 9250 | 0.1185 | 0.1402 |
| 0.1771 | 5.33 | 9500 | 0.1165 | 0.1414 |
| 0.1481 | 5.47 | 9750 | 0.1152 | 0.1413 |
| 0.1509 | 5.61 | 10000 | 0.1152 | 0.1382 |
| 0.146 | 5.75 | 10250 | 0.1133 | 0.1385 |
| 0.1464 | 5.89 | 10500 | 0.1139 | 0.1371 |
| 0.1442 | 6.03 | 10750 | 0.1162 | 0.1365 |
| 0.128 | 6.17 | 11000 | 0.1147 | 0.1371 |
| 0.1381 | 6.31 | 11250 | 0.1148 | 0.1378 |
| 0.1343 | 6.45 | 11500 | 0.1113 | 0.1363 |
| 0.1325 | 6.59 | 11750 | 0.1134 | 0.1355 |
| 0.1442 | 6.73 | 12000 | 0.1142 | 0.1358 |
| 0.1286 | 6.87 | 12250 | 0.1133 | 0.1352 |
| 0.1349 | 7.01 | 12500 | 0.1129 | 0.1344 |
| 0.1338 | 7.15 | 12750 | 0.1131 | 0.1328 |
| 0.1403 | 7.29 | 13000 | 0.1124 | 0.1338 |
| 0.1314 | 7.43 | 13250 | 0.1141 | 0.1335 |
| 0.1283 | 7.57 | 13500 | 0.1124 | 0.1332 |
| 0.1347 | 7.71 | 13750 | 0.1107 | 0.1332 |
| 0.1195 | 7.85 | 14000 | 0.1119 | 0.1332 |
| 0.1326 | 7.99 | 14250 | 0.1120 | 0.1332 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
fd4250de656f0dfd56003c0ee782634d
|
paola-md/distil-tis
|
paola-md
|
roberta
| 6 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,578 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-tis
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6061
- Rmse: 0.7785
- Mse: 0.6061
- Mae: 0.6003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.7173 | 1.0 | 492 | 0.7060 | 0.8403 | 0.7060 | 0.5962 |
| 0.5955 | 2.0 | 984 | 0.6585 | 0.8115 | 0.6585 | 0.5864 |
| 0.5876 | 3.0 | 1476 | 0.6090 | 0.7804 | 0.6090 | 0.6040 |
| 0.5871 | 4.0 | 1968 | 0.6247 | 0.7904 | 0.6247 | 0.5877 |
| 0.5871 | 5.0 | 2460 | 0.6061 | 0.7785 | 0.6061 | 0.6003 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
2250a88967c4b4f1c7b6d5f743a23243
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-5
|
SetFit
|
distilbert
| 10 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,833 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__hate_speech_offensive__train-8-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7214
- Accuracy: 0.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0995 | 1.0 | 5 | 1.1301 | 0.0 |
| 1.0227 | 2.0 | 10 | 1.1727 | 0.0 |
| 1.0337 | 3.0 | 15 | 1.1734 | 0.2 |
| 0.9137 | 4.0 | 20 | 1.1829 | 0.2 |
| 0.8065 | 5.0 | 25 | 1.1496 | 0.4 |
| 0.7038 | 6.0 | 30 | 1.1101 | 0.4 |
| 0.6246 | 7.0 | 35 | 1.0982 | 0.2 |
| 0.4481 | 8.0 | 40 | 1.0913 | 0.2 |
| 0.3696 | 9.0 | 45 | 1.0585 | 0.4 |
| 0.3137 | 10.0 | 50 | 1.0418 | 0.4 |
| 0.2482 | 11.0 | 55 | 1.0078 | 0.4 |
| 0.196 | 12.0 | 60 | 0.9887 | 0.6 |
| 0.1344 | 13.0 | 65 | 0.9719 | 0.6 |
| 0.1014 | 14.0 | 70 | 1.0053 | 0.6 |
| 0.111 | 15.0 | 75 | 0.9653 | 0.6 |
| 0.0643 | 16.0 | 80 | 0.9018 | 0.6 |
| 0.0559 | 17.0 | 85 | 0.9393 | 0.6 |
| 0.0412 | 18.0 | 90 | 1.0210 | 0.6 |
| 0.0465 | 19.0 | 95 | 0.9965 | 0.6 |
| 0.0328 | 20.0 | 100 | 0.9739 | 0.6 |
| 0.0289 | 21.0 | 105 | 0.9796 | 0.6 |
| 0.0271 | 22.0 | 110 | 0.9968 | 0.6 |
| 0.0239 | 23.0 | 115 | 1.0143 | 0.6 |
| 0.0201 | 24.0 | 120 | 1.0459 | 0.6 |
| 0.0185 | 25.0 | 125 | 1.0698 | 0.6 |
| 0.0183 | 26.0 | 130 | 1.0970 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
d708c5cba870a84727fe0421179502a6
|
globuslabs/ScholarBERT_1
|
globuslabs
|
bert
| 8 | 10 |
transformers
| 1 |
fill-mask
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
['science', 'multi-displinary']
| false | true | true | 1,756 | false |
# ScholarBERT_1 Model
This is the **ScholarBERT_1** variant of the ScholarBERT model family.
The model is pretrained on a large collection of scientific research articles (**2.2B tokens**).
This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default.
The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters.
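A minimal usage sketch (not part of the original card; the masked sentence is made up and the `[MASK]` token string assumes the standard BERT convention):
```python
# Hedged sketch: masked-token prediction with the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="globuslabs/ScholarBERT_1")
for pred in fill("The mitochondrion is the [MASK] of the cell."):
    print(pred["token_str"], round(pred["score"], 3))
```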
# Model Architecture
| Hyperparameter | Value |
|-----------------|:-------:|
| Layers | 24 |
| Hidden Size | 1024 |
| Attention Heads | 16 |
| Total Parameters | 340M |
# Training Dataset
The vocab and the model are pretrained on **1% of the PRD** scientific literature dataset.
The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”),
a nonprofit organization based in California. This dataset was constructed from a corpus
of journal article files, from which we successfully extracted text from 75,496,055 articles from 178,928 journals.
The articles span across Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences,
Social Sciences, and Technology. The distribution of articles is shown below.

# BibTeX entry and citation info
If using this model, please cite this paper:
```
@misc{hong2022scholarbert,
doi = {10.48550/ARXIV.2205.11342},
url = {https://arxiv.org/abs/2205.11342},
author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian},
title = {ScholarBERT: Bigger is Not Always Better},
publisher = {arXiv},
year = {2022}
}
```
|
90f336c65c82322f5980e3bebe033937
|
SampsonChris/sentiment_analysis_on_covid_tweets
|
SampsonChris
|
distilbert
| 18 | 15 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,122 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_analysis_on_covid_tweets
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6053
- eval_accuracy: 0.7625
- eval_runtime: 33.7416
- eval_samples_per_second: 59.274
- eval_steps_per_second: 7.409
- step: 0
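A minimal inference sketch (not part of the original card; the example tweet is made up and the label names depend on the fine-tuning setup):
```python
# Hedged sketch: sentiment classification with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SampsonChris/sentiment_analysis_on_covid_tweets",
)
print(classifier("Finally got my booster shot today, feeling relieved!"))
```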
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
533037e7019cad1f35a8d1a26d3e3920
|
MultiBertGunjanPatrick/multiberts-seed-0-60k
|
MultiBertGunjanPatrick
|
bert
| 7 | 3 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-0']
| false | true | true | 6,479 | false |
# MultiBERTs Seed 0 Checkpoint 60k (uncased)
Seed 0 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-60k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
d51ce7d3f89b6b5a3b6e8a3a220f2baa
|
google/t5-efficient-small-dl2
|
google
|
t5
| 12 | 10 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['en']
|
['c4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deep-narrow']
| false | true | true | 6,250 | false |
# T5-Efficient-SMALL-DL2 (Deep-Narrow version)
T5-Efficient-SMALL-DL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.
In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.
To quote the paper:
> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.
## Details model architecture
This model checkpoint - **t5-efficient-small-dl2** - is of model type **Small** with the following variations:
- **dl** is **2**
It has **43.73** million parameters and thus requires *ca.* **174.93 MB** of memory in full precision (*fp32*)
or **87.46 MB** of memory in half precision (*fp16* or *bf16*).
A summary of the *original* T5 model architectures can be seen here:
| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
whereas the following abbreviations are used:
| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |
If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
## Pre-Training
The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
## Fine-Tuning
**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
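As a minimal loading sketch (not part of the original card), the pretrained-only checkpoint can be pulled down with the standard `transformers` classes before fine-tuning:
```python
# Hedged sketch: load the pretrained-only checkpoint; it still needs task-specific fine-tuning.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-small-dl2")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small-dl2")
print(f"{model.num_parameters() / 1e6:.1f}M parameters")
```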
You can follow one of the following examples on how to fine-tune the model:
*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*Tensorflow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
## Downstream Performance
TODO: Add table if available
## Computational Complexity
TODO: Add table if available
## More information
We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
|
d027195cb1e0587808259bbb79d18099
|
anas-awadalla/roberta-base-few-shot-k-512-finetuned-squad-seed-8
|
anas-awadalla
|
roberta
| 17 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 983 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-512-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
010375c8fead0e7206b353f921037cc2
|
ShadoWxShinigamI/Midjourney-v4-PaintArt
|
ShadoWxShinigamI
| null | 18 | 154 |
diffusers
| 31 | null | false | false | false |
openrail
| null | null | null | 5 | 0 | 4 | 1 | 0 | 0 | 0 |
[]
| false | true | true | 870 | false |
## Model by ShadoWxShinigamI
Use the token `mdjrny-pntrt illustration style` at the beginning of your prompt. If some object doesn't work, provide more context in your prompt (e.g. 'ocean, ship, waves' instead of just 'ship').
Training: 2080 steps, batch size 4, 512x512, v1-5 base, 26 images
Examples:





|
35659921b45bb65a7797a9d0f35e9b38
|
MultiBertGunjanPatrick/multiberts-seed-3-60k
|
MultiBertGunjanPatrick
|
bert
| 7 | 4 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-3']
| false | true | true | 6,479 | false |
# MultiBERTs Seed 3 Checkpoint 60k (uncased)
Seed 3 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-60k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
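Since the raw model targets masked language modeling, you can also probe it with the `fill-mask` pipeline. The snippet below is a minimal sketch and assumes the checkpoint ships the masked-LM head weights; if it only contains the base encoder, the head would be randomly initialized and the predictions meaningless.
```python
from transformers import pipeline

# a minimal sketch, assuming the masked-LM head weights are present in the checkpoint
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-60k")
print(unmasker("The capital of France is [MASK]."))
```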
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
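To make the 80/10/10 rule concrete, here is a minimal, self-contained sketch of the procedure; the tiny placeholder vocabulary is illustrative only and is not the real WordPiece vocabulary.
```python
import random

# a minimal sketch of the masking rule described above (placeholder vocabulary, not WordPiece)
def mask_tokens(tokens, mask_token="[MASK]", vocab=("the", "cat", "sat", "mat"), p_mask=0.15):
    masked, labels = [], []
    for tok in tokens:
        if random.random() < p_mask:
            labels.append(tok)                        # the model must predict the original token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)             # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))   # 10%: replace with a random token
            else:
                masked.append(tok)                    # 10%: keep the token unchanged
        else:
            masked.append(tok)
            labels.append(None)                       # unmasked tokens are not scored by the MLM loss
    return masked, labels

print(mask_tokens("the cat sat on the mat".split()))
```
(The real procedure also ensures the random replacement differs from the original token.)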
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
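A rough PyTorch sketch of that optimization setup is shown below; using `AdamW` to express "Adam with weight decay" and loading the pretraining heads from this checkpoint are assumptions.
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

# a minimal sketch: Adam with weight decay, 10k warmup steps, linear decay over 2M steps
model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-60k")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```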
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
776aab8986c6354cd28606e54e90d820
|
FredZhang7/google-safesearch-mini
|
FredZhang7
|
inceptionv3
| 13 | 0 |
keras
| 1 |
image-classification
| true | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['safety-checker', 'tensorflow', 'node.js']
| false | true | true | 6,729 | false |
# Google Safesearch Mini Model Card
<a href="https://huggingface.co/FredZhang7/google-safesearch-mini-v2"> <font size="4"> <b> Version 2 is here! </b> </font> </a>
This model is trained on 2,220,000+ images scraped from Google Images, Reddit, Imgur, and Github.
The InceptionV3 and Xception models have been fine-tuned to predict the likelihood of an image falling into one of three categories: nsfw_gore, nsfw_suggestive, and safe.
After 20 epochs on PyTorch, the finetuned InceptionV3 model achieves 94% accuracy on both the training and test data. After 3.3 epochs on Keras, the finetuned Xception model scores 94% accuracy on the training set and 92% on the test set.
Not only is this model accurate, but it also offers a significant advantage over stable diffusion safety checkers. By using our model, users can save 1.12GB of RAM and disk space.
<br>
# PyTorch
The PyTorch model runs much more slowly through `transformers`, so downloading the TorchScript checkpoint directly (as below) is a better option.
```bash
pip install --upgrade torchvision
```
```python
import torch, os, warnings, requests
from io import BytesIO
from PIL import Image
from urllib.request import urlretrieve
from torchvision import transforms
PATH_TO_IMAGE = 'https://images.unsplash.com/photo-1594568284297-7c64464062b1'
USE_CUDA = False
warnings.filterwarnings("ignore")
def download_model():
print("Downloading google_safesearch_mini.bin...")
urlretrieve("https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/pytorch_model.bin", "google_safesearch_mini.bin")
def eval():
if not os.path.exists("google_safesearch_mini.bin"):
download_model()
model = torch.jit.load('./google_safesearch_mini.bin')
img = Image.open(PATH_TO_IMAGE).convert('RGB') if not (PATH_TO_IMAGE.startswith('http://') or PATH_TO_IMAGE.startswith('https://')) else Image.open(BytesIO(requests.get(PATH_TO_IMAGE).content)).convert('RGB')
transform = transforms.Compose([transforms.Resize(299), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
img = transform(img).unsqueeze(0)
if USE_CUDA:
img, model = img.cuda(), model.cuda()
else:
img, model = img.cpu(), model.cpu()
model.eval()
with torch.no_grad():
out, _ = model(img)
_, predicted = torch.max(out.data, 1)
classes = {0: 'nsfw_gore', 1: 'nsfw_suggestive', 2: 'safe'}
# account for edge cases
if predicted[0] != 2 and abs(out[0][2] - out[0][predicted[0]]) > 0.20:
print("\033[93m" + "safe" + "\033[0m")
else:
print('\n\033[1;31m' + classes[predicted.item()] + '\033[0m' if predicted.item() != 2 else '\033[1;32m' + classes[predicted.item()] + '\033[0m\n')
if __name__ == '__main__':
eval()
```
Output Example:

<br>
# Keras
```python
import tensorflow as tf
from PIL import Image
import requests, os
# download the model
url = "https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/saved_model.pb"
r = requests.get(url, allow_redirects=True)
if not os.path.exists('tensorflow'):
os.makedirs('tensorflow')
open('tensorflow/saved_model.pb', 'wb').write(r.content)
# download the variables
url = "https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.data-00000-of-00001"
r = requests.get(url, allow_redirects=True)
if not os.path.exists('tensorflow/variables'):
os.makedirs('tensorflow/variables')
open('tensorflow/variables/variables.data-00000-of-00001', 'wb').write(r.content)
url = "https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.index"
r = requests.get(url, allow_redirects=True)
open('tensorflow/variables/variables.index', 'wb').write(r.content)
# load the model
model = tf.saved_model.load('./tensorflow')
image = Image.open('cat.jpg')
image = image.resize((299, 299))
image = tf.convert_to_tensor(image)
image = tf.expand_dims(image, 0)
# run the model
tensor = model(image)
classes = ['nsfw_gore', 'nsfw_suggestive', 'safe']
prediction = classes[tf.argmax(tensor, 1)[0]]
print('\033[1;32m' + prediction + '\033[0m' if prediction == 'safe' else '\033[1;33m' + prediction + '\033[0m')
```
Output Example:

<br>
# Tensorflow.js
```bash
npm i @tensorflow/tfjs-node
```
```javascript
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');
const { pipeline } = require('stream');
const { promisify } = require('util');
const download = async (url, path) => {
// Taken from https://levelup.gitconnected.com/how-to-download-a-file-with-node-js-e2b88fe55409
const streamPipeline = promisify(pipeline);
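    // note: `fetch` is not imported above; it is available globally in Node.js 18+, otherwise install and require node-fetch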
const response = await fetch(url);
if (!response.ok) {
throw new Error(`unexpected response ${response.statusText}`);
}
await streamPipeline(response.body, fs.createWriteStream(path));
};
async function run() {
// download saved model and variables from https://huggingface.co/FredZhang7/google-safesearch-mini/tree/main/tensorflow
if (!fs.existsSync('tensorflow')) {
fs.mkdirSync('tensorflow');
await download('https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/saved_model.pb', 'tensorflow/saved_model.pb');
fs.mkdirSync('tensorflow/variables');
await download('https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.data-00000-of-00001', 'tensorflow/variables/variables.data-00000-of-00001');
await download('https://huggingface.co/FredZhang7/google-safesearch-mini/resolve/main/tensorflow/variables/variables.index', 'tensorflow/variables/variables.index');
}
// load model and image
const model = await tf.node.loadSavedModel('./tensorflow/');
const image = tf.node.decodeImage(fs.readFileSync('cat.jpg'), 3);
// predict
const input = tf.expandDims(image, 0);
const tensor = model.predict(input);
const max = tensor.argMax(1);
const classes = ['nsfw_gore', 'nsfw_suggestive', 'safe'];
console.log('\x1b[32m%s\x1b[0m', classes[max.dataSync()[0]], '\n');
}
run();
```
Output Example:

<br>
# Bias and Limitations
Each person's definition of "safe" is different. The images in the dataset are classified as safe/unsafe by Google SafeSearch, Reddit, and Imgur.
It is possible that some images may be safe to others but not to you. Also, when the model encounters an image containing content it has not seen during training, it is likely to make wrong predictions.
This is why, in the PyTorch example, I accounted for the "edge cases" before printing the predictions.
|
8d89f5f1885b80bbf46151789d4dfae6
|
vitouphy/wav2vec2-xls-r-300m-timit-phoneme
|
vitouphy
|
wav2vec2
| 9 | 113 |
transformers
| 3 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
| null | null | 1 | 0 | 1 | 0 | 4 | 1 | 3 |
['automatic-speech-recognition', 'pytorch', 'transformers', 'en', 'generated_from_trainer']
| true | true | true | 3,028 | false |
## Model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the TIMIT dataset. Check [this notebook](https://www.kaggle.com/code/vitouphy/phoneme-recognition-with-wav2vec2) for training details.
## Usage
**Approach 1:** Use HuggingFace's pipeline, which covers everything end-to-end, from raw audio input to text output.
```python
from transformers import pipeline
# Load the model
pipe = pipeline(model="vitouphy/wav2vec2-xls-r-300m-timit-phoneme")
# Process raw audio
output = pipe("audio_file.wav", chunk_length_s=10, stride_length_s=(4, 2))
```
**Approach 2:** A more hands-on way to predict phonemes, using the processor and model directly.
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
import soundfile as sf
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-timit-phoneme")
model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-timit-phoneme")
# Read and process the input
audio_input, sample_rate = sf.read("audio_file.wav")
inputs = processor(audio_input, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
# Decode id into string
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
print(predicted_sentences)
```
## Training and evaluation data
We use the [DARPA TIMIT dataset](https://www.kaggle.com/datasets/mfekadu/darpa-timit-acousticphonetic-continuous-speech) for this model.
- We split into **80/10/10** for training, validation, and testing respectively (a minimal split sketch follows this list).
- That roughly corresponds to about **137/17/17** minutes.
- The model obtained a **7.996%** error rate on this test set.
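A minimal sketch of producing such an 80/10/10 split with the `datasets` library is shown below; the local `data_dir` path and the `audiofolder` loader are assumptions, not the exact preprocessing used for this model.
```python
from datasets import load_dataset

# a minimal sketch (the data_dir path and audiofolder loading are assumptions)
timit = load_dataset("audiofolder", data_dir="./timit")["train"]
# carve off 20%, then split that part half-and-half into validation and test
splits = timit.train_test_split(test_size=0.2, seed=42)
holdout = splits["test"].train_test_split(test_size=0.5, seed=42)
train_ds, valid_ds, test_ds = splits["train"], holdout["train"], holdout["test"]
print(len(train_ds), len(valid_ds), len(test_ds))
```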
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 10000
- mixed_precision_training: Native AMP
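For reference, these settings map onto `transformers.TrainingArguments` roughly as in the sketch below; the `output_dir` value and expressing mixed precision as `fp16=True` are assumptions.
```python
from transformers import TrainingArguments

# a minimal sketch mirroring the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-timit-phoneme",  # assumption
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 8 x 4 = 32 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=10000,
    fp16=True,                       # native AMP mixed precision
)
```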
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
### Citation
```
@misc { phy22-phoneme,
author = {Phy, Vitou},
title = {{Automatic Phoneme Recognition on TIMIT Dataset with Wav2Vec 2.0}},
year = 2022,
note = {{If you use this model, please cite it using these metadata.}},
publisher = {Hugging Face},
version = {1.0},
doi = {10.57967/hf/0125},
url = {https://huggingface.co/vitouphy/wav2vec2-xls-r-300m-timit-phoneme}
}
```
|
d2ef61e8063ac127b375421f6c53acab
|
MartinoMensio/racism-models-raw-label-epoch-2
|
MartinoMensio
|
bert
| 4 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['es']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,088 | false |
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022).
We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `raw-label-epoch-2`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = 'raw-label-epoch-2'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model = model, tokenizer = tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
print(pipe(texts))
# [{'label': 'racist', 'score': 0.8982619643211365}, {'label': 'non-racist', 'score': 0.6703745126724243}]
```
For more details, see https://github.com/preyero/neatclass22
|
4e20db0069333ca8ead5b8e5bfab0b35
|