repo_id (string, 4-110 chars) | author (string, 2-27 chars, nullable) | model_type (string, 2-29 chars, nullable) | files_per_repo (int64, 2-15.4k) | downloads_30d (int64, 0-19.9M) | library (string, 2-37 chars, nullable) | likes (int64, 0-4.34k) | pipeline (string, 5-30 chars, nullable) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (string, 2-30 chars) | languages (string, 4-1.63k chars, nullable) | datasets (string, 2-2.58k chars, nullable) | co2 (string, 29 classes) | prs_count (int64, 0-125) | prs_open (int64, 0-120) | prs_merged (int64, 0-15) | prs_closed (int64, 0-28) | discussions_count (int64, 0-218) | discussions_open (int64, 0-148) | discussions_closed (int64, 0-70) | tags (string, 2-513 chars) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64, 401-598k) | is_nc (bool, 1 class) | readme (string, 0-598k chars) | hash (string, 32 chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Helsinki-NLP/opus-mt-it-ar
|
Helsinki-NLP
|
marian
| 11 | 35 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['it', 'ar']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,034 | false |
### ita-ara
* source group: Italian
* target group: Arabic
* OPUS readme: [ita-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ara/README.md)
* model: transformer
* source language(s): ita
* target language(s): ara
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.eval.txt)
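The released weights can also be used directly through the MarianMT integration in 🤗 Transformers; a minimal usage sketch (the Italian example sentence is only illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a short Italian sentence into Arabic
batch = tokenizer(["La vita è bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```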
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.ara | 21.9 | 0.517 |
### System Info:
- hf_name: ita-ara
- source_languages: ita
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ar']
- src_constituents: {'ita'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.test.txt
- src_alpha3: ita
- tgt_alpha3: ara
- short_pair: it-ar
- chrF2_score: 0.517
- bleu: 21.9
- brevity_penalty: 0.95
- ref_len: 1161.0
- src_name: Italian
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: it
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ita-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
a955cc4eae22416ff8d45b324a426d43
|
cartesinus/xlm-r-base-amazon-massive-intent-label_smoothing
|
cartesinus
|
xlm-roberta
| 11 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['en']
|
['AmazonScience/massive']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'nlu', 'intent-classification', 'text-classification']
| true | true | true | 1,639 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-base-amazon-massive-intent-label_smoothing
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [MASSIVE1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5148
- Accuracy: 0.8879
- F1: 0.8879
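For quick inference, the checkpoint can be loaded with the `text-classification` pipeline; a minimal sketch (the example utterance is illustrative, and the intent labels come from the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cartesinus/xlm-r-base-amazon-massive-intent-label_smoothing",
)
# Predict the intent of a single utterance
print(classifier("wake me up at five am this week"))
```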
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.3945 | 1.0 | 720 | 2.7175 | 0.7900 | 0.7900 |
| 2.7629 | 2.0 | 1440 | 2.5660 | 0.8549 | 0.8549 |
| 2.5143 | 3.0 | 2160 | 2.5389 | 0.8711 | 0.8711 |
| 2.4678 | 4.0 | 2880 | 2.5172 | 0.8883 | 0.8883 |
| 2.4187 | 5.0 | 3600 | 2.5148 | 0.8879 | 0.8879 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
6d959a603f0dec9ef7306cd9341f6edd
|
jerpint/whisper
|
jerpint
| null | 5 | 0 | null | 2 |
translation
| false | false | false |
mit
|
['en']
|
['whisper']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation', 'speech', 'audio', 'automatic-speech-recognition']
| false | true | true | 2,489 | false |
This model was forked from the original [OpenAI whisper model](https://github.com/openai/whisper).
# Whisper
## Model
Whisper is a multi-lingual speech-to-text model.
It takes in raw audio recordings from many languages and outputs transcriptions in the language of origin or translated to English.
The model first converts speech to spectrograms, then uses an auto-regressive transformer to decode the speech to text.
Here is an overview of the architecture:

For more information on the technical implementations, consult the [paper](https://cdn.openai.com/papers/whisper.pdf).
## Training Data
The model was trained on 680,000 hours of audio and associated transcripts collected from the internet.
The majority of the audio is in English (~65%), while the remainder is in other languages.
A total of 98 different languages are represented in the dataset.

## Model Variations
OpenAI has released 9 different versions of the model, trained either on English-only audio or on multilingual data.
| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|:------:|:----------:|:------------------:|:------------------:|:-------------:|:--------------:|
| tiny | 39 M | `tiny.en` | `tiny` | ~1 GB | ~32x |
| base | 74 M | `base.en` | `base` | ~1 GB | ~16x |
| small | 244 M | `small.en` | `small` | ~2 GB | ~6x |
| medium | 769 M | `medium.en` | `medium` | ~5 GB | ~2x |
| large | 1550 M | N/A | `large` | ~10 GB | 1x |
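A minimal transcription sketch using the upstream `openai-whisper` package (the checkpoint name and audio path below are placeholders; pick a size from the table above):

```python
import whisper

# Load one of the multilingual checkpoints listed in the table above
model = whisper.load_model("base")

# Transcribe a local audio file in its original language
result = model.transcribe("audio.mp3")
print(result["text"])
```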
## Limitations and bias
In the [paper](https://cdn.openai.com/papers/whisper.pdf), the authors find a direct correlation between performance on a given language and the amount of data available for that language in the dataset.
As such, languages that are under-represented in the scraped dataset perform less well with Whisper.
Because English is much more prevalent than other languages, the model will likely perform best on English.
This is shown in the following figure, where a lower word error rate (WER) indicates better performance:

|
d066e4db24448b306d99575a34e97e7f
|
Anery/bert-finetuned-ner
|
Anery
|
bert
| 17 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,602 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0244
- Precision: 0.7368
- Recall: 0.4
- F1: 0.5185
- Accuracy: 0.9919
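A minimal inference sketch with the `token-classification` pipeline (the example sentence is illustrative; the entity label set depends on the undocumented training data):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Anery/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```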
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 14 | 0.0598 | 0.0 | 0.0 | 0.0 | 0.9870 |
| No log | 2.0 | 28 | 0.0357 | 0.0 | 0.0 | 0.0 | 0.9894 |
| No log | 3.0 | 42 | 0.0256 | 0.75 | 0.2571 | 0.3830 | 0.9910 |
| No log | 4.0 | 56 | 0.0244 | 0.7368 | 0.4 | 0.5185 | 0.9919 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
f69413195abda97aa981bffe22bd3760
|
napatswift/bkk-ner-model
|
napatswift
|
bert
| 17 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,638 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bkk-ner-model
This model is a fine-tuned version of [Geotrend/bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0518
- Precision: 0.8850
- Recall: 0.9615
- F1: 0.9217
- Accuracy: 0.9822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
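The hyperparameters above roughly correspond to the following `TrainingArguments`; a hedged sketch, since the original training script is not part of this card and `output_dir`/`evaluation_strategy` are assumptions:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="bkk-ner-model",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",  # assumed; the table reports one eval per epoch
)
```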
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 8 | 0.5592 | 0.3698 | 0.6827 | 0.4797 | 0.7818 |
| No log | 2.0 | 16 | 0.4491 | 0.4831 | 0.8269 | 0.6099 | 0.8062 |
| No log | 3.0 | 24 | 0.3738 | 0.6226 | 0.9519 | 0.7529 | 0.8399 |
| No log | 4.0 | 32 | 0.1781 | 0.6691 | 0.8942 | 0.7654 | 0.9401 |
| No log | 5.0 | 40 | 0.2201 | 0.8095 | 0.9808 | 0.8870 | 0.9204 |
| No log | 6.0 | 48 | 0.0936 | 0.8130 | 0.9615 | 0.8811 | 0.9710 |
| No log | 7.0 | 56 | 0.0692 | 0.8197 | 0.9615 | 0.8850 | 0.9757 |
| No log | 8.0 | 64 | 0.0712 | 0.8264 | 0.9615 | 0.8889 | 0.9710 |
| No log | 9.0 | 72 | 0.0575 | 0.8621 | 0.9615 | 0.9091 | 0.9803 |
| No log | 10.0 | 80 | 0.0625 | 0.8487 | 0.9712 | 0.9058 | 0.9766 |
| No log | 11.0 | 88 | 0.0580 | 0.8584 | 0.9327 | 0.8940 | 0.9766 |
| No log | 12.0 | 96 | 0.0551 | 0.8684 | 0.9519 | 0.9083 | 0.9813 |
| No log | 13.0 | 104 | 0.0554 | 0.8761 | 0.9519 | 0.9124 | 0.9803 |
| No log | 14.0 | 112 | 0.0535 | 0.8772 | 0.9615 | 0.9174 | 0.9813 |
| No log | 15.0 | 120 | 0.0518 | 0.8850 | 0.9615 | 0.9217 | 0.9822 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
4af13549d921773bb4ed81df28a329ca
|
EvSz/PokemonDiffuser-128
|
EvSz
| null | 18 | 7 |
diffusers
| 1 | null | false | false | false |
apache-2.0
|
['en']
|
['EvSz/Pokemon-by-Name-512px']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,201 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# PokemonDiffuser-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/pokemon` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
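A plausible usage sketch, assuming the checkpoint was saved as an unconditional `DDPMPipeline` (the usual output of the 🤗 Diffusers unconditional-image-generation training script); adjust if the repository stores a different pipeline class:

```python
from diffusers import DDPMPipeline

# Load the pipeline and sample one 128x128 image
pipe = DDPMPipeline.from_pretrained("EvSz/PokemonDiffuser-128")
image = pipe().images[0]
image.save("pokemon_sample.png")
```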
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/EvSz/PokemonDiffuser-128/tensorboard?#scalars)
|
3b0bd4df4537ea34234f529611239c2c
|
AymanMansour/Whisper-Sudanese-Dialect-lsrge-v2-10K
|
AymanMansour
|
whisper
| 41 | 22 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,854 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0925
- Wer: 41.4086
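A minimal inference sketch with the `automatic-speech-recognition` pipeline (the audio path is a placeholder; long recordings may need chunking):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AymanMansour/Whisper-Sudanese-Dialect-lsrge-v2-10K",
)
print(asr("sample.wav")["text"])
```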
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.5216 | 1.04 | 1000 | 0.7054 | 58.7611 |
| 0.0872 | 3.02 | 2000 | 0.7803 | 60.1400 |
| 0.1073 | 4.06 | 3000 | 0.8312 | 61.0522 |
| 0.0617 | 6.04 | 4000 | 0.8583 | 48.2181 |
| 0.0053 | 8.02 | 5000 | 0.9135 | 41.8328 |
| 0.0049 | 9.06 | 6000 | 0.9697 | 43.3814 |
| 0.0044 | 11.04 | 7000 | 0.9863 | 41.9813 |
| 0.0006 | 13.02 | 8000 | 1.0359 | 42.7662 |
| 0.0019 | 14.06 | 9000 | 1.0714 | 41.3449 |
| 0.0007 | 16.04 | 10000 | 1.0925 | 41.4086 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
d2fc164a2763ecb4230c960539f419e2
|
JovialValley/model_broadclass_onSet0.1
|
JovialValley
|
wav2vec2
| 13 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 13,085 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_broadclass_onSet0.1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1129
- 0 Precision: 1.0
- 0 Recall: 1.0
- 0 F1-score: 1.0
- 0 Support: 31
- 1 Precision: 0.9259
- 1 Recall: 1.0
- 1 F1-score: 0.9615
- 1 Support: 25
- 2 Precision: 1.0
- 2 Recall: 0.9259
- 2 F1-score: 0.9615
- 2 Support: 27
- 3 Precision: 1.0
- 3 Recall: 1.0
- 3 F1-score: 1.0
- 3 Support: 15
- Accuracy: 0.9796
- Macro avg Precision: 0.9815
- Macro avg Recall: 0.9815
- Macro avg F1-score: 0.9808
- Macro avg Support: 98
- Weighted avg Precision: 0.9811
- Weighted avg Recall: 0.9796
- Weighted avg F1-score: 0.9796
- Weighted avg Support: 98
- Wer: 0.0859
- Mtrix: [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:---------------------------------------------------------------------------------------:|
| 2.343 | 4.16 | 100 | 2.2083 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 2.2769 | 8.33 | 200 | 2.1649 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.9687 | 12.49 | 300 | 1.8723 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.8046 | 16.65 | 400 | 1.6982 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.5645 | 20.82 | 500 | 1.5862 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.5322 | 24.98 | 600 | 1.5736 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.5468 | 29.16 | 700 | 1.4736 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 1.0542 | 33.33 | 800 | 1.0068 | 0.3163 | 1.0 | 0.4806 | 31 | 0.0 | 0.0 | 0.0 | 25 | 0.0 | 0.0 | 0.0 | 27 | 0.0 | 0.0 | 0.0 | 15 | 0.3163 | 0.0791 | 0.25 | 0.1202 | 98 | 0.1001 | 0.3163 | 0.1520 | 98 | 0.9847 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 25, 0, 0, 0], [2, 27, 0, 0, 0], [3, 15, 0, 0, 0]] |
| 0.9664 | 37.49 | 900 | 0.9831 | 0.3483 | 1.0 | 0.5167 | 31 | 1.0 | 0.12 | 0.2143 | 25 | 1.0 | 0.0370 | 0.0714 | 27 | 0.8 | 0.2667 | 0.4 | 15 | 0.3980 | 0.7871 | 0.3559 | 0.3006 | 98 | 0.7632 | 0.3980 | 0.2990 | 98 | 0.9758 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 21, 3, 0, 1], [2, 26, 0, 1, 0], [3, 11, 0, 0, 4]] |
| 0.9405 | 41.65 | 1000 | 0.9402 | 0.3827 | 1.0 | 0.5536 | 31 | 1.0 | 0.04 | 0.0769 | 25 | 1.0 | 0.4815 | 0.65 | 27 | 1.0 | 0.2 | 0.3333 | 15 | 0.4898 | 0.8457 | 0.4304 | 0.4035 | 98 | 0.8047 | 0.4898 | 0.4248 | 98 | 0.9630 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 24, 1, 0, 0], [2, 14, 0, 13, 0], [3, 12, 0, 0, 3]] |
| 0.9341 | 45.82 | 1100 | 0.9330 | 0.5082 | 1.0 | 0.6739 | 31 | 0.9231 | 0.48 | 0.6316 | 25 | 1.0 | 0.6296 | 0.7727 | 27 | 0.8571 | 0.4 | 0.5455 | 15 | 0.6735 | 0.8221 | 0.6274 | 0.6559 | 98 | 0.8029 | 0.6735 | 0.6707 | 98 | 0.9497 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 12, 12, 0, 1], [2, 9, 1, 17, 0], [3, 9, 0, 0, 6]] |
| 0.8769 | 49.98 | 1200 | 0.8662 | 0.6327 | 1.0 | 0.775 | 31 | 0.9565 | 0.88 | 0.9167 | 25 | 1.0 | 0.6296 | 0.7727 | 27 | 0.8889 | 0.5333 | 0.6667 | 15 | 0.7959 | 0.8695 | 0.7607 | 0.7828 | 98 | 0.8557 | 0.7959 | 0.7939 | 98 | 0.9442 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 2, 22, 0, 1], [2, 9, 1, 17, 0], [3, 7, 0, 0, 8]] |
| 0.8122 | 54.16 | 1300 | 0.7951 | 0.9062 | 0.9355 | 0.9206 | 31 | 0.8519 | 0.92 | 0.8846 | 25 | 1.0 | 0.8519 | 0.92 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9184 | 0.9239 | 0.9268 | 0.9232 | 98 | 0.9230 | 0.9184 | 0.9185 | 98 | 0.9348 | [[0, 1, 2, 3], [0, 29, 2, 0, 0], [1, 1, 23, 0, 1], [2, 2, 2, 23, 0], [3, 0, 0, 0, 15]] |
| 0.5747 | 58.33 | 1400 | 0.4843 | 1.0 | 1.0 | 1.0 | 31 | 0.96 | 0.96 | 0.96 | 25 | 1.0 | 0.9630 | 0.9811 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9796 | 0.9744 | 0.9807 | 0.9772 | 98 | 0.9802 | 0.9796 | 0.9797 | 98 | 0.6732 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 0, 1], [2, 0, 1, 26, 0], [3, 0, 0, 0, 15]] |
| 0.2794 | 62.49 | 1500 | 0.2062 | 1.0 | 1.0 | 1.0 | 31 | 0.96 | 0.96 | 0.96 | 25 | 1.0 | 0.9630 | 0.9811 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9796 | 0.9744 | 0.9807 | 0.9772 | 98 | 0.9802 | 0.9796 | 0.9797 | 98 | 0.2236 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 24, 0, 1], [2, 0, 1, 26, 0], [3, 0, 0, 0, 15]] |
| 0.1654 | 66.65 | 1600 | 0.1573 | 1.0 | 0.9677 | 0.9836 | 31 | 0.9259 | 1.0 | 0.9615 | 25 | 1.0 | 0.9630 | 0.9811 | 27 | 1.0 | 1.0 | 1.0 | 15 | 0.9796 | 0.9815 | 0.9827 | 0.9816 | 98 | 0.9811 | 0.9796 | 0.9798 | 98 | 0.1303 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 25, 0, 0], [2, 0, 1, 26, 0], [3, 0, 0, 0, 15]] |
| 0.1092 | 70.82 | 1700 | 0.1451 | 1.0 | 0.9677 | 0.9836 | 31 | 0.8889 | 0.96 | 0.9231 | 25 | 1.0 | 0.9259 | 0.9615 | 27 | 0.9375 | 1.0 | 0.9677 | 15 | 0.9592 | 0.9566 | 0.9634 | 0.9590 | 98 | 0.9621 | 0.9592 | 0.9597 | 98 | 0.1056 | [[0, 1, 2, 3], [0, 30, 1, 0, 0], [1, 0, 24, 0, 1], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]] |
| 0.085 | 74.98 | 1800 | 0.1126 | 1.0 | 1.0 | 1.0 | 31 | 0.9259 | 1.0 | 0.9615 | 25 | 1.0 | 0.9259 | 0.9615 | 27 | 1.0 | 1.0 | 1.0 | 15 | 0.9796 | 0.9815 | 0.9815 | 0.9808 | 98 | 0.9811 | 0.9796 | 0.9796 | 98 | 0.0938 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]] |
| 0.0824 | 79.16 | 1900 | 0.1118 | 1.0 | 1.0 | 1.0 | 31 | 0.9259 | 1.0 | 0.9615 | 25 | 1.0 | 0.9259 | 0.9615 | 27 | 1.0 | 1.0 | 1.0 | 15 | 0.9796 | 0.9815 | 0.9815 | 0.9808 | 98 | 0.9811 | 0.9796 | 0.9796 | 98 | 0.0859 | [[0, 1, 2, 3], [0, 31, 0, 0, 0], [1, 0, 25, 0, 0], [2, 0, 2, 25, 0], [3, 0, 0, 0, 15]] |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
4274e11248901d38699e72590eaf2680
|
smeoni/nbme-electra-large-generator
|
smeoni
|
electra
| 17 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,418 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-electra-large-generator
This model is a fine-tuned version of [google/electra-large-generator](https://huggingface.co/google/electra-large-generator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0122
- Accuracy: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 195 | 0.1125 | 0.9789 |
| No log | 2.0 | 390 | 0.0141 | 0.9973 |
| 0.6233 | 3.0 | 585 | 0.0122 | 0.9977 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
4f9bb7c233f4d8576a39d3000bb758fb
|
thkkvui/xlm-roberta-base-finetuned-panx-de-fr
|
thkkvui
|
xlm-roberta
| 10 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,326 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1623
- F1: 0.8602
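A lower-level inference sketch without the pipeline helper, assuming the usual token-classification head (the German example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "thkkvui/xlm-roberta-base-finetuned-panx-de-fr"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tokenizer("Jeff Dean arbeitet bei Google in Kalifornien.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-token to its predicted entity label
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```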
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2927 | 1.0 | 715 | 0.1798 | 0.8356 |
| 0.1482 | 2.0 | 1430 | 0.1573 | 0.8507 |
| 0.095 | 3.0 | 2145 | 0.1623 | 0.8602 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.13.0.dev20220711
- Datasets 2.4.0
- Tokenizers 0.12.1
|
e146ec588e2af0ef5b02f9ab36fa4b62
|
theojolliffe/bart-large-cnn-finetuned-roundup-3-4
|
theojolliffe
|
bart
| 13 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,777 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-3-4
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1949
- Rouge1: 49.6216
- Rouge2: 29.1874
- Rougel: 32.042
- Rougelsum: 46.3679
- Gen Len: 140.9688
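A minimal inference sketch with the `summarization` pipeline (the input text and generation settings are placeholders):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-finetuned-roundup-3-4")

article = "Replace this with the round-up text you want summarized."
print(summarizer(article, max_length=142, min_length=30, do_sample=False)[0]["summary_text"])
```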
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 258 | 1.2708 | 48.8914 | 29.2868 | 30.6203 | 46.2886 | 142.0 |
| 1.1751 | 2.0 | 516 | 1.1869 | 49.3567 | 28.4751 | 31.3075 | 46.3408 | 141.75 |
| 1.1751 | 3.0 | 774 | 1.1869 | 48.8335 | 28.4976 | 30.5434 | 46.2584 | 141.625 |
| 0.7391 | 4.0 | 1032 | 1.1949 | 49.6216 | 29.1874 | 32.042 | 46.3679 | 140.9688 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aff5f7e820fdc63413912de93a712aa0
|
Helsinki-NLP/opus-mt-fi-sw
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-fi-sw
* source languages: fi
* target languages: sw
* OPUS readme: [fi-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.sw | 29.9 | 0.548 |
|
1926927dbd6245f4f3dd3f9a74182ec6
|
espnet/GunnarThor_talromur_a_fastspeech2
|
espnet
| null | 22 | 24 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['talromur']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 7,774 | false |
## ESPnet2 TTS model
### `espnet/GunnarThor_talromur_a_fastspeech2`
This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 49a284e69308d81c142b89795de255b4ce290c54
pip install -e .
cd egs2/talromur/tts1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/GunnarThor_talromur_a_fastspeech2
```
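The checkpoint can also be loaded from Python with `espnet2` and `espnet_model_zoo`; a minimal sketch (the Icelandic sentence is illustrative, and a neural vocoder can be supplied via `vocoder_tag` for better quality than the default Griffin-Lim reconstruction):

```python
# pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/GunnarThor_talromur_a_fastspeech2")

# Synthesize a short Icelandic sentence and write it to a wav file
wav = tts("Halló, hvernig hefur þú það?")["wav"]
sf.write("out.wav", wav.numpy(), tts.fs)
```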
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/a/tts_train_fastspeech2_raw_phn_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - loss
  - min
- - train
  - loss
  - min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 800
batch_size: 20
valid_batch_size: null
batch_bins: 2500000
valid_batch_bins: null
train_shape_file:
- exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.phn
- exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape
valid_shape_file:
- exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.phn
- exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_a_phn/text
  - text
  - text
- - exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/train_a_phn/durations
  - durations
  - text_int
- - dump/raw/train_a_phn/wav.scp
  - speech
  - sound
valid_data_path_and_name_and_type:
- - dump/raw/dev_a_phn/text
  - text
  - text
- - exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/dev_a_phn/durations
  - durations
  - text_int
- - dump/raw/dev_a_phn/wav.scp
  - speech
  - sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
    lr: 1.0
scheduler: noamlr
scheduler_conf:
    model_size: 384
    warmup_steps: 4000
token_list:
- <blank>
- <unk>
- ','
- .
- r
- t
- n
- a0
- s
- I0
- D
- l
- m
- Y0
- v
- h
- E1
- k
- a:1
- E:1
- G
- f
- j
- T
- a1
- p
- c
- au:1
- i:1
- O:1
- I:1
- E0
- I1
- r_0
- t_h
- k_h
- Y1
- ei1
- i0
- ou:1
- ei:1
- u:1
- O1
- N
- l_0
- '91'
- ai0
- au1
- ou0
- n_0
- ei0
- ai:1
- O0
- ou1
- ai1
- i1
- '9:1'
- '90'
- au0
- x
- c_h
- 9i:1
- C
- p_h
- u0
- Y:1
- J
- 9i1
- u1
- 9i0
- N_0
- m_0
- J_0
- Yi0
- Oi1
- Yi1
- Oi0
- au:0
- '9:0'
- E:0
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
feats_extract: fbank
feats_extract_conf:
    n_fft: 1024
    hop_length: 256
    win_length: null
    fs: 22050
    fmin: 80
    fmax: 7600
    n_mels: 80
normalize: global_mvn
normalize_conf:
    stats_file: exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz
tts: fastspeech2
tts_conf:
    adim: 384
    aheads: 2
    elayers: 4
    eunits: 1536
    dlayers: 4
    dunits: 1536
    positionwise_layer_type: conv1d
    positionwise_conv_kernel_size: 3
    duration_predictor_layers: 2
    duration_predictor_chans: 256
    duration_predictor_kernel_size: 3
    postnet_layers: 5
    postnet_filts: 5
    postnet_chans: 256
    use_masking: true
    use_scaled_pos_enc: true
    encoder_normalize_before: true
    decoder_normalize_before: true
    reduction_factor: 1
    init_type: xavier_uniform
    init_enc_alpha: 1.0
    init_dec_alpha: 1.0
    transformer_enc_dropout_rate: 0.2
    transformer_enc_positional_dropout_rate: 0.2
    transformer_enc_attn_dropout_rate: 0.2
    transformer_dec_dropout_rate: 0.2
    transformer_dec_positional_dropout_rate: 0.2
    transformer_dec_attn_dropout_rate: 0.2
    pitch_predictor_layers: 5
    pitch_predictor_chans: 256
    pitch_predictor_kernel_size: 5
    pitch_predictor_dropout: 0.5
    pitch_embed_kernel_size: 1
    pitch_embed_dropout: 0.0
    stop_gradient_from_pitch_predictor: true
    energy_predictor_layers: 2
    energy_predictor_chans: 256
    energy_predictor_kernel_size: 3
    energy_predictor_dropout: 0.5
    energy_embed_kernel_size: 1
    energy_embed_dropout: 0.0
    stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
    fs: 22050
    n_fft: 1024
    hop_length: 256
    f0max: 400
    f0min: 80
    reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
    stats_file: exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
    fs: 22050
    n_fft: 1024
    hop_length: 256
    win_length: null
    reduction_factor: 1
energy_normalize: global_mvn
energy_normalize_conf:
    stats_file: exp/a/tts_train_tacotron2_raw_phn_none/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
7723afd5623652e29f2fe732cb153590
|
ChaoLi/xlm-roberta-base-finetuned-panx-de-fr
|
ChaoLi
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,315 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
f9e5a17e5a3aa6e25910b2b3ae17da5e
|
testorg2/larger_fork
|
testorg2
|
bert
| 13 | 5 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false |
apache-2.0
|
['multilingual']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,613 | false |
# sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
dbc62b5efb883f9702c1215685963b03
|
pidanr/bert-finetuned-race
|
pidanr
|
bert
| 12 | 1 |
transformers
| 0 |
multiple-choice
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,412 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-race
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3863
- Accuracy: 0.2982
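A minimal inference sketch for the multiple-choice head (the question and options are toy placeholders; RACE-style inputs pair one passage/question with each candidate answer):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

name = "pidanr/bert-finetuned-race"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMultipleChoice.from_pretrained(name)

question = "The passage suggests the author mainly wants to"
options = ["entertain readers", "sell a product", "report an accident", "teach grammar"]

# Encode each (question, option) pair and stack them into a single example
encoded = tokenizer([question] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}  # shape: (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted option:", options[logits.argmax(-1).item()])
```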
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3936 | 0.25 | 3100 | 1.3863 | 0.2418 |
| 1.3768 | 0.51 | 6200 | 1.3863 | 0.2483 |
| 1.3954 | 0.76 | 9300 | 1.3863 | 0.2982 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
b06166a54868a423b46ba4b93ccbbde9
|
jcblaise/roberta-tagalog-large
|
jcblaise
|
roberta
| 10 | 607 |
transformers
| 0 |
fill-mask
| true | true | false |
cc-by-sa-4.0
|
['tl']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta', 'tagalog', 'filipino']
| false | true | true | 1,152 | false |
# RoBERTa Tagalog Large
Tagalog RoBERTa trained as an improvement over our previous Tagalog pretrained Transformers. Trained with TLUnified, a newer, larger, more topically-varied pretraining corpus for Filipino. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community.
This model is a cased model. We do not release uncased RoBERTa models.
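A minimal usage sketch with the `fill-mask` pipeline (assuming the standard RoBERTa `<mask>` token; the Tagalog example sentence is only illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jcblaise/roberta-tagalog-large")
for prediction in fill_mask("Magandang <mask> sa inyong lahat!"):
    print(prediction["token_str"], round(prediction["score"], 4))
```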
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work:
```
@article{cruz2021improving,
title={Improving Large-scale Language Models and Resources for Filipino},
author={Jan Christian Blaise Cruz and Charibeth Cheng},
journal={arXiv preprint arXiv:2111.06053},
year={2021}
}
```
## Data and Other Resources
Data used to train this model, as well as other benchmark datasets in Filipino, can be found on my website at https://blaisecruz.com
## Contact
If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at me@blaisecruz.com
|
028f75600bbaf8862d953b2eee199b21
|
IDEA-CCNL/Yuyuan-GPT2-3.5B
|
IDEA-CCNL
|
gpt2
| 10 | 54 |
transformers
| 2 |
text-generation
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,891 | false |
# Yuyuan-GPT2-3.5B
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
目前最大的,医疗领域的生成语言模型GPT2。
Currently the largest generative language model (GPT2) in the medical domain.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 领域 Domain | 余元 Yuyuan | GPT2 | 3.5B | - |
## 模型信息 Model Information
我们采用与Wenzhong-GPT2-3.5B相同的架构,在50GB的医学(PubMed)语料库上进行预训练。我们使用了32个NVIDIA A100显卡大约7天。我们的Yuyuan-GPT2-3.5B是医疗领域最大的开源的GPT2模型。进一步地,模型可以通过计算困惑度(PPL)来判断事实。为了完成问答功能,我们将短语模式从疑问的形式转换为了陈述句。
We adopt the same architecture as Wenzhong-GPT2-3.5B and pre-train it on a 50GB medical (PubMed) corpus, using 32 NVIDIA A100 GPUs for about 7 days. Our Yuyuan-GPT2-3.5B is the largest open-source GPT2 model in the medical domain. The model can further be used to judge facts by computing perplexity (PPL). To support question answering, we transform the phrase pattern from interrogative to declarative.
## 使用 Usage
### 加载模型 Loading Models
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### 使用示例 Usage Examples
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Yuyuan-GPT2-3.5B')
generator("Diabetics should not eat", max_length=30, num_return_sequences=1)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
864e2bae1a402fbe6b51b0f3d457d2f8
|
muhtasham/tiny-vanilla-target-glue-mnli
|
muhtasham
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,515 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-vanilla-target-glue-mnli
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8100
- Accuracy: 0.6375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0866 | 0.04 | 500 | 1.0515 | 0.4557 |
| 1.0101 | 0.08 | 1000 | 0.9526 | 0.5612 |
| 0.9599 | 0.12 | 1500 | 0.9195 | 0.5802 |
| 0.9378 | 0.16 | 2000 | 0.9018 | 0.5930 |
| 0.9229 | 0.2 | 2500 | 0.8904 | 0.5954 |
| 0.9182 | 0.24 | 3000 | 0.8802 | 0.6033 |
| 0.9019 | 0.29 | 3500 | 0.8738 | 0.6070 |
| 0.8971 | 0.33 | 4000 | 0.8613 | 0.6154 |
| 0.8788 | 0.37 | 4500 | 0.8593 | 0.6172 |
| 0.8856 | 0.41 | 5000 | 0.8508 | 0.6194 |
| 0.8751 | 0.45 | 5500 | 0.8404 | 0.6256 |
| 0.8718 | 0.49 | 6000 | 0.8445 | 0.6248 |
| 0.8739 | 0.53 | 6500 | 0.8333 | 0.6306 |
| 0.8653 | 0.57 | 7000 | 0.8363 | 0.6280 |
| 0.8588 | 0.61 | 7500 | 0.8213 | 0.6376 |
| 0.8587 | 0.65 | 8000 | 0.8215 | 0.6360 |
| 0.8544 | 0.69 | 8500 | 0.8268 | 0.6292 |
| 0.8556 | 0.73 | 9000 | 0.8045 | 0.6463 |
| 0.8445 | 0.77 | 9500 | 0.8187 | 0.6328 |
| 0.836 | 0.81 | 10000 | 0.8021 | 0.6446 |
| 0.8399 | 0.86 | 10500 | 0.8100 | 0.6375 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
4c861b68514674884df19079004615a4
|
BlackKakapo/t5-base-paraphrase-ro-v2
|
BlackKakapo
|
t5
| 8 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
['apache-2.0']
|
['ro']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,987 | false |
# Romanian paraphrase

A fine-tuned version of the t5-base-paraphrase-ro model for paraphrasing. Since there was no Romanian paraphrasing dataset, I created my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v2), which contains ~30k examples.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
```
### Or
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast
model = T5ForConditionalGeneration.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
tokenizer = T5TokenizerFast.from_pretrained("BlackKakapo/t5-base-paraphrase-ro-v2")
```
### Generate
```python
text = "Într-un interviu pentru Radio Europa Liberă România, acesta a menționat că Bucureștiul este pregătit oricând și ar dura doar o oră de la solicitare, până când gazele ar ajunge la Chișinău."
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]
beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    max_length=256,
    top_k=20,
    top_p=0.9,
    early_stopping=False,
    num_return_sequences=5
)
final_outputs = []
for beam_output in beam_outputs:
    text_para = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    # Keep paraphrases that differ from the input and have not been seen yet
    if text_para.lower() != text.lower() and text_para not in final_outputs:
        final_outputs.append(text_para)
print(final_outputs)
```
### Output
```out
['Într-un interviu cu Radio Europa Liberă România, el a spus că Bucureștiul este pregătit în orice moment și ar dura doar o oră de la cererea până când gazele ar ajunge la Chișinău.']
```
|
37d1c681f4f33dd9895a65ba57c395ed
|
ix502iv/wa2vec2-large-xls-r-colab_turkish
|
ix502iv
|
wav2vec2
| 12 | 25 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,784 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wa2vec2-large-xls-r-colab_turkish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3941
- Wer: 0.3812
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0265 | 3.67 | 400 | 0.7368 | 0.8192 |
| 0.4253 | 7.34 | 800 | 0.4467 | 0.5111 |
| 0.1902 | 11.01 | 1200 | 0.4423 | 0.4723 |
| 0.1293 | 14.68 | 1600 | 0.3854 | 0.4216 |
| 0.0989 | 18.35 | 2000 | 0.3997 | 0.4197 |
| 0.0745 | 22.02 | 2400 | 0.4133 | 0.4182 |
| 0.0598 | 25.69 | 2800 | 0.3962 | 0.3925 |
| 0.0488 | 29.36 | 3200 | 0.3941 | 0.3812 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
5e9db94a8ff8a95838e6deda6a985a7c
|
sanamoin/wav2vec2-base-timit-demo-google-colab
|
sanamoin
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,021 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
6bbbd2d3ace5de75d9922b538b1599a6
|
cammy/bart-large-cnn-finetuned-weaksup-1000-pad-early-new1
|
cammy
|
bart
| 11 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,575 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-weaksup-1000-pad-early-new1
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4948
- Rouge1: 28.1465
- Rouge2: 13.4076
- Rougel: 22.2763
- Rougelsum: 25.2087
- Gen Len: 68.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.156 | 1.0 | 1000 | 0.4377 | 27.8782 | 13.1274 | 21.2329 | 24.6465 | 66.25 |
| 0.0843 | 2.0 | 2000 | 0.4948 | 28.1465 | 13.4076 | 22.2763 | 25.2087 | 68.58 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
7c4c05d74542b6c59d40c781fd6803aa
|
KoichiYasuoka/roberta-large-korean-ud-goeswith
|
KoichiYasuoka
|
roberta
| 10 | 8 |
transformers
| 1 |
token-classification
| true | false | false |
cc-by-sa-4.0
|
['ko']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['korean', 'token-classification', 'pos', 'dependency-parsing']
| false | true | true | 2,722 | false |
# roberta-large-korean-ud-goeswith
## Model Description
This is a RoBERTa model pre-trained on Korean texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-large-korean-hanja](https://huggingface.co/KoichiYasuoka/roberta-large-korean-hanja).
## How to Use
```py
class UDgoeswith(object):
def __init__(self,bert):
from transformers import AutoTokenizer,AutoModelForTokenClassification
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForTokenClassification.from_pretrained(bert)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=self.tokenizer(text,return_offsets_mapping=True)
v=w["input_ids"]
x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)]
with torch.no_grad():
e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:]
r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())]
e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan)
g=self.model.config.label2id["X|_|goeswith"]
r=numpy.tri(e.shape[0])
for i in range(e.shape[0]):
for j in range(i+2,e.shape[1]):
r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1
e[:,:,g]+=numpy.where(r==0,0,numpy.nan)
m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan)
m[1:,1:]=numpy.nanmax(e,axis=2).transpose()
p=numpy.zeros(m.shape)
p[1:,1:]=numpy.nanargmax(e,axis=2).transpose()
for i in range(1,m.shape[0]):
m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan)
m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text+"\n"
v=[(s,e) for s,e in w["offset_mapping"] if s<e]
for i,(s,e) in enumerate(v,1):
q=self.model.config.id2label[p[i,h[i]]].split("|")
u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=UDgoeswith("KoichiYasuoka/roberta-large-korean-ud-goeswith")
print(nlp("홍시 맛이 나서 홍시라 생각한다."))
```
with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/).
Or without ufal.chu-liu-edmonds:
```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-large-korean-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("홍시 맛이 나서 홍시라 생각한다."))
```
|
bb016840147e49a875d083cd8709533d
|
birgermoell/wav2vec2-common_voice-tr-demo
|
birgermoell
|
wav2vec2
| 15 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv-SE']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer']
| true | true | true | 2,522 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5528
- Wer: 0.3811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.74 | 100 | 3.4444 | 1.0 |
| No log | 1.47 | 200 | 2.9421 | 1.0 |
| No log | 2.21 | 300 | 2.2802 | 1.0137 |
| No log | 2.94 | 400 | 0.9683 | 0.7611 |
| 3.7264 | 3.68 | 500 | 0.7941 | 0.6594 |
| 3.7264 | 4.41 | 600 | 0.6695 | 0.5751 |
| 3.7264 | 5.15 | 700 | 0.6507 | 0.5314 |
| 3.7264 | 5.88 | 800 | 0.5731 | 0.4927 |
| 3.7264 | 6.62 | 900 | 0.5723 | 0.4580 |
| 0.4592 | 7.35 | 1000 | 0.5913 | 0.4479 |
| 0.4592 | 8.09 | 1100 | 0.5562 | 0.4423 |
| 0.4592 | 8.82 | 1200 | 0.5566 | 0.4292 |
| 0.4592 | 9.56 | 1300 | 0.5492 | 0.4303 |
| 0.4592 | 10.29 | 1400 | 0.5665 | 0.4331 |
| 0.2121 | 11.03 | 1500 | 0.5610 | 0.4084 |
| 0.2121 | 11.76 | 1600 | 0.5703 | 0.4014 |
| 0.2121 | 12.5 | 1700 | 0.5669 | 0.3898 |
| 0.2121 | 13.24 | 1800 | 0.5586 | 0.3962 |
| 0.2121 | 13.97 | 1900 | 0.5656 | 0.3897 |
| 0.1326 | 14.71 | 2000 | 0.5565 | 0.3813 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
9e0f37c6898c02c1d9ef79532020a84c
|
pinot/wav2vec2-base-timit-demo-colab
|
pinot
|
wav2vec2
| 12 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,641 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4548
- Wer: 0.3373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3291 | 4.0 | 500 | 1.0403 | 0.7174 |
| 0.5336 | 8.0 | 1000 | 0.4744 | 0.4489 |
| 0.2155 | 12.0 | 1500 | 0.4476 | 0.3832 |
| 0.1256 | 16.0 | 2000 | 0.4358 | 0.3639 |
| 0.0867 | 20.0 | 2500 | 0.4634 | 0.3527 |
| 0.0608 | 24.0 | 3000 | 0.4784 | 0.3466 |
| 0.0476 | 28.0 | 3500 | 0.4548 | 0.3373 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
e68fc0967fc17c1a5aab948986bc0b7e
|
jo-kwsm/xlm-roberta-base-finetuned-panx-de
|
jo-kwsm
|
xlm-roberta
| 16 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1408
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2626 | 1.0 | 525 | 0.1807 | 0.8067 |
| 0.1307 | 2.0 | 1050 | 0.1388 | 0.8526 |
| 0.0829 | 3.0 | 1575 | 0.1408 | 0.8646 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
18bff117b5d84ca0dccc9dd21c02231f
|
MultiBertGunjanPatrick/multiberts-seed-4-1900k
|
MultiBertGunjanPatrick
|
bert
| 7 | 4 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-4']
| false | true | true | 6,487 | false |
# MultiBERTs Seed 4 Checkpoint 1900k (uncased)
This is the seed-4 intermediate checkpoint at 1900k steps of the MultiBERTs (pretrained BERT) models, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1900k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-4-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
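An illustrative re-implementation of this masking policy (not the original pretraining code; the toy vocabulary for the random-replacement case is an assumption):
```python
import random

def mask_for_mlm(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Illustrative MLM masking: 15% of tokens are selected; of those,
    80% become [MASK], 10% become a random token, 10% stay unchanged."""
    masked = []
    for tok in tokens:
        if random.random() < mlm_prob:
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)
            elif r < 0.9:
                masked.append(random.choice(vocab))
            else:
                masked.append(tok)
        else:
            masked.append(tok)
    return masked

print(mask_for_mlm("the quick brown fox jumps over the lazy dog".split(),
                   vocab=["cat", "tree", "runs"]))
```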
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
96950581ba05cf9577ab9502817298d8
|
shimdx/wav2vec2-base-demo-sagemaker
|
shimdx
|
wav2vec2
| 11 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,633 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-sagemaker
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4713
- Wer: 0.3381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4274 | 4.0 | 500 | 1.2279 | 0.8902 |
| 0.5778 | 8.0 | 1000 | 0.4838 | 0.4488 |
| 0.2244 | 12.0 | 1500 | 0.4813 | 0.3793 |
| 0.1299 | 16.0 | 2000 | 0.4878 | 0.3714 |
| 0.0871 | 20.0 | 2500 | 0.4796 | 0.3539 |
| 0.0635 | 24.0 | 3000 | 0.4554 | 0.3427 |
| 0.0495 | 28.0 | 3500 | 0.4713 | 0.3381 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
463db7723dc1676d108ff4ea64e927d4
|
nickprock/bert-finetuned-ner-ontonotes
|
nickprock
|
bert
| 12 | 17 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['ontonotes5']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,813 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-ontonotes
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the ontonotes5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1503
- Precision: 0.8567
- Recall: 0.8842
- F1: 0.8702
- Accuracy: 0.9755
## Model description
More information needed
## Intended uses & limitations
More information needed
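A minimal inference sketch with the `transformers` token-classification pipeline (the aggregation strategy and example sentence are illustrative assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nickprock/bert-finetuned-ner-ontonotes",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Barack Obama visited Rome in 2019."))
```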
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0842 | 1.0 | 7491 | 0.0950 | 0.8524 | 0.8715 | 0.8618 | 0.9745 |
| 0.0523 | 2.0 | 14982 | 0.1044 | 0.8449 | 0.8827 | 0.8634 | 0.9744 |
| 0.036 | 3.0 | 22473 | 0.1118 | 0.8529 | 0.8843 | 0.8683 | 0.9760 |
| 0.0231 | 4.0 | 29964 | 0.1240 | 0.8589 | 0.8805 | 0.8696 | 0.9752 |
| 0.0118 | 5.0 | 37455 | 0.1416 | 0.8570 | 0.8804 | 0.8685 | 0.9753 |
| 0.0077 | 6.0 | 44946 | 0.1503 | 0.8567 | 0.8842 | 0.8702 | 0.9755 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
b26ee5f9a4e771b2f9bdd1b601782d2c
|
tftgregrge/mpid-bkdbj
|
tftgregrge
| null | 18 | 7 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 422 | false |
### mpid-bkdbj Dreambooth model trained by tftgregrge with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
1ec17798b50ba763b86504dee863818c
|
mrm8488/convbert-base-spanish
|
mrm8488
|
convbert
| 9 | 13 |
transformers
| 1 |
feature-extraction
| true | true | false |
mit
|
['es']
|
['large_spanish_corpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 918 | false |
# ConvBERT base pre-trained on large_spanish_corpus
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
## Metrics on evaluation set
```
disc_accuracy = 0.9488542
disc_auc = 0.8833056
disc_loss = 0.15933733
disc_precision = 0.79224133
disc_recall = 0.27443287
global_step = 1000000
loss = 9.658503
masked_lm_accuracy = 0.6177698
masked_lm_loss = 1.7050561
sampled_masked_lm_accuracy = 0.5379228
```
## Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "mrm8488/convbert-base-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
4f5671ffc8175e2f5a5abfea0ee6ca2c
|
anuragshas/whisper-small-mr
|
anuragshas
|
whisper
| 17 | 9 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mr']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 459 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Marathi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 mr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 19.71
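A minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder; input should be 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anuragshas/whisper-small-mr")
print(asr("sample_marathi.wav")["text"])  # placeholder path to a local audio file
```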
|
add722c0967c0ac6d976d81b21ef5019
|
jjglilleberg/xlm-roberta-base-finetuned-panx-de
|
jjglilleberg
|
xlm-roberta
| 11 | 4 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,418 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 786 | 0.1926 | 0.8138 |
| No log | 2.0 | 1572 | 0.1580 | 0.8493 |
| No log | 3.0 | 2358 | 0.1518 | 0.8616 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ef68d9fff73a564a3777dd6f405671e8
|
yanaiela/roberta-base-epoch_5
|
yanaiela
|
roberta
| 9 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_5']
| false | true | true | 2,100 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 5
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before training)
so that the training dynamics of such models, and other possible use-cases, can be studied.
These models were trained as part of a work that studies how simple statistics of the data,
such as co-occurrences, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_5.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, which are publicly available corpora.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_5', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
05d0b9edcd8c0e5461d373a738d7248b
|
gojiteji/text2QR
|
gojiteji
| null | 4 | 0 | null | 0 | null | false | false | false |
odbl
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 649 | false |
This is a diffusion model fine-tuned with [QRsst2](https://huggingface.co/datasets/gojiteji/QRsst2).
This model generates a QR code from text.
Please clone this repository and, in [LambdaLabsML's example inference code](https://github.com/LambdaLabsML/examples/blob/767e1101b0125202871812ec7e1b5c46aa9c8d95/stable-diffusion-finetuning/pokemon_finetune.ipynb), replace the checkpoint filename with `main.ckpt`.
The image below is an example generated from the input: `The way to get started is to quit talking and begin doing.`

Sample code is available here: https://github.com/gojiteji/text2QR/blob/main/samplecode.ipynb
|
af3a6a7c847216cc5e1f0dfc491a3638
|
Helsinki-NLP/opus-mt-de-lt
|
Helsinki-NLP
|
marian
| 10 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-de-lt
* source languages: de
* target languages: lt
* OPUS readme: [de-lt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-lt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-lt/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.lt | 37.9 | 0.633 |
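## How to use
A minimal translation sketch with the Marian classes in `transformers` (the example sentence is arbitrary):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-lt"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Guten Morgen!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```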
|
f3b94b7c54986c506cf08cd2ccf46e26
|
jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-8_sixties-2_s287
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 475 | false |
# exp_w2v2r_es_xls-r_age_teens-8_sixties-2_s287
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
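A minimal transcription sketch with HuggingSound (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-8_sixties-2_s287")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```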
|
9e4fa07b9e92cae18fe6ebb83b38f6b7
|
D3xter1922/electra-base-discriminator-finetuned-cola
|
D3xter1922
|
electra
| 13 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,595 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-cola
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6367
- Matthews Correlation: 0.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
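As an illustrative sketch, the checkpoint can be queried for linguistic acceptability with the standard text-classification pipeline (the label names depend on the checkpoint's config):
```python
from transformers import pipeline

cola = pipeline("text-classification", model="D3xter1922/electra-base-discriminator-finetuned-cola")
print(cola("The book was written by the author."))  # arbitrary example sentence
```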
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4139 | 1.0 | 535 | 0.4137 | 0.6381 |
| 0.2452 | 2.0 | 1070 | 0.4887 | 0.6504 |
| 0.17 | 3.0 | 1605 | 0.5335 | 0.6757 |
| 0.1135 | 4.0 | 2140 | 0.6367 | 0.6824 |
| 0.0817 | 5.0 | 2675 | 0.6742 | 0.6755 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
e4458928f35e11e6c14619cc18fe8be9
|
pyf98/chime4_e_branchformer_e10
|
pyf98
| null | 33 | 5 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['chime4']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 8,780 | false |
## ESPnet2 ASR model
### `pyf98/chime4_e_branchformer_e10`
This model was trained by Yifan Peng using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout ad91279f0108d54bd22abe29671b376f048822c5
pip install -e .
cd egs2/chime4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/chime4_e_branchformer_e10
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Dec 28 15:49:24 EST 2022`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1`
- Git hash: `f9a8009aef6ff9ba192a78c19b619ae4a9f3b9d2`
- Commit date: `Wed Dec 28 00:30:54 2022 -0500`
## asr_train_asr_e_branchformer_e10_mlp1024_linear1024_macaron_lr1e-3_warmup25k_raw_en_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/dt05_real_beamformit_5mics|1640|27119|93.7|5.0|1.2|0.6|6.8|52.5|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/dt05_simu_beamformit_5mics|1640|27120|92.4|6.1|1.6|0.7|8.4|58.2|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/et05_real_beamformit_5mics|1320|21409|90.2|8.0|1.8|1.0|10.8|60.2|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/et05_simu_beamformit_5mics|1320|21416|88.4|9.3|2.4|1.4|13.0|66.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/dt05_real_beamformit_5mics|1640|160390|97.4|1.3|1.3|0.7|3.3|52.5|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/dt05_simu_beamformit_5mics|1640|160400|96.6|1.8|1.7|0.9|4.3|58.2|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/et05_real_beamformit_5mics|1320|126796|95.7|2.3|2.0|1.1|5.4|60.2|
|decode_asr_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/et05_simu_beamformit_5mics|1320|126812|94.4|2.8|2.8|1.5|7.2|66.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_e_branchformer_e10_mlp1024_linear1024_macaron_lr1e-3_warmup25k.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_e_branchformer_e10_mlp1024_linear1024_macaron_lr1e-3_warmup25k_raw_en_char_sp
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 33561
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 15000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_multi_noisy_si284_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/tr05_multi_noisy_si284_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dt05_multi_isolated_1ch_track/wav.scp
- speech
- kaldi_ark
- - dump/raw/dt05_multi_isolated_1ch_track/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- N
- I
- O
- S
- R
- H
- L
- D
- C
- U
- M
- P
- F
- G
- Y
- W
- B
- V
- K
- .
- X
- ''''
- J
- Q
- Z
- ','
- '-'
- '"'
- <NOISE>
- '*'
- ':'
- (
- )
- '?'
- '&'
- ;
- '!'
- /
- '{'
- '}'
- '1'
- '2'
- '0'
- $
- '8'
- '9'
- '6'
- '3'
- '5'
- '7'
- '4'
- '~'
- '`'
- _
- <*IN*>
- <*MR.*>
- \
- ^
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: data/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 1024
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 10
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
layer_drop_rate: 0.0
linear_units: 1024
positionwise_layer_type: linear
use_ffn: true
macaron_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ecafb659f3e774136858327209a63a7a
|
charlemagne/distilbert-base-uncased-training-cola
|
charlemagne
|
distilbert
| 30 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,569 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-training-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2215
- Matthews Correlation: 0.8777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 113 | 0.2954 | 0.7090 |
| No log | 2.0 | 226 | 0.2212 | 0.8232 |
| No log | 3.0 | 339 | 0.1899 | 0.8671 |
| No log | 4.0 | 452 | 0.2006 | 0.8672 |
| 0.19 | 5.0 | 565 | 0.2215 | 0.8777 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0+cu111
- Datasets 2.1.0
- Tokenizers 0.11.6
|
9e60e7a8909e99a2b7a70b421c3c4441
|
juierror/wav2vec2-large-xls-r-thai-test
|
juierror
|
wav2vec2
| 17 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,287 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
848947838da327187eab94513c9cb871
|
inkoziev/rugpt_interpreter
|
inkoziev
|
gpt2
| 12 | 14 |
transformers
| 5 |
text-generation
| true | false | false |
unlicense
|
['ru']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PyTorch', 'Transformers', 'gpt2']
| false | true | true | 9,473 | false |
## The Incomplete Utterance Restoration Task
A generative model based on [sberbank-ai/rugpt3large_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2) for restoring the full text of dialogue utterances from context.
Suppose the last 2 lines of a dialogue look like this:
```
- Как тебя зовут?
- Джульетта Мао
```
The model recovers the full text of the last utterance, with anaphora, ellipses, etc. resolved:
```
Меня зовут Джульетта Мао
```
The expanded utterance can then be processed with many classic NLP tools,
including regular expressions, intent classifiers, and so on.
For more details on which cases the model handles and how, see the [end of this page](#handled-cases) and [this document](https://huggingface.co/inkoziev/rugpt_interpreter/blob/main/%D0%92%D0%BE%D1%81%D1%81%D1%82%D0%B0%D0%BD%D0%BE%D0%B2%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5%20%D0%BF%D0%BE%D0%BB%D0%BD%D1%8B%D1%85%20%D1%80%D0%B5%D0%BF%D0%BB%D0%B8%D0%BA%20%D0%B2%20%D0%B4%D0%B8%D0%B0%D0%BB%D0%BE%D0%B3%D0%B5.pdf).
## Usage example
This model runs in a prototype [dialogue system](https://github.com/Koziev/chatbot). It needs no extra scaffolding, pre- or post-processing beyond what is standard for GPT-family models,
so it is very easy to use:
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_interpreter"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
# The model input is the last 2-3 dialogue utterances. Each utterance goes on its own line and starts with "-"
# A "#" symbol is appended at the end
input_text = """<s>- Как тебя зовут?
- Джульетта Мао #"""
#input_text = """<s>- Что Предтечи забрали у Предшественников?
#- Они узурпировали у них Мантию — защиту всего живого в галактике #"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(text)
```
## Input format
The model input is the tokenization of a text composed of the last 2 or 3 dialogue utterances.
The first token must be ```<s>```.
Each utterance must start with the prefix "- ".
Utterances are separated by a newline character.
The substring " #" is appended to the last utterance, i.e. the one to be expanded.
```
<s>- Как тебя зовут?
- Джульетта Мао #
```
## Handled cases
The model is developed with the [chatbot](https://github.com/Koziev/chatbot) in mind. It supports a number of
typical chit-chat situations, which are listed below.
In the examples, the text after the ⇒ symbol is the reference expanded utterance that the model is expected to generate.
[Ellipses](https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%BB%D0%B8%D0%BF%D1%81%D0%B8%D1%81):
```
- Как же тебя зовут, а?
- Меня – Стас, а тебя? ⇒ Меня зовут Стас. Как тебя зовут?
```
In rare cases even the head word of a phrase can be omitted; the model will try to restore it:
```
- Мама, купи мне собаку.
- А ты будешь за ней ухаживать?
- А ты мне здоровую купи. ⇒ купи мне здоровую собаку
```
[Anaphora](https://ru.wikipedia.org/wiki/%D0%90%D0%BD%D0%B0%D1%84%D0%BE%D1%80%D0%B0_(%D0%BB%D0%B8%D0%BD%D0%B3%D0%B2%D0%B8%D1%81%D1%82%D0%B8%D0%BA%D0%B0)):
```
- Ты собак любишь?
- Не люблю я их ⇒ я не люблю собак
```
Sometimes expanding the full utterance requires common-sense reasoning; for this the model relies
on the statistics of its pretraining data:
```
- Мне на голову упала коробка.
- А что в ней было? ⇒ что было в коробке|голове?
```
[Gapping](https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%BB%D0%B8%D0%BF%D1%81%D0%B8%D1%81#%D0%93%D1%8D%D0%BF%D0%BF%D0%B8%D0%BD%D0%B3_(en:Gapping)):
```
- Ты кошек любишь?
- Их – нет ⇒ я не люблю кошек
```
Complex gapping:
```
- В 25 лет вы получаете пенсию?
- Не я - отец. ⇒ Я не получаю пенсию. Отец получает пенсию
```
Restoring an optional pronominal subject (see [pro drop](https://en.wikipedia.org/wiki/Pro-drop_language)):
```
- Согласна?
- Да ⇒ я согласна
```
The model tries to "read between the lines" and restore the implied parts of an utterance:
```
- Ты разве ещё не ел?
- Тебя ждал ⇒ я еще не ел. я ждал тебя.
```
Negations in dialogue:
```
- Я не прав?
- Нет. (Да.) ⇒ ты не прав
```
Interpretation is not just copying words from the context; sometimes the model has to add words associated with the situation:
```
- Как прошли выходные?
- В Простоквашино ездила... ⇒ я на выходных ездила в Простоквашино
```
All of the above can occur simultaneously in various combinations:
```
- Где твой кот?
- Жена к ветеринару повезла. ⇒ жена повезла моего кота к ветеринару
- Заболел? ⇒ твой кот заболел?
```
Complex sentences:
```
- Я сварила суп, иди ешь.
- Из чего? ⇒ из чего ты сварила суп?
```
The subject is replaced when this makes the utterance easier to understand:
```
- Как себя чувствует твой попугай?
- Бедняга умер... ⇒ мой попугай умер
```
Sometimes only an adverb is left of the utterance, and the model restores everything else:
```
- Девушка, а Вы животных любите?
- Очень! ⇒ я очень люблю животных
```
The form of the predicate may sometimes change for agreement reasons:
```
- Рабинович, как думаете, что будет делать правительство, если завтра население разом бросит курить?
- Таки, поднимут акцизы на алкоголь... ⇒ правительно поднимет акцизы на алкоголь, если завтра население разом бросит курить
```
In all cases the model gives no indication of where it took the substitution
used to replace or fill material in the output text. The output is simply the utterance text
in the form a person could have said it, without any additional references or markers:
```
- У тебя брат есть?
- Да, есть
- Где он работает? ⇒ Где работает твой брат?
```
In this example the model does not tell us in any way where it took the substitution "твой брат" ("your brother") for the pronoun "он" ("he").
This greatly simplifies manual annotation of the training corpus and does not really get in the way of the dialogue system.
In many cases the model normalizes the word order to a more or less canonical one. More precisely, it tries
to output text with the word order that native speakers usually use in the given dialogue context.
If Russian speakers prefer OVS over the formal SVO, the model will output OVS:
```
- У тебя штрафы были?
- Нет, их никогда не было ⇒ у меня никогда не было штрафов
```
The model usually inserts personal pronouns even when the verb form would allow them to be dropped:
```
- Жару любишь?
- Ненавижу ее ⇒ я ненавижу жару
```
Compound answers are split into separate clauses so that the downstream pipeline can process them one by one:
```
- Тебя как зовут?
- Кортана, а тебя как? ⇒ Меня зовут Кортана. Как тебя зовут?
```
The last 2 or 3 utterances can be given as context. Longer-range relations are too rare to justify complicating the dataset for them.
Moreover, in many cases it is enough to apply the model recursively: instead of the original dialogue utterances,
feed it the model's own expansions of them:
```
- Где живешь?
- В Шанхае ⇒ я живу в Шанхае
- Давно? ⇒ ты давно живешь в Шанхае?
- Два года уже ⇒ я уже два года живу в Шанхае
- Как там погода? ⇒ как там погода в Шанхае?
```
One last thing worth noting: the model is trained **only** on dialogue data with short utterances (chit-chat).
It is practically unable to resolve anaphora in literary texts, although this is not a limitation of the model itself
but a property of the training dataset.
### Example in a chat
The first column contains the utterances of a [chit-chat model](https://huggingface.co/inkoziev/rugpt_chitchat) talking to itself; the second column is the result of expanding those utterances with the interpreter model:
```
- Добрый вечер, бро! |
- Чё, будем общаться? | Мы будем общаться?
- Ага, а как же | Мы будем общаться
- О чем хочешь говорить? | О чем ты хочешь говорить?
- Давай о чем-нибудь хорошем | Я хочу говорить о чем-нибудь хорошем
- Мне нравится обсуждать компьютерные игры | Мне нравится обсуждать компьютерные игры
- О, компьютерные игры меня тоже интересуют | Меня тоже интересуют компьютерные игры
- Ты играл в Minecraft? | Ты играл в Minecraft?
- Неа, но хотел бы | Я не играл в игру Minecraft. Я хочу поиграть в игру Minecraft.
```
### Dataset
Training data without augmentation: [inkoziev/incomplete_utterance_restoration](https://huggingface.co/datasets/inkoziev/incomplete_utterance_restoration).
### Contacts
If you have any questions about using this model, or suggestions for improving it, write to me at mentalcomputing@gmail.com
### Citation:
```
@MISC{rugpt_interpreter,
author = {Ilya Koziev},
title = {Incomplete Utterance Restoration in Russian Chit-Chat conversations},
url = {https://huggingface.co/inkoziev/rugpt_interpreter},
year = 2022
}
```
|
061c55590bf907c008782038f90d7695
|
Xiaoman/NER-CoNLL2003-V3
|
Xiaoman
|
bert
| 14 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 934 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-CoNLL2003-V3
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
1971fa6ba4e28350f7b3d0ba800babf8
|
spooncats/scottpilgrim
|
spooncats
| null | 19 | 5 |
diffusers
| 4 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 738 | false |
### scottpilgrim Dreambooth model trained by spooncats with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:

|
ea8399f9163b51eede2f693b0320c81e
|
espnet/kan-bayashi_jsut_vits_accent_with_pause
|
espnet
| null | 27 | 98 |
espnet
| 2 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['ja']
|
['jsut']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,810 | false |
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5414980/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
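Until the official snippet is published, here is a minimal sketch using ESPnet2's `Text2Speech` interface (requires `espnet` and `espnet_model_zoo`; the input sentence is arbitrary):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_vits_accent_with_pause")
output = text2speech("こんにちは、世界。")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```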
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
f564a94c75aab3a1c434ae83da698461
|
malteos/aspect-acl-scibert-scivocab-uncased
|
malteos
|
bert
| 5 | 9 |
transformers
| 1 | null | true | false | false |
mit
|
['sci', 'en', 'multilingual']
|
['acl-arc']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['classification', 'similarity']
| false | true | true | 1,133 | false |
# Aspect-based Document Similarity for Research Papers
A `scibert-scivocab-uncased` model fine-tuned on the ACL Anthology corpus as in [Aspect-based Document Similarity for Research Papers](https://arxiv.org/abs/2010.06395).
<img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/docrel.png">
See GitHub for more details: https://github.com/malteos/aspect-document-similarity
## Demo
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Google Colab"></a>
You can try our trained models directly on Google Colab on all papers available on Semantic Scholar (via DOI, ArXiv ID, ACL ID, PubMed ID):
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/demo.gif" alt="Click here for demo"></a>
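For programmatic use outside the notebook, a minimal, unofficial sketch with `transformers` could look like the following; the paired title/abstract input format and the sigmoid multi-label readout are assumptions here, so consult the GitHub repository for the exact preprocessing and label mapping:
```python
# Hedged sketch: score aspect labels for a pair of papers (input format is an assumption).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "malteos/aspect-acl-scibert-scivocab-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

paper_a = "Attention Is All You Need. We propose the Transformer, a model based solely on attention."
paper_b = "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding."

inputs = tokenizer(paper_a, paper_b, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label aspects: apply a sigmoid and read label names from the model config.
probs = torch.sigmoid(logits)[0]
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```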
|
eb0b34277c2085bf69797ea0640327c5
|
WillHeld/t5-base-adv-top_v2
|
WillHeld
|
mt5
| 25 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['top_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,184 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-adv-top_v2
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the top_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0336
- Exact Match: 0.8540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|
| 1.4252 | 0.21 | 200 | 0.3381 | 0.1505 |
| 0.4478 | 0.41 | 400 | 0.0673 | 0.3914 |
| 0.38 | 0.62 | 600 | 0.0533 | 0.4060 |
| 0.3603 | 0.82 | 800 | 0.0490 | 0.4132 |
| 0.3539 | 1.03 | 1000 | 0.0420 | 0.4186 |
| 0.3425 | 1.23 | 1200 | 0.0396 | 0.4219 |
| 0.3373 | 1.44 | 1400 | 0.0384 | 0.4233 |
| 0.3345 | 1.64 | 1600 | 0.0361 | 0.4247 |
| 0.3334 | 1.85 | 1800 | 0.0357 | 0.4255 |
| 0.33 | 2.05 | 2000 | 0.0361 | 0.4277 |
| 0.3269 | 2.26 | 2200 | 0.0349 | 0.4278 |
| 0.3262 | 2.46 | 2400 | 0.0345 | 0.4288 |
| 0.324 | 2.67 | 2600 | 0.0342 | 0.4285 |
| 0.3212 | 2.87 | 2800 | 0.0337 | 0.4295 |
| 0.3257 | 3.08 | 3000 | 0.0336 | 0.4293 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
15f9589c894fe1c51007d731abe39ca8
|
Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
|
Shushant
|
bert
| 12 | 14 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,759 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.9518 |
| No log | 2.0 | 44 | 3.2703 |
| No log | 3.0 | 66 | 2.9308 |
| No log | 4.0 | 88 | 2.7806 |
| No log | 5.0 | 110 | 2.6926 |
| No log | 6.0 | 132 | 2.7043 |
| No log | 7.0 | 154 | 2.7113 |
| No log | 8.0 | 176 | 2.7236 |
| No log | 9.0 | 198 | 2.7559 |
| No log | 10.0 | 220 | 2.7515 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
28b53ca032816721c205264140962e86
|
anton-l/ddpm-ema-pokemon-64
|
anton-l
| null | 8 | 10 |
diffusers
| 1 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/pokemon']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,208 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-ema-pokemon-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/pokemon` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
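As an unofficial placeholder until the snippet above is filled in, unconditional sampling with 🤗 Diffusers typically looks like this (the `DDPMPipeline` class is an assumption based on the training setup):
```python
# Hedged sketch: sample one 64x64 Pokemon-style image from the trained DDPM.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("anton-l/ddpm-ema-pokemon-64")
image = pipeline().images[0]
image.save("pokemon_sample.png")
```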
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/anton-l/ddpm-ema-pokemon-64/tensorboard?#scalars)
|
115a13909de5ad978ad07c23166d24f5
|
fathyshalab/massive_social-roberta-large-v1-1
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,456 | false |
# fathyshalab/massive_social-roberta-large-v1-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
232620f94113960e4e057ce2cf3140e8
|
toloka/t5-large-for-text-aggregation
|
toloka
|
t5
| 7 | 33 |
transformers
| 3 |
summarization
| true | false | false |
apache-2.0
|
['en']
|
['toloka/CrowdSpeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text aggregation', 'summarization']
| false | true | true | 2,939 | false |
# T5 Large for Text Aggregation
## Model description
This is a T5 Large fine-tuned for crowdsourced text aggregation tasks. The model takes multiple performers' responses and yields a single aggregated response. This approach was introduced for the first time during [VLDB 2021 Crowd Science Challenge](https://crowdscience.ai/challenges/vldb21) and originally implemented at the second-place competitor's [GitHub](https://github.com/A1exRey/VLDB2021_workshop_t5). The [paper](http://ceur-ws.org/Vol-2932/short2.pdf) describing this model was presented at the [2nd Crowd Science Workshop](https://crowdscience.ai/conference_events/vldb21).
## How to use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig
mname = "toloka/t5-large-for-text-aggregation"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "samplee text | sampl text | sample textt"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # sample text
```
## Training data
Pretrained weights were taken from the [original](https://huggingface.co/t5-large) T5 Large model by Google. For more details on the T5 architecture and training procedure see https://arxiv.org/abs/1910.10683
Model was fine-tuned on `train-clean`, `dev-clean` and `dev-other` parts of the [CrowdSpeech](https://huggingface.co/datasets/toloka/CrowdSpeech) dataset that was introduced in [our paper](https://openreview.net/forum?id=3_hgF1NAXU7&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3DNeurIPS.cc%2F2021%2FTrack%2FDatasets_and_Benchmarks%2FRound1%2FAuthors%23your-submissions).
## Training procedure
The model was fine-tuned for eight epochs directly following the HuggingFace summarization training [example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization).
## Eval results
Dataset | Split | WER
-----------|------------|----------
CrowdSpeech| test-clean | 4.99
CrowdSpeech| test-other | 10.61
### BibTeX entry and citation info
```bibtex
@inproceedings{Pletenev:21,
author = {Pletenev, Sergey},
title = {{Noisy Text Sequences Aggregation as a Summarization Subtask}},
year = {2021},
booktitle = {Proceedings of the 2nd Crowd Science Workshop: Trust, Ethics, and Excellence in Crowdsourced Data Management at Scale},
pages = {15--20},
address = {Copenhagen, Denmark},
issn = {1613-0073},
url = {http://ceur-ws.org/Vol-2932/short2.pdf},
language = {english},
}
```
```bibtex
@misc{pavlichenko2021vox,
title={Vox Populi, Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription},
author={Nikita Pavlichenko and Ivan Stelmakh and Dmitry Ustalov},
year={2021},
eprint={2107.01091},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
fda63336d5783fb1491e96b2ba23bf33
|
lenses/distilroberta-base-finetuned-assignment2
|
lenses
|
roberta
| 9 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,269 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-assignment2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 52 | 0.6602 |
| No log | 2.0 | 104 | 0.5939 |
| No log | 3.0 | 156 | 0.6450 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
124f5edc29edaf43b7f68349dace781e
|
muhtasham/small-vanilla-target-glue-mnli-linear-probe
|
muhtasham
|
bert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,829 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-vanilla-target-glue-mnli-linear-probe
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0612
- Accuracy: 0.4363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1093 | 0.04 | 500 | 1.0875 | 0.3914 |
| 1.089 | 0.08 | 1000 | 1.0814 | 0.3988 |
| 1.0811 | 0.12 | 1500 | 1.0760 | 0.4113 |
| 1.0753 | 0.16 | 2000 | 1.0728 | 0.4200 |
| 1.0758 | 0.2 | 2500 | 1.0702 | 0.4252 |
| 1.0727 | 0.24 | 3000 | 1.0684 | 0.4269 |
| 1.0707 | 0.29 | 3500 | 1.0665 | 0.4295 |
| 1.0702 | 0.33 | 4000 | 1.0648 | 0.4317 |
| 1.0654 | 0.37 | 4500 | 1.0627 | 0.4352 |
| 1.0637 | 0.41 | 5000 | 1.0612 | 0.4363 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
27a7d524492d6ac3b47446c9cbd68e36
|
gababas/rraacchhiissbb
|
gababas
| null | 16 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 423 | false |
### rraacchhiissbb Dreambooth model trained by gababas with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
3d70e3b05a9953b3a126f13fc975bb24
|
hassnain/wav2vec2-base-timit-demo-colab0
|
hassnain
|
wav2vec2
| 12 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,461 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1808
- Wer: 0.7734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8077 | 7.04 | 500 | 3.1554 | 1.0 |
| 2.8549 | 14.08 | 1000 | 2.0683 | 1.0846 |
| 1.3297 | 21.13 | 1500 | 1.2084 | 0.7984 |
| 0.6725 | 28.17 | 2000 | 1.1808 | 0.7734 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cf7379692883447eea27291a398a4072
|
henryscheible/wnli_bert-base-uncased_81_v2
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,019 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wnli_bert-base-uncased_81_v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6991
- Accuracy: 0.4507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
8915421ab7f853615da2a925142810b2
|
Seyfelislem/wspr-sm-ar
|
Seyfelislem
|
whisper
| 14 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,458 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wspr-sm-ar
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4515
- Wer: 72.6173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4569 | 0.25 | 500 | 0.8556 | 105.5427 |
| 0.5478 | 0.5 | 1000 | 0.7056 | 86.3373 |
| 0.2269 | 0.75 | 1500 | 0.6320 | 114.2627 |
| 0.1936 | 1.12 | 2000 | 0.4515 | 72.6173 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.11.0
- Datasets 2.9.1.dev0
- Tokenizers 0.12.1
|
15a37a182ec58ca894e2259f422b0d31
|
gpssohi/distilbart-qgen-3-3
|
gpssohi
|
bart
| 10 | 6 |
transformers
| 2 |
summarization
| true | false | false |
apache-2.0
|
['en']
|
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-generation', 'summarization']
| false | true | true | 2,606 | false |
# Introduction
This model checkpoint is obtained by first fine-tuning the sshleifer/distilbart-cnn-6-6 summarization checkpoint on the SQuAD dataset. After this, the 6-6 fine-tuned model is distilled down to a 3-3 model which gives us the final checkpoint. [GitHub Link for training scripts.](https://github.com/darth-c0d3r/bart-question-generation)
# Usage
The input format is as follows: `[answer] <s> [passage]`. The model will predict the question that corresponds to the answer from the passage.
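A minimal, unofficial sketch following that format (the example passage and generation settings are illustrative only):
```python
# Hedged sketch: generate a question for a given answer span and passage.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "gpssohi/distilbart-qgen-3-3"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

answer = "Denver Broncos"
passage = (
    "Super Bowl 50 was an American football game to determine the champion of the "
    "NFL for the 2015 season. The Denver Broncos defeated the Carolina Panthers 24-10."
)

# Input format described above: [answer] <s> [passage]
inputs = tokenizer(f"{answer} <s> {passage}", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```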
# Plot

# Dataset
The goal of Question Generation is to generate a valid and fluent question according to a given passage and the target answer. Hence, the input to the model will be a passage context and an answer, and the output / target will be the question for the given answer. Question Generation can be used in many scenarios, such as automatic tutoring systems, improving the performance of Question Answering models and enabling chat-bots to lead a conversation. The final dataset is created by taking the union of the following Question Answering Datasets. The dataset must have the following three columns: context, question, answer.
## [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowd-workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. We use the SQuAD 1.1 variant which does not have unanswerable questions. So, every question will have a corresponding answer and vice-versa.
### Preprocessing
The first step is to remove questions which don't have answers. After that, we split the train set into Train and Eval sets and treat the dev set as the test set.
### Stats
**Original Dataset**
| Split | Num Docs | Num Contexts | Ques w/ Ans | Ques w/o Ans | Num Unique Ans |
| ----- | -------- | ------------ | ----------- | ------------ | -------------- |
| Train | 442 | 19035 | 86821 | 43498 | 86821 |
| Dev | 35 | 1204 | 5928 | 5945 | 10279 |
**After Preprocessing**
| Split | Num Rows | Context | Answer | Question |
| ----- | -------- | ---------- | ------ | -------- |
| Train | 80995 | 653,120,20 | 43,3,1 | 40,10,1 |
| Eval | 5826 | 445,123,67 | 28,3,1 | 29,10,3 |
| Test | 10297 | 629,129,25 | 29,4,1 | 31,10,3 |
The numbers in the columns indicate max, avg, min number of words.
|
da7a96131dde64fb762792c4586eeeb5
|
asapp/sew-d-base-100k
|
asapp
|
sew-d
| 5 | 6 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
|
['en']
|
['librispeech_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['speech']
| false | true | true | 1,700 | false |
# SEW-D-base
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
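For plain feature extraction without fine-tuning, a minimal sketch (not from the original authors) might look like the following; the random waveform is only a stand-in for real 16 kHz audio, and if the repository does not ship a preprocessor config you may need to construct a default `Wav2Vec2FeatureExtractor()` instead:
```python
# Hedged sketch: extract hidden states from raw 16 kHz audio with SEW-D.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, SEWDModel

name = "asapp/sew-d-base-100k"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = SEWDModel.from_pretrained(name)

waveform = np.random.randn(16000).astype(np.float32)  # one second of noise at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```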
|
e132b70f1e539612269dae3cd758940c
|
Rakib/whisper-small-bn
|
Rakib
|
whisper
| 32 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['bn']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 5,079 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Bengali
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3377
- Wer: 14.4623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- training_steps: 60000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.2431 | 1.92 | 1000 | 0.2604 | 33.5683 |
| 0.1403 | 3.83 | 2000 | 0.1703 | 23.7591 |
| 0.0799 | 5.75 | 3000 | 0.1429 | 19.5394 |
| 0.0411 | 7.66 | 4000 | 0.1568 | 19.0023 |
| 0.0244 | 9.58 | 5000 | 0.1684 | 18.3154 |
| 0.0139 | 11.49 | 6000 | 0.1856 | 17.7401 |
| 0.0085 | 13.41 | 7000 | 0.2062 | 17.5263 |
| 0.0063 | 15.33 | 8000 | 0.2146 | 17.2952 |
| 0.0041 | 17.24 | 9000 | 0.2202 | 16.9966 |
| 0.0033 | 19.16 | 10000 | 0.2163 | 16.4749 |
| 0.0027 | 21.07 | 11000 | 0.2267 | 16.5334 |
| 0.0031 | 22.99 | 12000 | 0.2313 | 16.4263 |
| 0.0033 | 24.9 | 13000 | 0.2289 | 16.3544 |
| 0.0025 | 26.82 | 14000 | 0.2384 | 16.0087 |
| 0.0023 | 28.74 | 15000 | 0.2343 | 16.1089 |
| 0.0027 | 30.65 | 16000 | 0.2389 | 16.1495 |
| 0.0022 | 32.57 | 17000 | 0.2461 | 15.9631 |
| 0.0016 | 34.48 | 18000 | 0.2364 | 15.9040 |
| 0.0015 | 36.4 | 19000 | 0.2415 | 15.7161 |
| 0.0009 | 38.31 | 20000 | 0.2411 | 15.3724 |
| 0.0013 | 40.23 | 21000 | 0.2425 | 15.5817 |
| 0.0013 | 42.15 | 22000 | 0.2469 | 15.5112 |
| 0.001 | 44.06 | 23000 | 0.2549 | 15.5474 |
| 0.0015 | 45.98 | 24000 | 0.2481 | 15.3624 |
| 0.0013 | 47.89 | 25000 | 0.2517 | 15.5316 |
| 0.0007 | 49.81 | 26000 | 0.2559 | 15.2305 |
| 0.0006 | 51.72 | 27000 | 0.2567 | 15.4066 |
| 0.0008 | 53.64 | 28000 | 0.2538 | 15.2464 |
| 0.0009 | 55.56 | 29000 | 0.2468 | 15.1284 |
| 0.0005 | 57.47 | 30000 | 0.2660 | 15.0138 |
| 0.0003 | 59.39 | 31000 | 0.2594 | 14.9384 |
| 0.0004 | 61.3 | 32000 | 0.2580 | 14.8814 |
| 0.0006 | 63.22 | 33000 | 0.2642 | 14.9374 |
| 0.0005 | 65.13 | 34000 | 0.2650 | 15.1155 |
| 0.0003 | 67.05 | 35000 | 0.2660 | 14.9939 |
| 0.0004 | 68.97 | 36000 | 0.2625 | 15.1031 |
| 0.0002 | 70.88 | 37000 | 0.2782 | 14.8139 |
| 0.0003 | 72.8 | 38000 | 0.2647 | 15.0768 |
| 0.0004 | 74.71 | 39000 | 0.2665 | 14.8680 |
| 0.0004 | 76.63 | 40000 | 0.2711 | 14.7966 |
| 0.0001 | 78.54 | 41000 | 0.2742 | 14.8075 |
| 0.0002 | 80.46 | 42000 | 0.2703 | 14.9364 |
| 0.0001 | 82.38 | 43000 | 0.2733 | 14.7604 |
| 0.0003 | 84.29 | 44000 | 0.2741 | 14.8209 |
| 0.0 | 86.21 | 45000 | 0.2792 | 14.6046 |
| 0.0 | 88.12 | 46000 | 0.2764 | 14.7356 |
| 0.0 | 90.04 | 47000 | 0.2830 | 14.6874 |
| 0.0 | 91.95 | 48000 | 0.2887 | 14.5630 |
| 0.0 | 93.87 | 49000 | 0.2951 | 14.5803 |
| 0.0 | 95.79 | 50000 | 0.3008 | 14.5476 |
| 0.0 | 97.7 | 51000 | 0.3060 | 14.5188 |
| 0.0 | 99.62 | 52000 | 0.3110 | 14.5248 |
| 0.0 | 101.53 | 53000 | 0.3158 | 14.4985 |
| 0.0 | 103.45 | 54000 | 0.3207 | 14.4980 |
| 0.0 | 105.36 | 55000 | 0.3255 | 14.5124 |
| 0.0 | 107.28 | 56000 | 0.3298 | 14.4945 |
| 0.0 | 109.2 | 57000 | 0.3342 | 14.4752 |
| 0.0 | 111.11 | 58000 | 0.3377 | 14.4623 |
| 0.0 | 113.03 | 59000 | 0.3401 | 14.4856 |
| 0.0 | 114.94 | 60000 | 0.3412 | 14.4896 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 2.0.0.dev20230117+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
bc0fe790a16e0779a96b7f9be26618c3
|
Digitalwitness/distilgpt2-finetuned-shakespeare
|
Digitalwitness
|
gpt2
| 13 | 4 |
transformers
| 0 |
text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,965 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Digitalwitness/distilgpt2-finetuned-shakespeare
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0603
- Validation Loss: 2.2069
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4056 | 3.1490 | 0 |
| 3.1359 | 2.9958 | 1 |
| 2.9970 | 2.9052 | 2 |
| 2.9003 | 2.8363 | 3 |
| 2.8192 | 2.7759 | 4 |
| 2.7524 | 2.7306 | 5 |
| 2.6881 | 2.6775 | 6 |
| 2.6294 | 2.6329 | 7 |
| 2.5716 | 2.5949 | 8 |
| 2.5213 | 2.5512 | 9 |
| 2.4652 | 2.5107 | 10 |
| 2.4156 | 2.4803 | 11 |
| 2.3677 | 2.4329 | 12 |
| 2.3163 | 2.3989 | 13 |
| 2.2735 | 2.3695 | 14 |
| 2.2311 | 2.3317 | 15 |
| 2.1842 | 2.2924 | 16 |
| 2.1386 | 2.2688 | 17 |
| 2.1015 | 2.2297 | 18 |
| 2.0603 | 2.2069 | 19 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.0
- Tokenizers 0.13.1
|
b67fd06d8ef3565d460e64b4146ffcc8
|
TencentGameMate/chinese-hubert-large
|
TencentGameMate
|
hubert
| 6 | 37,250 |
transformers
| 8 |
feature-extraction
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,169 | false |
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
python package:
transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import (
Wav2Vec2FeatureExtractor,
HubertModel,
)
model_path=""
wav_path=""
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = HubertModel.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # device was not defined in the original snippet
model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
```
|
e12f048513ec778e452868b28fd15ec4
|
lmqg/mt5-small-ruquad-qg
|
lmqg
|
mt5
| 40 | 73 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['ru']
|
['lmqg/qg_ruquad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 6,698 | false |
# Model Card of `lmqg/mt5-small-ruquad-qg`
This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation task on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** ru
- **Training data:** [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ru", model="lmqg/mt5-small-ruquad-qg")
# model prediction
questions = model.generate_q(list_context="Нелишним будет отметить, что, развивая это направление, Д. И. Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, в мае 1860 года провёл серию опытов.", list_answer="в мае 1860 года")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-ruquad-qg")
output = pipe("Нелишним будет отметить, что, развивая это направление, Д. И. Менделеев, поначалу априорно выдвинув идею о температуре, при которой высота мениска будет нулевой, <hl> в мае 1860 года <hl> провёл серию опытов.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-ruquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 84.27 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| Bleu_1 | 31.03 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| Bleu_2 | 24.58 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| Bleu_3 | 19.92 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| Bleu_4 | 16.31 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| METEOR | 26.39 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| MoverScore | 62.49 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| ROUGE_L | 31.39 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. [raw metric file](https://huggingface.co/lmqg/mt5-small-ruquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_ruquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 90.17 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedF1Score (MoverScore) | 68.22 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedPrecision (BERTScore) | 90.17 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedPrecision (MoverScore) | 68.23 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedRecall (BERTScore) | 90.16 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedRecall (MoverScore) | 68.21 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-small-ruquad-ae`](https://huggingface.co/lmqg/mt5-small-ruquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-small-ruquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_ruquad.default.lmqg_mt5-small-ruquad-ae.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 76.96 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedF1Score (MoverScore) | 55.53 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedPrecision (BERTScore) | 73.41 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedPrecision (MoverScore) | 53.24 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedRecall (BERTScore) | 81.05 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
| QAAlignedRecall (MoverScore) | 58.25 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_ruquad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 5
- batch: 64
- lr: 0.001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-ruquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
a9638dd02dfa1c7a2896c484fa957682
|
vumichien/whisper-medium-jp
|
vumichien
|
whisper
| 22 | 1,754 |
transformers
| 4 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,544 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3029
- Wer: 9.0355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0392 | 3.03 | 1000 | 0.2023 | 10.1807 |
| 0.0036 | 7.01 | 2000 | 0.2478 | 9.4409 |
| 0.0013 | 10.04 | 3000 | 0.2791 | 9.1014 |
| 0.0002 | 14.01 | 4000 | 0.2970 | 9.0625 |
| 0.0002 | 17.04 | 5000 | 0.3029 | 9.0355 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
2013034464fb1748af580a5745d2e7d9
|
BeardedJohn/bert-finetuned-ner-per-v6
|
BeardedJohn
|
bert
| 8 | 15 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,507 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-per-v6
This model is a fine-tuned version of [BeardedJohn/bert-ner-wikiann](https://huggingface.co/BeardedJohn/bert-ner-wikiann) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0155
- Validation Loss: 0.0025
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 313, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0155 | 0.0025 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.11.0
|
920fbf5807b27332ec33be8fc9ebcf7e
|
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s263
|
jonatasgrosman
|
wav2vec2
| 10 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 476 | false |
# exp_w2v2r_es_xls-r_gender_male-5_female-5_s263
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
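A minimal transcription sketch with HuggingSound (the audio paths are placeholders):
```python
# Sketch: transcribe Spanish audio files with the HuggingSound wrapper.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-5_female-5_s263")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```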
|
30dcaa3ba298ffcff518a42edf39d399
|
sentence-transformers/msmarco-distilbert-base-dot-prod-v3
|
sentence-transformers
|
distilbert
| 15 | 3,767 |
sentence-transformers
| 1 |
sentence-similarity
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 2,203 | false |
# sentence-transformers/msmarco-distilbert-base-dot-prod-v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-dot-prod-v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-dot-prod-v3)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
6e35ec5678e772310315ca1ca73670e7
|
zhangfx7/distilbert-base-uncased-finetuned-cola
|
zhangfx7
|
distilbert
| 13 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,275 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4908
- Matthews Correlation: 0.4468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5214 | 1.0 | 535 | 0.4908 | 0.4468 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
b22f6b193617578d84e04fd7e99cd024
|
sirhugh15/xlm-roberta-base-finetuned-panx-de-fr
|
sirhugh15
|
xlm-roberta
| 10 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,321 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1661
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2935 | 1.0 | 715 | 0.1887 | 0.8216 |
| 0.1476 | 2.0 | 1430 | 0.1625 | 0.8473 |
| 0.0955 | 3.0 | 2145 | 0.1661 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
6922da407c9857b06aaba8d71b866318
|
s3nh/DialoGPT-small-buzz-toy-story
|
s3nh
|
gpt2
| 9 | 8 |
transformers
| 0 |
conversational
| true | false | false |
openrail
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,580 | false |
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
<img src = 'https://images.unsplash.com/photo-1599623560574-39d485900c95?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1170&q=80'>
### Description
DialoGPT is a dialogue-oriented variant of the GPT (Generative Pretrained Transformer) language model, released by Microsoft Research. It is a deep neural-network language model trained on massive amounts of conversational text to generate human-like responses.
DialoGPT uses the transformer architecture, which is a type of neural network designed for processing sequential data such as language. During the training phase, the model is exposed to a large corpus of text and learns to predict the next word in a sequence given the previous words.
In the context of dialogue, DialoGPT is trained to predict the response in a conversation, given the context of the conversation. This context can include one or more turns of the conversation, along with any additional information such as the topic of the conversation or the speaker's personality.
At inference time, the model takes the current context of the conversation as input and generates a response. The response is generated by sampling from the model's predicted distribution over the vocabulary.
Overall, DialoGPT provides a flexible and powerful solution for generating human-like text in a conversational context, allowing for the creation of a wide range of applications such as chatbots, conversational agents, and virtual assistants.
## Parameters
The model was trained for 20 epochs with the following parameters.
```
per_gpu_train_batch_size: int = 2
per_gpu_eval_batch_size: int = 2
gradient_accumulation_steps: int = 1
learning_rate: float = 5e-5
weight_decay: float = 0.0
adam_epsilon: float = 1e-8
max_grad_norm: int = 1.0
num_train_epochs: int = 20
max_steps: int = -1
warmup_steps: int = 0
logging_steps: int = 1000
save_steps: int = 3500
save_total_limit = None
eval_all_checkpoints: bool = False
no_cuda: bool = False
overwrite_output_dir: bool = True
overwrite_cache: bool = True
should_continue: bool = False
seed: int = 42
local_rank: int = -1
fp16: bool = False
fp16_opt_level: str = 'O1'
```
## Usage
DialoGPT small version, fine-tuned on Buzz's lines from the Toy Story scripts.
A simple snippet showing how to run inference with this model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('s3nh/DialoGPT-small-buzz-toy-story')
model = AutoModelForCausalLM.from_pretrained('s3nh/DialoGPT-small-buzz-toy-story')

# Chat for 4 turns: append each user input to the running history and sample a reply
for step in range(4):
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    print("BuzzBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
2ba374022b87e7c37114fe73a13580c0
|
MultiBertGunjanPatrick/multiberts-seed-4-180k
|
MultiBertGunjanPatrick
|
bert
| 7 | 4 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-4']
| false | true | true | 6,483 | false |
# MultiBERTs Seed 4 Checkpoint 180k (uncased)
Seed 4 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-180k')
model = BertModel.from_pretrained("multiberts-seed-4-180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
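A small illustrative sketch of this 80/10/10 corruption scheme (plain Python, not the original training code; `vocab` stands for any list of candidate replacement tokens):
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the MLM corruption described above to a list of tokens."""
    corrupted = list(tokens)
    targets = []  # indices the model has to predict
    for i in range(len(corrupted)):
        if random.random() < mask_prob:
            targets.append(i)
            r = random.random()
            if r < 0.8:
                corrupted[i] = "[MASK]"              # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: keep the original token unchanged
    return corrupted, targets
```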
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
4199325fb4c9a1a806f4dddc98b1dd77
|
ErodeesFleurs/Amtmp
|
ErodeesFleurs
| null | 6 | 38 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['jp']
|
['ErodeesFleurs']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 10,312 | false |
## ESPnet2 TTS model
### `ErodeesFleurs/Amtmp`
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout d5b5ec7b2e77bd3e10707141818b7e6c57ac6b3f
pip install -e .
cd egs2/amadeus/tts1
./run.sh --skip_data_prep false --skip_train true --download_model ErodeesFleurs/Amtmp
```
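For inference from Python, a minimal sketch (assuming `espnet_model_zoo` is installed and the `espnet2` `Text2Speech` interface; the example sentence is illustrative):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# build the model directly from this Hugging Face repository
text2speech = Text2Speech.from_pretrained("ErodeesFleurs/Amtmp")

out = text2speech("こんにちは、世界。")
sf.write("out.wav", out["wav"].cpu().numpy(), text2speech.fs, "PCM_16")
```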
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/finetune_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_amadeus_vits_finetune_from_jsut_32_sentence
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 2000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: true
wandb_project: amadeus
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- downloads/f3698edf589206588f58f5ec837fa516/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause/train.total_count.ave_10best.pth:tts:tts
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 5000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/22k/raw/train/text
- text
- text
- - dump/22k/raw/train/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/22k/raw/dev/text
- text
- text
- - dump/22k/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
token_list:
- <blank>
- <unk>
- '1'
- '2'
- '0'
- '3'
- '4'
- '-1'
- '5'
- a
- o
- '-2'
- i
- '-3'
- u
- e
- k
- n
- t
- '6'
- r
- '-4'
- s
- N
- m
- pau
- '7'
- sh
- d
- g
- w
- '8'
- U
- '-5'
- I
- cl
- h
- y
- b
- '9'
- j
- ts
- ch
- '-6'
- z
- p
- '-7'
- f
- ky
- ry
- '-8'
- gy
- '-9'
- hy
- ny
- '-10'
- by
- my
- '-11'
- '-12'
- '-13'
- py
- '-14'
- '-15'
- v
- '10'
- '-16'
- '-17'
- '11'
- '-21'
- '-20'
- '12'
- '-19'
- '13'
- '-18'
- '14'
- dy
- '15'
- ty
- '-22'
- '16'
- '18'
- '19'
- '17'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: jaconv
g2p: pyopenjtalk_accent_with_pause
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 85
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202207'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
0b1efefc1b8b1b7bda1d8f5f20656a9f
|
pig4431/CR_DistilBERT_5E
|
pig4431
|
distilbert
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,065 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CR_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3663
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
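As a starting point, a minimal inference sketch (assuming the standard `text-classification` pipeline; label names follow the model config, e.g. `LABEL_0`/`LABEL_1` unless renamed):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/CR_DistilBERT_5E")
print(classifier("The battery life of this camera is excellent."))
```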
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6345 | 0.33 | 50 | 0.5656 | 0.66 |
| 0.4704 | 0.66 | 100 | 0.3705 | 0.82 |
| 0.3428 | 0.99 | 150 | 0.3186 | 0.8867 |
| 0.2272 | 1.32 | 200 | 0.2871 | 0.9 |
| 0.259 | 1.66 | 250 | 0.2975 | 0.8867 |
| 0.2583 | 1.99 | 300 | 0.3125 | 0.8867 |
| 0.1713 | 2.32 | 350 | 0.3146 | 0.8867 |
| 0.181 | 2.65 | 400 | 0.3602 | 0.8867 |
| 0.1868 | 2.98 | 450 | 0.3319 | 0.8933 |
| 0.1521 | 3.31 | 500 | 0.3413 | 0.8867 |
| 0.1153 | 3.64 | 550 | 0.3868 | 0.88 |
| 0.1238 | 3.97 | 600 | 0.3686 | 0.8867 |
| 0.1104 | 4.3 | 650 | 0.3674 | 0.8867 |
| 0.0881 | 4.64 | 700 | 0.3750 | 0.8867 |
| 0.1247 | 4.97 | 750 | 0.3663 | 0.9 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.1
|
f72b8f718602f539eb17409c6d7b7d36
|
StonyBrookNLP/teabreac-t5-3b-tatqa
|
StonyBrookNLP
|
t5
| 10 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering, multi-step-reasoning, multi-hop-reasoning']
| false | true | true | 2,624 | false |
# What's this?
This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts".](https://arxiv.org/abs/2205.12496).
This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [GitHub repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/teabreac-t5-3b-tatqa"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"answer_me: Who scored the first touchdown of the game?" +
"context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
```
|
7dfa3e6f52e9feec3a110b7555877fcf
|
waifu-research-department/CC
|
waifu-research-department
| null | 3 | 0 | null | 2 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 509 | false |
# Description
Trainer: naotsue
C.C. from Code Geass
# Dataset
>Training: 24 images
>Regularization: 500 images
# Info
>Model Used: Waifu Diffusion 1.2
>Steps: 3000
>Keyword: C.C (Use this in the prompt)
>Class Phrase: 1girl_green_hair_yellow_eyes_anime

|
f41e0a57de68cdfbb43f80196e474f8e
|
sd-concepts-library/lucky-luke
|
sd-concepts-library
| null | 13 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,462 | false |
### lucky-luke on Stable Diffusion
This is the `<lucky-luke>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
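A minimal generation sketch (assuming a recent `diffusers` release that provides `load_textual_inversion`; the base checkpoint below is only an example choice):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # example base model
).to("cuda")

# pull the learned <lucky-luke> embedding from this concept repository
pipe.load_textual_inversion("sd-concepts-library/lucky-luke")

image = pipe("a portrait of <lucky-luke> riding through the desert").images[0]
image.save("lucky-luke.png")
```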
Here is the new concept you will be able to use as an `object`:








|
741fef04a1952fd3877c34e672b92f0f
|
inhee/m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5
|
inhee
|
m2m_100
| 14 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,697 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5
This model is a fine-tuned version of [inhee/m2m100_418M-finetuned-ko-to-en4](https://huggingface.co/inhee/m2m100_418M-finetuned-ko-to-en4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2863
- Bleu: 87.4185
- Gen Len: 9.7107
## Model description
More information needed
## Intended uses & limitations
More information needed
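As a starting point, a minimal Korean-to-English inference sketch (assuming the standard M2M100 API; the example sentence is illustrative):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "inhee/m2m100_418M-finetuned-ko-to-en4-finetuned-ko-to-en5"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "ko"
encoded = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```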
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 105 | 0.3571 | 78.7464 | 9.5775 |
| No log | 2.0 | 210 | 0.3410 | 81.9462 | 9.6505 |
| No log | 3.0 | 315 | 0.3102 | 84.746 | 9.6732 |
| No log | 4.0 | 420 | 0.2929 | 86.5137 | 9.6997 |
| 0.2431 | 5.0 | 525 | 0.2863 | 87.4185 | 9.7107 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
62e24f436a5eb89347859fbb92ccc680
|
thomas0104/whisper_medium_nan_tw
|
thomas0104
|
whisper
| 19 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['zh']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,644 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium nan-tw
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 nan-tw dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9100
- Wer: 42.0709
- Cer: 22.3681
## Model description
More information needed
## Intended uses & limitations
More information needed
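As a starting point, a minimal transcription sketch (assuming the standard `automatic-speech-recognition` pipeline and ffmpeg for audio decoding; the file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thomas0104/whisper_medium_nan_tw")
print(asr("sample_16khz.wav"))
```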
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0568 | 5.0 | 1000 | 0.7769 | 48.2706 | 26.0890 |
| 0.0057 | 10.0 | 2000 | 0.8438 | 44.0722 | 23.9270 |
| 0.0041 | 15.01 | 3000 | 0.8740 | 42.8540 | 22.9554 |
| 0.0001 | 20.01 | 4000 | 0.9041 | 42.1797 | 22.5496 |
| 0.0001 | 25.01 | 5000 | 0.9100 | 42.0709 | 22.3681 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
3de250973ec9791c30823e4526f0801f
|
NitishKumar/distilbert-base-uncased-finetuned-squad
|
NitishKumar
|
distilbert
| 12 | 2 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,278 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9423
## Model description
More information needed
## Intended uses & limitations
More information needed
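As a starting point, a minimal inference sketch (assuming the standard `question-answering` pipeline; the example question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="NitishKumar/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```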
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 3.3894 |
| No log | 2.0 | 130 | 3.0268 |
| No log | 3.0 | 195 | 2.9423 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ed892478db29408a6f87a093c9356a9d
|
wietsedv/xlm-roberta-base-ft-udpos28-ca
|
wietsedv
|
xlm-roberta
| 8 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['ca']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['part-of-speech', 'token-classification']
| true | true | true | 567 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Catalan
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ca")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ca")
```
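For quick tagging, the `token-classification` pipeline can be used as well (the Catalan example sentence is illustrative):
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-ca")
print(tagger("El gat dorm al sofà."))
```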
|
1b1ac9d0c82f7fddb338caa4f8489616
|
shibing624/prompt-t5-base-chinese
|
shibing624
|
t5
| 11 | 62 |
transformers
| 4 |
text2text-generation
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['t5', 'pytorch', 'prompt', 'zh', 'Text2Text-Generation']
| false | true | true | 6,440 | false |
# Chinese Prompt(prompt-t5-base-chinese) Model
A prompt model for Chinese NLP, [shibing624/prompt-t5-base-chinese](https://huggingface.co/shibing624/prompt-t5-base-chinese): one model for all NLP tasks (OFA).
1. Fine-tuned from the [ClueAI/PromptCLUE-base](https://huggingface.co/ClueAI/PromptCLUE-base) pretrained model on the [pCLUE Chinese prompt dataset](https://github.com/CLUEbenchmark/pCLUE) and the [SIGHAN+Wang271K Chinese error-correction dataset](https://github.com/shibing624/pycorrector#Dataset)
2. The model was trained with the `T5Model` of [textgen](https://github.com/shibing624/textgen); reproduction script: [training_zh_prompt_model_demo.py](https://github.com/shibing624/textgen/blob/main/examples/T5/training_zh_prompt_model_demo.py)
`prompt-t5-base-chinese` evaluation on public test data:
The overall performance of T5 on `pCLUE_test_public.json` **test**:
|model|classify_score|nli_score|generate_score|mrc_f1_score|avg_score|
|:-- |:--- |:--- |:--- |:--- |:--- |
|ClueAI/PromptCLUE-base|0.2417|0.0|0.1731|0.2371|0.1549|
|shibing624/prompt-t5-base-chinese|0.5494|0.525|0.2751|0.2259|0.3893|
## Feature
PromptCLUE: a large-scale, multi-task, prompt-pretrained open-source Chinese model.
It was pretrained at scale on hundreds of billions of Chinese tokens (learning 1.5 trillion Chinese tokens in total), supports dozens of different NLP task types, and has good zero-shot and few-shot learning ability. For understanding tasks such as classification, sentiment analysis and extraction, the label scheme can be customized; for generation tasks, it can produce diverse text.
Three unifications for Chinese NLP: a unified model framework, a unified task format and a unified way of use:
- Unified model framework: unified modeling with a text-to-text generative pretrained model.
- Unified task format: prompts smooth over the differences between NLP tasks by converting them into one text-to-text data format.
- Unified way of use: the model is ready to use for target tasks; downstream applications are converted into a unified prompt-adaptation scheme for zero-shot/few-shot testing.

The fine-tuning datasets include:
1. Single-label classification: tnews
2. Single-label classification: iflytek
3. Natural language inference: ocnli
4. Semantic matching: afqmc
5. Coreference resolution: cluewsc2020
6. Keyword recognition: csl
7. Reading comprehension (free-form): c3
8. Reading comprehension (extractive): cmrc2018
9. Reading comprehension (idiom cloze): chid
10. Chinese error-correction dataset: sighan + wang271k
## Usage
This model is open-sourced in the text generation project [textgen](https://github.com/shibing624/textgen), which supports T5 models; it can be called as follows:
Install package:
```shell
pip install -U textgen
```
```python
from textgen import T5Model
model = T5Model("t5", "shibing624/prompt-t5-base-chinese")
r = model.predict(["中文改错:为了让人们遵守交通规律,警查叔叔不分昼夜在忙碌。"])
print(r) # ['为了让人们遵守交通规律,警察叔叔不分昼夜在忙碌。']
```
## Usage (HuggingFace Transformers)
Without [textgen](https://github.com/shibing624/textgen), you can use the model like this:
First, you pass your input through the transformer model, then you get the generated sentence.
Install package:
```
pip install transformers
```
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("shibing624/prompt-t5-base-chinese")
model = T5ForConditionalGeneration.from_pretrained("shibing624/prompt-t5-base-chinese")
def batch_generate(input_texts, max_length=64):
features = tokenizer(input_texts, return_tensors='pt')
outputs = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'],
max_length=max_length)
return tokenizer.batch_decode(outputs, skip_special_tokens=True)
r = batch_generate(["中文改错:为了让人们遵守交通规律,警查叔叔不分昼夜在忙碌。"])
print(r)
```
output:
```shell
['为了让人们遵守交通规律,警察叔叔不分昼夜在忙碌。']
```
The model files are organized as:
```
prompt-t5-base-chinese
├── config.json
├── model_args.json
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
├── spiece.model
└── vocab.txt
```
## Prediction examples
#### Chinese text correction (correction)
```bash
Input:
中文改错:为了让人们遵守交通规律,警查叔叔不分昼夜在忙碌。
Model output:
为了让人们遵守交通规律,警察叔叔不分昼夜在忙碌。
```
#### News classification (classify)
```bash
Input:
分类任务:
折价率过低遭抛售基金泰和跌7.15%,证券时报记者 朱景锋本报讯 由于折价率在大盘封基中处于最低水平,基金泰和昨日遭到投资者大举抛售,跌幅达到7.15%,远超大盘。盘面显示,基金泰和随大盘高开,之后开始震荡走低,午后开始加速下行,几乎没有像样反弹。截至收盘时,在沪深300指数仅下跌2.56%的情况下,基金泰和收盘跌幅高达7.15%,在所有封基中跌幅最大,而昨日多数封基跌幅在2%左右。
选项:财经,娱乐,时政,股票
答案:
Model output:
财经
```
#### Intent classification (classify)
```bash
Input:
意图分类:
帮我定一个周日上海浦东的房间
选项:闹钟,文学,酒店,艺术,体育,健康,天气,其他
答案:
Model output:
酒店
```
#### Sentiment analysis (classify)
```bash
Input:
情感分析:
这个看上去还可以,但其实我不喜欢
选项:积极,消极
答案:
Model output:
消极
```
#### Natural language inference (generate)
```bash
Input:
请推理出上下文的关系:
前提:对不起事情就是这样。
假设:事情就是这样,不需要道歉。
选项:中立,蕴涵,矛盾
答案:
Model output:
矛盾
```
#### Reading comprehension (generate)
```bash
Input:
阅读文章,给出答案:
段落:
港汇指数,全称港元实际汇兑指数(Effective Exchange Rate Index for the Hong Kong Dollar)是由香港政府统计处编制的一项指数,以反映港元与香港主要贸易伙伴之货币的名义有效汇率加权平均数的变动情况。加权比重是按1999年至2000年平均贸易模式所制定,但政府并未有公布详细的计算公式。旧港汇指数基准日为2000年1月1日,基数为100点。由2012年1月3日起,新系列港汇指数 (包括15种货币及以2010年1月 = 100) 已取代旧港汇指数系列。港汇指数的作用,主要是用于反映香港的货品及服务的价格相对于其主要贸易伙伴的变动,并通常被视作反映香港价格竞争力的指标。
问题:港汇指数的加权比重如何制定?
答案:
Model output:
按1999年至2000年平均贸易模式所制定
```
#### Reading comprehension, free-form (generate)
```bash
Input:
阅读以下对话并回答问题。
男:今天怎么这么晚才来上班啊?女:昨天工作到很晚,而且我还感冒了。男:那你回去休息吧,我帮你请假。女:谢谢你。
问题:女的怎么样?
选项:正在工作,感冒了,在打电话,要出差。
答案:
Model output:
感冒了
```
#### Summarization (generate)
```bash
Input:
为下面的文章生成摘要:
北京时间9月5日12时52分,四川甘孜藏族自治州泸定县发生6.8级地震。地震发生后,领导高度重视并作出重要指示,要求把抢救生命作为首要任务,全力救援受灾群众,最大限度减少人员伤亡
答案:
Model output:
四川甘孜发生6.8级地震
```
#### General information extraction (generate)
```bash
Input:
信息抽取:
据新华社电广东省清远市清城区政府昨日对外发布信息称,日前被实名举报涉嫌勒索企业、说“分分钟可以搞垮一间厂”的清城区环保局局长陈柏,已被免去清城区区委委员
问题:机构名,人名,职位
答案:
Model output:
机构名:新华社,清城区政府,清城区环保局,清城区区委
人名:陈柏
职位:局长,区委委员
```
#### Coreference resolution (generate)
```bash
Input:
指代消解:
段落:
少平跟润叶进了她二爸家的院子,润生走过来对他(代词)说:“我到宿舍找了你两回,你到哪里去了?”
问题:代词“他”指代的是?
答案:
Model output:
少平
```
#### Keyword extraction (generate)
```bash
Input:
抽取关键词:
当地时间21日,美国联邦储备委员会宣布加息75个基点,将联邦基金利率目标区间上调到3.00%至3.25%之间,符合市场预期。这是美联储今年以来第五次加息,也是连续第三次加息,创自1981年以来的最大密集加息幅度。
关键词:
Model output:
美联储,利率目标区间,加息,基点
```
#### Sentiment polarity (classify)
```bash
文字中包含了怎样的情感:
超可爱的帅哥,爱了。。。
选项:厌恶,喜欢,开心,悲伤,惊讶,生气,害怕
答案:
Model output:
喜欢
```
## Training dataset
#### Chinese prompt dataset
- Data: [pCLUE Chinese prompt dataset](https://github.com/CLUEbenchmark/pCLUE)
- Related resources
- [Huggingface](https://huggingface.co/)
- [PromptCLUE-base Model](https://huggingface.co/ClueAI/PromptCLUE-base)
- [textgen](https://github.com/shibing624/textgen)
Data format:
```text
{"input": "哪个类别最好的描述了这篇新闻?扣篮王拉文:精彩暴扣表演!炸\n选项:故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏\n答案:", "target": "电竞", "answer_choices": ["故事", "文化", "娱乐", "体育", "财经", "房产", "汽车", "教育", "科技", "军事", "旅游", "国际", "股票", "农业", "游戏"], "type": "classify"}
{"input": "“现在婴儿的健康状况仍很严重”记住上面的文字,考虑:“婴儿已经完全康复了。”这是总是,绝不,或有时正确的?\n答案:", "target": "绝不", "answer_choices": ["总是", "绝不", "有时"], "type": "nli"}
```
If you want to train a prompt model yourself, please refer to [https://github.com/shibing624/textgen/blob/main/examples/T5/training_zh_prompt_model_demo.py](https://github.com/shibing624/textgen/blob/main/examples/T5/training_zh_prompt_model_demo.py)
My training parameters were:
```
epoch=5
batch_size=50
max_length=512 # input text length
max_seq_length=128 # output text length
```
Training took about 48 hours on a single V100 GPU.
## Citation
```latex
@software{textgen,
author = {Xu Ming},
title = {textgen: Implementation of Text Generation models},
year = {2022},
url = {https://github.com/shibing624/textgen},
}
```
|
69aed90627e8f8a7243ada920f383d23
|
Namig/finetuning-sentiment-model-3000-samples
|
Namig
|
distilbert
| 13 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,053 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3222
- Accuracy: 0.87
- F1: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
9aef1f02c8f069b0c8b5bad219e07d3c
|
dbmdz/flair-historic-ner-lft
|
dbmdz
| null | 4 | 8 |
flair
| 1 |
token-classification
| true | false | false |
mit
|
['de']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['flair', 'token-classification', 'sequence-tagger-model']
| false | true | true | 683 | false |
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the LFT dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Avg.
| ------------- | ----- | ----- | --------- | ------------
| Development | 76.32 | 76.13 | **76.36** | 76.27
| Test | 77.07 | 77.35 | 77.20 | 77.21
The paper reported an averaged F1-score of 77.51.
† denotes that this model is selected for upload.
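A minimal usage sketch (assuming a Flair version that can resolve Hugging Face model ids; the example sentence is illustrative):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the tagger from the Hugging Face model hub
tagger = SequenceTagger.load("dbmdz/flair-historic-ner-lft")

sentence = Sentence("Der Kanzler sprach gestern in Berlin .")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```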
|
702bf01ee6d772a1cc7879daf8cb5cad
|
gauravtripathy/distilbert-base-uncased-finetuned-cola
|
gauravtripathy
|
distilbert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7550
- Matthews Correlation: 0.5265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5296 | 1.0 | 535 | 0.5144 | 0.4215 |
| 0.3504 | 2.0 | 1070 | 0.4903 | 0.5046 |
| 0.2393 | 3.0 | 1605 | 0.6339 | 0.5058 |
| 0.175 | 4.0 | 2140 | 0.7550 | 0.5265 |
| 0.1259 | 5.0 | 2675 | 0.8688 | 0.5259 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
26b529f8e97d6a03251207a54b67fc6b
|
gokuls/mobilebert_sa_GLUE_Experiment_rte_128
|
gokuls
|
mobilebert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,581 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_rte_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6926
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6935 | 1.0 | 20 | 0.6926 | 0.5271 |
| 0.6934 | 2.0 | 40 | 0.6930 | 0.5271 |
| 0.6931 | 3.0 | 60 | 0.6932 | 0.4982 |
| 0.6932 | 4.0 | 80 | 0.6929 | 0.5343 |
| 0.6929 | 5.0 | 100 | 0.6945 | 0.4729 |
| 0.6921 | 6.0 | 120 | 0.6929 | 0.5199 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
d7bcc1dc11ea91ebf20015f33f5eb0bf
|
yanekyuk/bert-uncased-keyword-discriminator
|
yanekyuk
|
bert
| 10 | 7 |
transformers
| 2 |
token-classification
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,974 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-keyword-discriminator
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1296
- Precision: 0.8439
- Recall: 0.8722
- Accuracy: 0.9727
- F1: 0.8578
- Ent/precision: 0.8723
- Ent/accuracy: 0.9077
- Ent/f1: 0.8896
- Con/precision: 0.8010
- Con/accuracy: 0.8196
- Con/f1: 0.8102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:|
| 0.1849 | 1.0 | 1875 | 0.1323 | 0.7039 | 0.7428 | 0.9488 | 0.7228 | 0.7522 | 0.8166 | 0.7831 | 0.6268 | 0.6332 | 0.6300 |
| 0.1357 | 2.0 | 3750 | 0.1132 | 0.7581 | 0.8024 | 0.9592 | 0.7796 | 0.7948 | 0.8785 | 0.8346 | 0.6971 | 0.6895 | 0.6933 |
| 0.0965 | 3.0 | 5625 | 0.1033 | 0.8086 | 0.7980 | 0.9646 | 0.8032 | 0.8410 | 0.8592 | 0.8500 | 0.7560 | 0.7071 | 0.7307 |
| 0.0713 | 4.0 | 7500 | 0.1040 | 0.8181 | 0.8435 | 0.9683 | 0.8306 | 0.8526 | 0.8906 | 0.8712 | 0.7652 | 0.7736 | 0.7694 |
| 0.0525 | 5.0 | 9375 | 0.1126 | 0.8150 | 0.8633 | 0.9689 | 0.8385 | 0.8495 | 0.9064 | 0.8770 | 0.7629 | 0.7993 | 0.7807 |
| 0.0386 | 6.0 | 11250 | 0.1183 | 0.8374 | 0.8678 | 0.9719 | 0.8523 | 0.8709 | 0.9020 | 0.8862 | 0.7877 | 0.8170 | 0.8021 |
| 0.03 | 7.0 | 13125 | 0.1237 | 0.8369 | 0.8707 | 0.9723 | 0.8535 | 0.8657 | 0.9079 | 0.8863 | 0.7934 | 0.8155 | 0.8043 |
| 0.0244 | 8.0 | 15000 | 0.1296 | 0.8439 | 0.8722 | 0.9727 | 0.8578 | 0.8723 | 0.9077 | 0.8896 | 0.8010 | 0.8196 | 0.8102 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
7f1584b40f64ce63eb5587c0c8fc0211
|
jonatasgrosman/exp_w2v2t_ar_no-pretraining_s6
|
jonatasgrosman
|
wav2vec2
| 10 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ar']
| false | true | true | 412 | false |
# exp_w2v2t_ar_no-pretraining_s6
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
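A minimal transcription sketch with HuggingSound (the file names are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_no-pretraining_s6")
transcriptions = model.transcribe(["sample_1.wav", "sample_2.wav"])  # 16 kHz audio files
print(transcriptions)
```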
|
9215a27c36b3580adf94451f91f1858a
|
spacy/ja_core_news_sm
|
spacy
| null | 27 | 8 |
spacy
| 0 |
token-classification
| false | false | false |
cc-by-sa-4.0
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 2,451 | false |
### Details: https://spacy.io/models/ja#ja_core_news_sm
Japanese pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler.
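A minimal usage sketch (assuming the package has been installed, e.g. with `python -m spacy download ja_core_news_sm`; the example sentence is illustrative):
```python
import spacy

nlp = spacy.load("ja_core_news_sm")
doc = nlp("これは日本語のテスト文です。")
for token in doc:
    print(token.text, token.pos_, token.dep_)
print([(ent.text, ent.label_) for ent in doc.ents])
```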
| Feature | Description |
| --- | --- |
| **Name** | `ja_core_news_sm` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Japanese GSD v2.8](https://github.com/UniversalDependencies/UD_Japanese-GSD) (Omura, Mai; Miyao, Yusuke; Kanayama, Hiroshi; Matsuda, Hiroshi; Wakasa, Aya; Yamashita, Kayo; Asahara, Masayuki; Tanaka, Takaaki; Murawaki, Yugo; Matsumoto, Yuji; Mori, Shinsuke; Uematsu, Sumire; McDonald, Ryan; Nivre, Joakim; Zeman, Daniel)<br />[UD Japanese GSD v2.8 NER](https://github.com/megagonlabs/UD_Japanese-GSD) (Megagon Labs Tokyo) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (65 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `POS=NOUN`, `POS=ADP`, `POS=VERB`, `POS=SCONJ`, `POS=AUX`, `POS=PUNCT`, `POS=PART`, `POS=DET`, `POS=NUM`, `POS=ADV`, `POS=PRON`, `POS=ADJ`, `POS=PROPN`, `POS=CCONJ`, `POS=SYM`, `POS=NOUN\|Polarity=Neg`, `POS=AUX\|Polarity=Neg`, `POS=SPACE`, `POS=INTJ`, `POS=SCONJ\|Polarity=Neg` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `aux`, `case`, `cc`, `ccomp`, `compound`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `fixed`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `MOVEMENT`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PET_NAME`, `PHONE`, `PRODUCT`, `QUANTITY`, `TIME`, `TITLE_AFFIX`, `WORK_OF_ART` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.37 |
| `TOKEN_P` | 97.65 |
| `TOKEN_R` | 97.90 |
| `TOKEN_F` | 97.77 |
| `POS_ACC` | 96.09 |
| `MORPH_ACC` | 0.00 |
| `MORPH_MICRO_P` | 34.01 |
| `MORPH_MICRO_R` | 98.04 |
| `MORPH_MICRO_F` | 50.51 |
| `SENTS_P` | 98.63 |
| `SENTS_R` | 99.21 |
| `SENTS_F` | 98.92 |
| `DEP_UAS` | 91.91 |
| `DEP_LAS` | 90.34 |
| `TAG_ACC` | 97.12 |
| `LEMMA_ACC` | 96.71 |
| `ENTS_P` | 68.06 |
| `ENTS_R` | 55.22 |
| `ENTS_F` | 60.97 |
|
5729397ef4afe94bfc54eb8de70d493e
|
espnet/kan-bayashi_ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
|
espnet
| null | 19 | 1 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['ljspeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,862 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4039194/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
e152a4e3f1ef3211617ee49bf49d9557
|
Helsinki-NLP/opus-mt-sv-bzs
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-sv-bzs
* source languages: sv
* target languages: bzs
* OPUS readme: [sv-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bzs | 29.4 | 0.484 |
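A minimal translation sketch with this checkpoint (assuming the standard MarianMT API; the Swedish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-bzs"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Jag läser en bok."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```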
|
d2445b4db9e52cee665e5dfe62427cec
|
Gokulapriyan/vit-base-patch16-224-finetuned-eurosat
|
Gokulapriyan
|
vit
| 16 | 17 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,586 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0419
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
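As a starting point, a minimal inference sketch (assuming the standard `image-classification` pipeline; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Gokulapriyan/vit-base-patch16-224-finetuned-eurosat")
print(classifier("example_satellite_tile.jpg"))
```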
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.352 | 1.0 | 527 | 0.2383 | 0.9065 |
| 0.2104 | 2.0 | 1054 | 0.1154 | 0.9562 |
| 0.1764 | 3.0 | 1581 | 0.0837 | 0.9703 |
| 0.1646 | 4.0 | 2108 | 0.0570 | 0.9806 |
| 0.1284 | 5.0 | 2635 | 0.0419 | 0.9834 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bd9e4e52927f133db89bdfddb03668ee
|
anas-awadalla/t5-small-few-shot-k-16-finetuned-squad-seed-2
|
anas-awadalla
|
t5
| 15 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 963 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
2ec75e3ad4ae140f6adce3ad15da5e8a
|
SkyR/hing-mbert-ours-run-5
|
SkyR
|
bert
| 10 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,081 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-mbert-ours-run-5
This model is a fine-tuned version of [l3cube-pune/hing-mbert](https://huggingface.co/l3cube-pune/hing-mbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2437
- Accuracy: 0.665
- Precision: 0.6223
- Recall: 0.5991
- F1: 0.6039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9643 | 1.0 | 100 | 0.7996 | 0.69 | 0.6596 | 0.6593 | 0.6521 |
| 0.6951 | 2.0 | 200 | 1.0464 | 0.66 | 0.6597 | 0.5831 | 0.5734 |
| 0.4245 | 3.0 | 300 | 0.9640 | 0.64 | 0.6025 | 0.6033 | 0.6010 |
| 0.238 | 4.0 | 400 | 1.6744 | 0.68 | 0.7095 | 0.6445 | 0.6359 |
| 0.1477 | 5.0 | 500 | 1.7115 | 0.665 | 0.6362 | 0.6422 | 0.6360 |
| 0.1206 | 6.0 | 600 | 2.0459 | 0.635 | 0.5749 | 0.5752 | 0.5726 |
| 0.0528 | 7.0 | 700 | 2.5698 | 0.66 | 0.6230 | 0.5904 | 0.5985 |
| 0.0525 | 8.0 | 800 | 2.2729 | 0.625 | 0.5741 | 0.5860 | 0.5733 |
| 0.0174 | 9.0 | 900 | 2.6227 | 0.635 | 0.6099 | 0.6044 | 0.6019 |
| 0.0088 | 10.0 | 1000 | 2.8854 | 0.63 | 0.5699 | 0.5676 | 0.5680 |
| 0.0085 | 11.0 | 1100 | 3.2173 | 0.655 | 0.6043 | 0.5771 | 0.5821 |
| 0.0121 | 12.0 | 1200 | 3.1270 | 0.665 | 0.6214 | 0.5903 | 0.5971 |
| 0.0141 | 13.0 | 1300 | 2.6648 | 0.655 | 0.5981 | 0.5978 | 0.5961 |
| 0.0116 | 14.0 | 1400 | 3.1711 | 0.665 | 0.6192 | 0.5915 | 0.5971 |
| 0.007 | 15.0 | 1500 | 3.0954 | 0.66 | 0.6156 | 0.5961 | 0.6009 |
| 0.0037 | 16.0 | 1600 | 3.3065 | 0.65 | 0.6027 | 0.5791 | 0.5824 |
| 0.0031 | 17.0 | 1700 | 3.1715 | 0.665 | 0.6177 | 0.5999 | 0.6048 |
| 0.0021 | 18.0 | 1800 | 3.1602 | 0.665 | 0.6220 | 0.6029 | 0.6082 |
| 0.0021 | 19.0 | 1900 | 3.2027 | 0.655 | 0.6096 | 0.5893 | 0.5937 |
| 0.0018 | 20.0 | 2000 | 3.2437 | 0.665 | 0.6223 | 0.5991 | 0.6039 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
b76b85b095dcaaade357c025cd2615eb
|
cyycyy/xlm-roberta-base-finetuned-panx-en
|
cyycyy
|
xlm-roberta
| 10 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4130
- F1: 0.6851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1435 | 1.0 | 50 | 0.5604 | 0.5493 |
| 0.513 | 2.0 | 100 | 0.4557 | 0.6504 |
| 0.3744 | 3.0 | 150 | 0.4130 | 0.6851 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
5ade57eef36b599b9943283ac1b32c5a
|
ksabeh/bert-base-uncased-attribute-correction-mlm
|
ksabeh
|
bert
| 8 | 11 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,426 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert-base-uncased-mlm-electronics-attribute-correction
This model is a fine-tuned version of [ksabeh/bert-base-uncased-mlm-electronics](https://huggingface.co/ksabeh/bert-base-uncased-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0524
- Validation Loss: 0.0520
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1459 | 0.0678 | 0 |
| 0.0524 | 0.0520 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
d6a392d5e6be2552323bc50b82c25a9d
|
carlosdanielhernandezmena/stt_fo_quartznet15x5_sp_ep163_100h
|
carlosdanielhernandezmena
| null | 4 | 1 |
nemo
| 0 |
automatic-speech-recognition
| true | false | false |
cc-by-4.0
|
['fo']
|
['carlosdanielhernandezmena/ravnursson_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'speech', 'audio', 'CTC', 'pytorch', 'NeMo', 'QuartzNet', 'QuartzNet15x5', 'faroese', 'faroe islands']
| true | true | true | 1,825 | false |
# stt_fo_quartznet15x5_sp_ep163_100h
**NOTE! This model was trained with the NeMo version: nemo-toolkit==1.10.0**
The "stt_fo_quartznet15x5_sp_ep163_100h" is an acoustic model created with NeMo which is suitable for Automatic Speech Recognition in Faroese.
It is the result of fine-tuning the model ["QuartzNet15x5Base-En.nemo"](https://catalog.ngc.nvidia.com/orgs/nvidia/models/nemospeechmodels/files) with 100 hours of Faroese data developed by the [Ravnur Project](https://maltokni.fo/en/the-ravnur-project) from the Faroe Islands and curated by Carlos Mena during 2022. Most of the data is available at public repositories such as [Clarin.is](http://hdl.handle.net/20.500.12537/276) or [Hugging Face](https://huggingface.co/datasets/carlosdanielhernandezmena/ravnursson_asr).
The specific corpus used to fine-tune the model is:
- [The Ravnursson Corpus: Faroese Speech and Transcripts (100h34m)](http://hdl.handle.net/20.500.12537/276)
The fine-tuning process was performed during November 2022 on the servers of the [Language and Voice Laboratory](https://lvl.ru.is/) at [Reykjavík University](https://en.ru.is/) (Iceland) by Carlos Daniel Hernández Mena.
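For reference, a minimal transcription sketch with the NeMo toolkit (loading straight from the Hugging Face Hub via `from_pretrained` works in recent NeMo releases; with the 1.10.0 version noted above you may instead need to download the `.nemo` file and call `restore_from`; `audio.wav` is a placeholder for a 16 kHz mono recording):
```python
import nemo.collections.asr as nemo_asr

# Load the checkpoint; the Hub repo id doubles as the model name. With older
# NeMo versions, download the .nemo file and use EncDecCTCModel.restore_from.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="carlosdanielhernandezmena/stt_fo_quartznet15x5_sp_ep163_100h"
)

# Transcribe a 16 kHz mono WAV file (placeholder path).
transcriptions = asr_model.transcribe(["audio.wav"])
print(transcriptions)
```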
```bibtex
@misc{mena2022quartznet15x5faroese,
title={Acoustic Model in Faroese: stt_fo_quartznet15x5_sp_ep163_100h.},
author={Hernandez Mena, Carlos Daniel},
year={2022},
url={https://huggingface.co/carlosdanielhernandezmena/stt_fo_quartznet15x5_sp_ep163_100h},
}
```
# Acknowledgements
Special thanks to Jón Guðnason, head of the Language and Voice Lab, for providing the computational power that made this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
|
08ebadde0d6dce0c6d2deaad127458c3
|
cammy/bart-large-cnn-weaksup-100-NOpad-early
|
cammy
|
bart
| 11 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,558 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-100-NOpad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0768
- Rouge1: 28.7908
- Rouge2: 10.6989
- Rougel: 20.534
- Rougelsum: 24.1294
- Gen Len: 68.5
## Model description
More information needed
## Intended uses & limitations
More information needed
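Since the usage sections are placeholders, a minimal summarization sketch (the repo id comes from this card's metadata; the input text and generation arguments are illustrative):
```python
from transformers import pipeline

# Repo id from this card's metadata.
model_id = "cammy/bart-large-cnn-weaksup-100-NOpad-early"

summarizer = pipeline("summarization", model=model_id)

article = (
    "The quarterly report showed revenue growth across all regions, driven "
    "mainly by strong demand for the company's cloud products. Executives "
    "said they expect the trend to continue into the next fiscal year."
)
print(summarizer(article, max_length=80, min_length=20, do_sample=False))
```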
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.8905 | 31.1534 | 13.7074 | 21.6489 | 27.0709 | 64.2 |
| No log | 2.0 | 200 | 2.0768 | 28.7908 | 10.6989 | 20.534 | 24.1294 | 68.5 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
bc94d442e2707208e7d5dd14bd04c706
|
jonatasgrosman/exp_w2v2t_pt_wav2vec2_s515
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'pt']
| false | true | true | 456 | false |
# exp_w2v2t_pt_wav2vec2_s515
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
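A minimal transcription sketch with the HuggingSound library mentioned above (the audio file names are placeholders for 16 kHz Portuguese recordings; the checkpoint can also be loaded with the standard `transformers` Wav2Vec2 classes):
```python
from huggingsound import SpeechRecognitionModel

# Repo id from this card; the paths are placeholders for 16 kHz recordings.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_wav2vec2_s515")
transcriptions = model.transcribe(["audio1.wav", "audio2.wav"])

for t in transcriptions:
    print(t["transcription"])
```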
|
bd86329f70e86b4885680e7aeeac0c27
|
Salesforce/codegen-2B-nl
|
Salesforce
|
codegen
| 9 | 1,655 |
transformers
| 0 |
text-generation
| true | false | false |
bsd-3-clause
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,786 | false |
# CodeGen (CodeGen-NL 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 2B** in the paper, where "NL" means it is pre-trained on the Pile and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 2B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 instances provided by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the 2B-parameter natural-language checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-nl")

# Greedily complete a partially written function from its signature.
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
a4fd769d90a018e773a1cb9a0dd2c6ae
|