repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
anas-awadalla/bert-large-uncased-prefix-tuning-squad
|
anas-awadalla
| null | 21 | 0 | null | 0 | null | false | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,048 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-prefix-tuning-squad
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
e342af630726d4762a4a625962545cae
|
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_rte
|
gokuls
|
mobilebert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,710 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_rte
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3910
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4088 | 1.0 | 20 | 0.3931 | 0.5271 |
| 0.4081 | 2.0 | 40 | 0.3922 | 0.5271 |
| 0.4076 | 3.0 | 60 | 0.3910 | 0.5271 |
| 0.4068 | 4.0 | 80 | 0.3941 | 0.5343 |
| 0.4069 | 5.0 | 100 | 0.3924 | 0.5343 |
| 0.4022 | 6.0 | 120 | 0.3975 | 0.5343 |
| 0.3801 | 7.0 | 140 | 0.4060 | 0.5415 |
| 0.3447 | 8.0 | 160 | 0.5080 | 0.4982 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
3d0d06da45d0bf7c51a1d8375556e17b
|
sgangireddy/whisper-medium-cv-fleurs-tr-3k
|
sgangireddy
|
whisper
| 22 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,410 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2406
- Wer: 10.0333
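A minimal transcription sketch, not part of the original card; it assumes the checkpoint loads with the standard `transformers` automatic-speech-recognition pipeline and that `audio.wav` is a placeholder path to a 16 kHz recording:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sgangireddy/whisper-medium-cv-fleurs-tr-3k",
)
# "audio.wav" is a placeholder path to a local recording.
print(asr("audio.wav")["text"])
```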
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0241 | 1.06 | 1000 | 0.1996 | 10.4543 |
| 0.009 | 2.12 | 2000 | 0.2156 | 10.1152 |
| 0.0045 | 3.19 | 3000 | 0.2406 | 10.0333 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
9e43b0f6d283c21f4754e2a0ff0ade04
|
Alred/t5-small-finetuned-summarization-cnn-ver2
|
Alred
|
t5
| 23 | 3 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['cnn_dailymail']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,193 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summarization-cnn-ver2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0084
- Bertscore-mean-precision: 0.8859
- Bertscore-mean-recall: 0.8592
- Bertscore-mean-f1: 0.8721
- Bertscore-median-precision: 0.8855
- Bertscore-median-recall: 0.8578
- Bertscore-median-f1: 0.8718
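A minimal usage sketch, not part of the original card; it assumes the checkpoint works with the standard `transformers` summarization pipeline, and the example article is invented:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import pipeline

summarizer = pipeline("summarization", model="Alred/t5-small-finetuned-summarization-cnn-ver2")
# Invented example text to condense.
article = (
    "The city council approved a new transit plan on Monday, adding two bus routes "
    "and extending weekend service hours across the downtown core."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```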
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 2.0422 | 1.0 | 718 | 2.0139 | 0.8853 | 0.8589 | 0.8717 | 0.8857 | 0.8564 | 0.8715 |
| 1.9481 | 2.0 | 1436 | 2.0085 | 0.8863 | 0.8591 | 0.8723 | 0.8858 | 0.8577 | 0.8718 |
| 1.9231 | 3.0 | 2154 | 2.0084 | 0.8859 | 0.8592 | 0.8721 | 0.8855 | 0.8578 | 0.8718 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
92343c5b8bc4d92da908fe01561334c6
|
sentence-transformers/paraphrase-albert-small-v2
|
sentence-transformers
|
albert
| 14 | 100,938 |
sentence-transformers
| 3 |
sentence-similarity
| true | true | false |
apache-2.0
| null |
['flax-sentence-embeddings/stackexchange_xml', 's2orc', 'ms_marco', 'wiki_atomic_edits', 'snli', 'multi_nli', 'embedding-data/altlex', 'embedding-data/simple-wiki', 'embedding-data/flickr30k-captions', 'embedding-data/coco_captions', 'embedding-data/sentence-compression', 'embedding-data/QQP', 'yahoo_answers_topics']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,560 | false |
# sentence-transformers/paraphrase-albert-small-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-albert-small-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-albert-small-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-albert-small-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 100, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
26c4452604428d047681ddd0748f88ed
|
ZJUzpy/mt5-small-finetuned-amazon-en-es
|
ZJUzpy
|
mt5
| 10 | 1 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,995 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0346
- Rouge1: 16.8527
- Rouge2: 8.331
- Rougel: 16.4475
- Rougelsum: 16.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.7536 | 1.0 | 1209 | 3.2881 | 13.6319 | 5.4635 | 13.0552 | 13.1093 |
| 3.9312 | 2.0 | 2418 | 3.1490 | 16.8402 | 8.3559 | 16.1876 | 16.2869 |
| 3.5987 | 3.0 | 3627 | 3.1043 | 17.9887 | 9.3136 | 17.3034 | 17.4313 |
| 3.4261 | 4.0 | 4836 | 3.0573 | 17.0089 | 8.7389 | 16.5351 | 16.5023 |
| 3.3221 | 5.0 | 6045 | 3.0569 | 16.8461 | 8.0988 | 16.4898 | 16.4927 |
| 3.2549 | 6.0 | 7254 | 3.0511 | 17.3428 | 8.2234 | 16.7312 | 16.8749 |
| 3.2067 | 7.0 | 8463 | 3.0334 | 16.268 | 7.9729 | 15.9342 | 16.0065 |
| 3.1842 | 8.0 | 9672 | 3.0346 | 16.8527 | 8.331 | 16.4475 | 16.6421 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
4129b1202b3d1edc400c071bdd96f0c3
|
haesun/xlm-roberta-base-finetuned-panx-de-fr
|
haesun
|
xlm-roberta
| 10 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
- F1: 0.8590
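A minimal inference sketch, not part of the original card; it assumes the checkpoint loads with the standard `transformers` token-classification pipeline, and the German example sentence is invented:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="haesun/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte gestern Paris."))
```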
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 |
| 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 |
| 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
5baea09f342fbf1390f4d3c00310a9f2
|
rpv/distilbert-base-uncased-finetuned-squad
|
rpv
|
distilbert
| 10 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 929 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
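A minimal usage sketch, not part of the original card; it assumes the checkpoint works with the standard `transformers` question-answering pipeline, and the question/context pair is invented:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import pipeline

qa = pipeline("question-answering", model="rpv/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], result["score"])
```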
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
db8179ee4e670c9f263140f5175fb228
|
Helsinki-NLP/opus-mt-tc-base-gmw-gmw
|
Helsinki-NLP
|
marian
| 13 | 6 |
transformers
| 0 |
translation
| true | true | false |
cc-by-4.0
|
['af', 'de', 'en', 'fy', 'gmw', 'gos', 'hrx', 'lb', 'nds', 'nl', 'pdc', 'yi']
| null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['translation', 'opus-mt-tc']
| true | true | true | 10,601 | false |
# opus-mt-tc-base-gmw-gmw
Neural machine translation model for translating from West Germanic languages (gmw) to West Germanic languages (gmw).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2021-02-23
* source language(s): afr deu eng fry gos hrx ltz nds nld pdc yid
* target language(s): afr deu eng fry nds nld
* valid target language labels: >>afr<< >>ang_Latn<< >>deu<< >>eng<< >>fry<< >>ltz<< >>nds<< >>nld<< >>sco<< >>yid<<
* model: transformer (base)
* data: opus ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opus-2021-02-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.zip)
* more information released models: [OPUS-MT gmw-gmw README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>afr<<`
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    ">>nld<< You need help.",
    ">>afr<< I love your son."
]
model_name = "Helsinki-NLP/opus-mt-tc-base-gmw-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# expected output:
# Je hebt hulp nodig.
# Ek is lief vir jou seun.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-gmw-gmw")
print(pipe(">>nld<< You need help."))
# expected output: Je hebt hulp nodig.
```
## Benchmarks
* test set translations: [opus-2021-02-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.test.txt)
* test set scores: [opus-2021-02-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2021-02-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| afr-deu | tatoeba-test-v2021-08-07 | 0.674 | 48.1 | 1583 | 9105 |
| afr-eng | tatoeba-test-v2021-08-07 | 0.728 | 58.8 | 1374 | 9622 |
| afr-nld | tatoeba-test-v2021-08-07 | 0.711 | 54.5 | 1056 | 6710 |
| deu-afr | tatoeba-test-v2021-08-07 | 0.696 | 52.4 | 1583 | 9507 |
| deu-eng | tatoeba-test-v2021-08-07 | 0.609 | 42.1 | 17565 | 149462 |
| deu-nds | tatoeba-test-v2021-08-07 | 0.442 | 18.6 | 9999 | 76137 |
| deu-nld | tatoeba-test-v2021-08-07 | 0.672 | 48.7 | 10218 | 75235 |
| eng-afr | tatoeba-test-v2021-08-07 | 0.735 | 56.5 | 1374 | 10317 |
| eng-deu | tatoeba-test-v2021-08-07 | 0.580 | 35.9 | 17565 | 151568 |
| eng-nds | tatoeba-test-v2021-08-07 | 0.412 | 16.6 | 2500 | 18264 |
| eng-nld | tatoeba-test-v2021-08-07 | 0.663 | 48.3 | 12696 | 91796 |
| fry-eng | tatoeba-test-v2021-08-07 | 0.500 | 32.5 | 220 | 1573 |
| fry-nld | tatoeba-test-v2021-08-07 | 0.633 | 43.1 | 260 | 1854 |
| gos-nld | tatoeba-test-v2021-08-07 | 0.405 | 15.6 | 1852 | 9903 |
| hrx-deu | tatoeba-test-v2021-08-07 | 0.484 | 24.7 | 471 | 2805 |
| hrx-eng | tatoeba-test-v2021-08-07 | 0.362 | 20.4 | 221 | 1235 |
| ltz-deu | tatoeba-test-v2021-08-07 | 0.556 | 37.2 | 347 | 2208 |
| ltz-eng | tatoeba-test-v2021-08-07 | 0.485 | 32.4 | 293 | 1840 |
| ltz-nld | tatoeba-test-v2021-08-07 | 0.534 | 39.3 | 292 | 1685 |
| nds-deu | tatoeba-test-v2021-08-07 | 0.572 | 34.5 | 9999 | 74564 |
| nds-eng | tatoeba-test-v2021-08-07 | 0.493 | 29.9 | 2500 | 17589 |
| nds-nld | tatoeba-test-v2021-08-07 | 0.621 | 42.3 | 1657 | 11490 |
| nld-afr | tatoeba-test-v2021-08-07 | 0.755 | 58.8 | 1056 | 6823 |
| nld-deu | tatoeba-test-v2021-08-07 | 0.686 | 50.4 | 10218 | 74131 |
| nld-eng | tatoeba-test-v2021-08-07 | 0.690 | 53.1 | 12696 | 89978 |
| nld-fry | tatoeba-test-v2021-08-07 | 0.478 | 25.1 | 260 | 1857 |
| nld-nds | tatoeba-test-v2021-08-07 | 0.462 | 21.4 | 1657 | 11711 |
| afr-deu | flores101-devtest | 0.524 | 21.6 | 1012 | 25094 |
| afr-eng | flores101-devtest | 0.693 | 46.8 | 1012 | 24721 |
| afr-nld | flores101-devtest | 0.509 | 18.4 | 1012 | 25467 |
| deu-afr | flores101-devtest | 0.534 | 21.4 | 1012 | 25740 |
| deu-eng | flores101-devtest | 0.616 | 33.8 | 1012 | 24721 |
| deu-nld | flores101-devtest | 0.516 | 19.2 | 1012 | 25467 |
| eng-afr | flores101-devtest | 0.628 | 33.8 | 1012 | 25740 |
| eng-deu | flores101-devtest | 0.581 | 29.1 | 1012 | 25094 |
| eng-nld | flores101-devtest | 0.533 | 21.0 | 1012 | 25467 |
| ltz-afr | flores101-devtest | 0.430 | 12.9 | 1012 | 25740 |
| ltz-deu | flores101-devtest | 0.482 | 17.1 | 1012 | 25094 |
| ltz-eng | flores101-devtest | 0.468 | 18.8 | 1012 | 24721 |
| ltz-nld | flores101-devtest | 0.409 | 10.7 | 1012 | 25467 |
| nld-afr | flores101-devtest | 0.494 | 16.8 | 1012 | 25740 |
| nld-deu | flores101-devtest | 0.501 | 17.9 | 1012 | 25094 |
| nld-eng | flores101-devtest | 0.551 | 25.6 | 1012 | 24721 |
| deu-eng | multi30k_test_2016_flickr | 0.546 | 32.2 | 1000 | 12955 |
| eng-deu | multi30k_test_2016_flickr | 0.582 | 28.8 | 1000 | 12106 |
| deu-eng | multi30k_test_2017_flickr | 0.561 | 32.7 | 1000 | 11374 |
| eng-deu | multi30k_test_2017_flickr | 0.573 | 27.6 | 1000 | 10755 |
| deu-eng | multi30k_test_2017_mscoco | 0.499 | 25.5 | 461 | 5231 |
| eng-deu | multi30k_test_2017_mscoco | 0.514 | 22.0 | 461 | 5158 |
| deu-eng | multi30k_test_2018_flickr | 0.535 | 30.0 | 1071 | 14689 |
| eng-deu | multi30k_test_2018_flickr | 0.547 | 25.3 | 1071 | 13703 |
| deu-eng | newssyscomb2009 | 0.527 | 25.4 | 502 | 11818 |
| eng-deu | newssyscomb2009 | 0.504 | 19.3 | 502 | 11271 |
| deu-eng | news-test2008 | 0.518 | 23.8 | 2051 | 49380 |
| eng-deu | news-test2008 | 0.492 | 19.3 | 2051 | 47447 |
| deu-eng | newstest2009 | 0.516 | 23.4 | 2525 | 65399 |
| eng-deu | newstest2009 | 0.498 | 18.8 | 2525 | 62816 |
| deu-eng | newstest2010 | 0.546 | 25.8 | 2489 | 61711 |
| eng-deu | newstest2010 | 0.508 | 20.7 | 2489 | 61503 |
| deu-eng | newstest2011 | 0.524 | 23.7 | 3003 | 74681 |
| eng-deu | newstest2011 | 0.493 | 19.2 | 3003 | 72981 |
| deu-eng | newstest2012 | 0.532 | 24.8 | 3003 | 72812 |
| eng-deu | newstest2012 | 0.493 | 19.5 | 3003 | 72886 |
| deu-eng | newstest2013 | 0.548 | 27.7 | 3000 | 64505 |
| eng-deu | newstest2013 | 0.517 | 22.5 | 3000 | 63737 |
| deu-eng | newstest2014-deen | 0.548 | 27.3 | 3003 | 67337 |
| eng-deu | newstest2014-deen | 0.532 | 22.0 | 3003 | 62688 |
| deu-eng | newstest2015-deen | 0.553 | 28.6 | 2169 | 46443 |
| eng-deu | newstest2015-ende | 0.544 | 25.7 | 2169 | 44260 |
| deu-eng | newstest2016-deen | 0.596 | 33.3 | 2999 | 64119 |
| eng-deu | newstest2016-ende | 0.580 | 30.0 | 2999 | 62669 |
| deu-eng | newstest2017-deen | 0.561 | 29.5 | 3004 | 64399 |
| eng-deu | newstest2017-ende | 0.535 | 24.1 | 3004 | 61287 |
| deu-eng | newstest2018-deen | 0.610 | 36.1 | 2998 | 67012 |
| eng-deu | newstest2018-ende | 0.613 | 35.4 | 2998 | 64276 |
| deu-eng | newstest2019-deen | 0.582 | 32.3 | 2000 | 39227 |
| eng-deu | newstest2019-ende | 0.583 | 31.2 | 1997 | 48746 |
| deu-eng | newstest2020-deen | 0.604 | 32.0 | 785 | 38220 |
| eng-deu | newstest2020-ende | 0.542 | 23.9 | 1418 | 52383 |
| deu-eng | newstestB2020-deen | 0.598 | 31.2 | 785 | 37696 |
| eng-deu | newstestB2020-ende | 0.532 | 23.3 | 1418 | 53092 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.12.3
* OPUS-MT git hash: e56a06b
* port time: Sun Feb 13 14:42:10 EET 2022
* port machine: LM0-400-22516.local
|
0a0ce0c20bf31421179f511fde9c405e
|
jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-0_sixties-10_s951
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['es']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'es']
| false | true | true | 476 | false |
# exp_w2v2r_es_xls-r_age_teens-0_sixties-10_s951
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
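A minimal transcription sketch, not part of the original card; it assumes the `SpeechRecognitionModel` API from the HuggingSound repository and uses placeholder paths to 16 kHz audio files:
```python
# Hypothetical usage sketch ("audio1.wav" / "audio2.wav" are placeholder files).
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_xls-r_age_teens-0_sixties-10_s951")
transcriptions = model.transcribe(["audio1.wav", "audio2.wav"])
print(transcriptions)
```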
|
482744da6a2507a337b1995bb1113fa9
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_256
|
gokuls
|
distilbert
| 17 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,896 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_data_aug_qqp_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7043
- Accuracy: 0.6343
- F1: 0.0148
- Combined Score: 0.3245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.8369 | 1.0 | 29671 | 0.7043 | 0.6343 | 0.0148 | 0.3245 |
| 0.7448 | 2.0 | 59342 | 0.7161 | 0.6355 | 0.0216 | 0.3286 |
| 0.7106 | 3.0 | 89013 | 0.7067 | 0.6466 | 0.0843 | 0.3655 |
| 0.6924 | 4.0 | 118684 | 0.7200 | 0.6401 | 0.0477 | 0.3439 |
| 0.6812 | 5.0 | 148355 | 0.7109 | 0.6424 | 0.0609 | 0.3517 |
| 0.6734 | 6.0 | 178026 | 0.7092 | 0.6440 | 0.0696 | 0.3568 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
09d695f938a86e5da407e42e1693251f
|
ryL/distilbert-base-uncased-finetuned-emotion
|
ryL
|
distilbert
| 14 | 19 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,356 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9225
- F1: 0.9226
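A minimal inference sketch, not part of the original card; it assumes the checkpoint loads with the standard `transformers` text-classification pipeline:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import pipeline

classifier = pipeline("text-classification", model="ryL/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```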
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8152 | 1.0 | 250 | 0.3054 | 0.902 | 0.8992 |
| 0.2418 | 2.0 | 500 | 0.2175 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0.dev20221006+cu117
- Datasets 2.6.1
- Tokenizers 0.12.1
|
b7334d3597cddc3e46b0eae2084513fb
|
jeapaul/wav2vec2-large-xlsr-53-torgo-demo-f01-nolm
|
jeapaul
|
wav2vec2
| 15 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,446 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-torgo-demo-f01-nolm
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0153
- Wer: 0.4756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4166 | 0.81 | 500 | 4.5019 | 1.0 |
| 3.1088 | 1.62 | 1000 | 3.0459 | 1.0 |
| 2.8249 | 2.44 | 1500 | 3.0850 | 1.0 |
| 2.625 | 3.25 | 2000 | 2.6827 | 1.3656 |
| 1.9816 | 4.06 | 2500 | 1.6636 | 1.3701 |
| 1.3036 | 4.87 | 3000 | 0.9710 | 1.2504 |
| 0.9862 | 5.68 | 3500 | 0.6023 | 1.0519 |
| 0.7012 | 6.49 | 4000 | 0.4404 | 0.9342 |
| 0.6102 | 7.31 | 4500 | 0.3297 | 0.8491 |
| 0.5463 | 8.12 | 5000 | 0.2403 | 0.7773 |
| 0.4897 | 8.93 | 5500 | 0.1907 | 0.7335 |
| 0.4687 | 9.74 | 6000 | 0.1721 | 0.7095 |
| 0.41 | 10.55 | 6500 | 0.1382 | 0.6851 |
| 0.3277 | 11.36 | 7000 | 0.1189 | 0.6598 |
| 0.3182 | 12.18 | 7500 | 0.1040 | 0.6372 |
| 0.3279 | 12.99 | 8000 | 0.0961 | 0.6274 |
| 0.2735 | 13.8 | 8500 | 0.0806 | 0.5880 |
| 0.3153 | 14.61 | 9000 | 0.0821 | 0.5748 |
| 0.251 | 15.42 | 9500 | 0.0633 | 0.5437 |
| 0.2 | 16.23 | 10000 | 0.0534 | 0.5316 |
| 0.2134 | 17.05 | 10500 | 0.0475 | 0.5195 |
| 0.1727 | 17.86 | 11000 | 0.0435 | 0.5146 |
| 0.2143 | 18.67 | 11500 | 0.0406 | 0.5072 |
| 0.1679 | 19.48 | 12000 | 0.0386 | 0.5057 |
| 0.1836 | 20.29 | 12500 | 0.0359 | 0.4984 |
| 0.1542 | 21.1 | 13000 | 0.0284 | 0.4914 |
| 0.1672 | 21.92 | 13500 | 0.0289 | 0.4884 |
| 0.1526 | 22.73 | 14000 | 0.0256 | 0.4867 |
| 0.1263 | 23.54 | 14500 | 0.0247 | 0.4871 |
| 0.133 | 24.35 | 15000 | 0.0194 | 0.4816 |
| 0.1005 | 25.16 | 15500 | 0.0190 | 0.4798 |
| 0.1372 | 25.97 | 16000 | 0.0172 | 0.4786 |
| 0.1126 | 26.79 | 16500 | 0.0177 | 0.4773 |
| 0.0929 | 27.6 | 17000 | 0.0173 | 0.4775 |
| 0.1069 | 28.41 | 17500 | 0.0164 | 0.4773 |
| 0.0932 | 29.22 | 18000 | 0.0153 | 0.4756 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.13.2
|
9e6e253a9f5b7a2ab9d86a2938dd6b24
|
Gergoe/t5-small-booksum-finetuned-booksum-test
|
Gergoe
|
t5
| 15 | 3 |
transformers
| 0 |
summarization
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,018 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-booksum-finetuned-booksum-test
This model is a fine-tuned version of [cnicu/t5-small-booksum](https://huggingface.co/cnicu/t5-small-booksum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2739
- Rouge1: 22.7829
- Rouge2: 4.8349
- Rougel: 18.2465
- Rougelsum: 19.2417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.5123 | 1.0 | 8750 | 3.2816 | 21.7712 | 4.3046 | 17.4053 | 18.4707 |
| 3.2347 | 2.0 | 17500 | 3.2915 | 22.2938 | 4.7828 | 17.8567 | 18.9135 |
| 3.0892 | 3.0 | 26250 | 3.2568 | 22.4966 | 4.825 | 18.0344 | 19.1306 |
| 2.9837 | 4.0 | 35000 | 3.2952 | 22.6913 | 5.0322 | 18.176 | 19.2751 |
| 2.9028 | 5.0 | 43750 | 3.2626 | 22.3548 | 4.7521 | 17.8681 | 18.7815 |
| 2.8441 | 6.0 | 52500 | 3.2691 | 22.6279 | 4.932 | 18.1051 | 19.0763 |
| 2.8006 | 7.0 | 61250 | 3.2753 | 22.8911 | 4.8954 | 18.1204 | 19.1464 |
| 2.7742 | 8.0 | 70000 | 3.2739 | 22.7829 | 4.8349 | 18.2465 | 19.2417 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.7.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
5a8e062395280bb526c38b23383db5ef
|
xander71988/t5-base-finetuned-facet-driver-type
|
xander71988
|
t5
| 9 | 16 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,516 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xander71988/t5-base-finetuned-facet-driver-type
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0016
- Validation Loss: 0.0054
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 64768, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0178 | 0.0076 | 0 |
| 0.0068 | 0.0057 | 1 |
| 0.0042 | 0.0055 | 2 |
| 0.0025 | 0.0044 | 3 |
| 0.0016 | 0.0054 | 4 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
8f30bfcae39dd438995dbd2bb00169c3
|
zakria/NLP_Project
|
zakria
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,972 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP_Project
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5308
- Wer: 0.3428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5939 | 1.0 | 500 | 2.1356 | 1.0014 |
| 0.9126 | 2.01 | 1000 | 0.5469 | 0.5354 |
| 0.4491 | 3.01 | 1500 | 0.4636 | 0.4503 |
| 0.3008 | 4.02 | 2000 | 0.4269 | 0.4330 |
| 0.2229 | 5.02 | 2500 | 0.4164 | 0.4073 |
| 0.188 | 6.02 | 3000 | 0.4717 | 0.4107 |
| 0.1739 | 7.03 | 3500 | 0.4306 | 0.4031 |
| 0.159 | 8.03 | 4000 | 0.4394 | 0.3993 |
| 0.1342 | 9.04 | 4500 | 0.4462 | 0.3904 |
| 0.1093 | 10.04 | 5000 | 0.4387 | 0.3759 |
| 0.1005 | 11.04 | 5500 | 0.5033 | 0.3847 |
| 0.0857 | 12.05 | 6000 | 0.4805 | 0.3876 |
| 0.0779 | 13.05 | 6500 | 0.5269 | 0.3810 |
| 0.072 | 14.06 | 7000 | 0.5109 | 0.3710 |
| 0.0641 | 15.06 | 7500 | 0.4865 | 0.3638 |
| 0.0584 | 16.06 | 8000 | 0.5041 | 0.3646 |
| 0.0552 | 17.07 | 8500 | 0.4987 | 0.3537 |
| 0.0535 | 18.07 | 9000 | 0.4947 | 0.3586 |
| 0.0475 | 19.08 | 9500 | 0.5237 | 0.3647 |
| 0.042 | 20.08 | 10000 | 0.5338 | 0.3561 |
| 0.0416 | 21.08 | 10500 | 0.5068 | 0.3483 |
| 0.0358 | 22.09 | 11000 | 0.5126 | 0.3532 |
| 0.0334 | 23.09 | 11500 | 0.5213 | 0.3536 |
| 0.0331 | 24.1 | 12000 | 0.5378 | 0.3496 |
| 0.03 | 25.1 | 12500 | 0.5167 | 0.3470 |
| 0.0254 | 26.1 | 13000 | 0.5245 | 0.3418 |
| 0.0233 | 27.11 | 13500 | 0.5393 | 0.3456 |
| 0.0232 | 28.11 | 14000 | 0.5279 | 0.3425 |
| 0.022 | 29.12 | 14500 | 0.5308 | 0.3428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
67a345f38fda06fe9b0db31c7a63bb70
|
juancopi81/bert-finetuned-ner
|
juancopi81
|
bert
| 8 | 9 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,428 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juancopi81/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0269
- Validation Loss: 0.0528
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1715 | 0.0734 | 0 |
| 0.0467 | 0.0535 | 1 |
| 0.0269 | 0.0528 | 2 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
8da68469c46033e81e8341427797ca48
|
KoichiYasuoka/roberta-small-japanese-luw-upos
|
KoichiYasuoka
|
roberta
| 9 | 32 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
|
['ja']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['japanese', 'token-classification', 'pos', 'dependency-parsing']
| false | true | true | 1,177 | false |
# roberta-small-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
02a9defea89f7fc5a56d2e830aa18f1f
|
Helsinki-NLP/opus-mt-sn-en
|
Helsinki-NLP
|
marian
| 10 | 198 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-sn-en
* source languages: sn
* target languages: en
* OPUS readme: [sn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.eval.txt)
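## Usage
A minimal usage sketch, not part of the original card; it assumes the checkpoint loads with the standard `transformers` translation pipeline, and the Shona example sentence is invented:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sn-en")
print(translator("Mhoro, wakadii?"))  # invented Shona input
```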
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.en | 51.8 | 0.648 |
|
1831ed0a41ceedece35cd610ef91370e
|
mijwiz-laboratories/oud_diffusion_unconditional_256
|
mijwiz-laboratories
| null | 7 | 0 |
diffusers
| 1 | null | false | false | false |
openrail
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,547 | false |
# Oud (عود) Unconditional Diffusion
The Oud is one of the most foundational instruments to all of Arab music. It can be heard in nearly every song, whether the subgenre is rooted in pop or classical music.
Its distinguishing sound can be picked out of a crowd of string instruments with little to no training.
Our Unconditional Diffusion model ensures that we show respect to the sound and culture it has created.
This project could not have been done without [the following audio diffusion tools.](https://github.com/teticio/audio-diffusion)
## Usage
Usage of this model is no different from any other audio diffusion model from HuggingFace.
```python
import torch
from diffusers import DiffusionPipeline
# Setup device and create generator
device = "cuda" if torch.cuda.is_available() else "cpu"
generator = torch.Generator(device=device)
# Instantiate model
model_id = "mijwiz-laboratories/oud_diffusion_unconditional_256"
audio_diffusion = DiffusionPipeline.from_pretrained(model_id).to(device)
# Set seed for generator
seed = generator.seed()
generator.manual_seed(seed)
# Run inference
output = audio_diffusion(generator=generator)
image = output.images[0] # Mel spectrogram generated
audio = output.audios[0, 0] # Playable audio file
```
## Limitations of Model
The dataset used was very small, so the diversity of snippets that can be generated is rather limited. Furthermore, for high-intensity segments (think of a human playing the instrument with high intensity), the realism and naturalness of the generated oud samples degrade.
|
f9bdcbe7fe06ac3a5d73856ce0e243fd
|
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana
|
AndrewMcDowell
|
wav2vec2
| 35 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'ja', 'hf-asr-leaderboard']
| true | true | true | 2,139 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-japanese-hiragana-katakana
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5500
- Wer: 1.0132
- Cer: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 1.7019 | 12.65 | 1000 | 1.0510 | 0.9832 | 0.2589 |
| 1.6385 | 25.31 | 2000 | 0.6670 | 0.9915 | 0.1851 |
| 1.4344 | 37.97 | 3000 | 0.6183 | 1.0213 | 0.1797 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python ./eval.py --model_id AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
|
8101ce068b5902468c49ca316de2a3f3
|
facebook/convnext-tiny-224
|
facebook
|
convnext
| 6 | 10,692 |
transformers
| 7 |
image-classification
| true | true | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
['vision', 'image-classification']
| false | true | true | 2,656 | false |
# ConvNeXT (tiny-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
2f64e41e0cd4e24a0230ea87d50c6459
|
Helsinki-NLP/opus-mt-sv-af
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-sv-af
* source languages: sv
* target languages: af
* OPUS readme: [sv-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.eval.txt)
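## Usage
A minimal usage sketch, not part of the original card; it loads the checkpoint directly with the `transformers` Marian classes, and the Swedish example sentence is invented:
```python
# Hypothetical usage sketch (not from the original card).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-af"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Jag älskar dig."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```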
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.af | 44.4 | 0.623 |
|
9bd0756dbe442a039d11cc83497dca49
|
mutisya/fine-tune-xlsr-53-wav2vec2-on-swahili-sagemaker-2
|
mutisya
|
wav2vec2
| 27 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice_9_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,016 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-xlsr-53-wav2vec2-on-swahili-sagemaker-2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2089
- Wer: 0.2356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.3715 | 0.22 | 400 | 3.1337 | 1.0 |
| 1.7928 | 0.44 | 800 | 0.7137 | 0.6290 |
| 0.5382 | 0.66 | 1200 | 0.5686 | 0.4708 |
| 0.4263 | 0.89 | 1600 | 0.3693 | 0.4091 |
| 0.3705 | 1.11 | 2000 | 0.3925 | 0.3747 |
| 0.3348 | 1.33 | 2400 | 0.2908 | 0.3597 |
| 0.3151 | 1.55 | 2800 | 0.3403 | 0.3388 |
| 0.2977 | 1.77 | 3200 | 0.2698 | 0.3294 |
| 0.2901 | 1.99 | 3600 | 0.6100 | 0.3173 |
| 0.2432 | 2.22 | 4000 | 0.2893 | 0.3213 |
| 0.256 | 2.44 | 4400 | 0.2604 | 0.3087 |
| 0.2453 | 2.66 | 4800 | 0.2448 | 0.3077 |
| 0.2427 | 2.88 | 5200 | 0.2391 | 0.2925 |
| 0.2235 | 3.1 | 5600 | 0.8570 | 0.2907 |
| 0.2078 | 3.32 | 6000 | 0.2289 | 0.2884 |
| 0.199 | 3.55 | 6400 | 0.2303 | 0.2852 |
| 0.2092 | 3.77 | 6800 | 0.2270 | 0.2769 |
| 0.2 | 3.99 | 7200 | 0.2588 | 0.2823 |
| 0.1806 | 4.21 | 7600 | 0.2324 | 0.2757 |
| 0.1789 | 4.43 | 8000 | 0.2051 | 0.2721 |
| 0.1753 | 4.65 | 8400 | 0.2290 | 0.2695 |
| 0.1734 | 4.88 | 8800 | 0.2161 | 0.2686 |
| 0.1648 | 5.1 | 9200 | 0.2139 | 0.2695 |
| 0.158 | 5.32 | 9600 | 0.2218 | 0.2632 |
| 0.151 | 5.54 | 10000 | 0.2060 | 0.2594 |
| 0.1534 | 5.76 | 10400 | 0.2199 | 0.2638 |
| 0.1485 | 5.98 | 10800 | 0.2023 | 0.2584 |
| 0.1332 | 6.2 | 11200 | 0.2160 | 0.2547 |
| 0.1319 | 6.43 | 11600 | 0.2045 | 0.2547 |
| 0.1329 | 6.65 | 12000 | 0.2072 | 0.2545 |
| 0.1329 | 6.87 | 12400 | 0.2014 | 0.2502 |
| 0.1307 | 7.09 | 12800 | 0.2045 | 0.2487 |
| 0.1197 | 7.31 | 13200 | 0.1987 | 0.2491 |
| 0.118 | 7.53 | 13600 | 0.1947 | 0.2442 |
| 0.1194 | 7.76 | 14000 | 0.1863 | 0.2430 |
| 0.1157 | 7.98 | 14400 | 0.3602 | 0.2430 |
| 0.1095 | 8.2 | 14800 | 0.2074 | 0.2408 |
| 0.1051 | 8.42 | 15200 | 0.2113 | 0.2410 |
| 0.1073 | 8.64 | 15600 | 0.2064 | 0.2395 |
| 0.1025 | 8.86 | 16000 | 0.2012 | 0.2396 |
| 0.1027 | 9.09 | 16400 | 0.2342 | 0.2372 |
| 0.0998 | 9.31 | 16800 | 0.2206 | 0.2357 |
| 0.0935 | 9.53 | 17200 | 0.2151 | 0.2356 |
| 0.0959 | 9.75 | 17600 | 0.2096 | 0.2355 |
| 0.095 | 9.97 | 18000 | 0.2089 | 0.2354 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.13.0
|
08dc2963cc3cdc895837d6737c282860
|
lmqg/t5-base-subjqa-restaurants-qg
|
lmqg
|
t5
| 34 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_subjqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 4,006 | false |
# Model Card of `lmqg/t5-base-subjqa-restaurants-qg`
This model is a fine-tuned version of [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) dataset (dataset_name: restaurants) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (restaurants)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/t5-base-subjqa-restaurants-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/t5-base-subjqa-restaurants-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-subjqa-restaurants-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json)
| | Score | Type | Dataset |
|:-----------|--------:|:------------|:-----------------------------------------------------------------|
| BERTScore | 88.48 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1 | 8.81 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2 | 3.68 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3 | 1.09 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4 | 0 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR | 14.75 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 56.19 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L | 11.96 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: restaurants
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: lmqg/t5-base-squad
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 16
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 32
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-base-subjqa-restaurants-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
babdeebda8bbbb11b5865884c8ac755b
|
skr3178/xlm-roberta-base-finetuned-panx-all
|
skr3178
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1862 | 0.8114 |
| 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 |
| 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
534de402a4843138897acfd0b08ebcc7
|
Dinithi/BlueBERT
|
Dinithi
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
cc0-1.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,251 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BlueBERT
This model is a fine-tuned version of [bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12](https://huggingface.co/bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6525
- Accuracy: 0.83
- Precision: 0.8767
- Recall: 0.8889
- F1: 0.8828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6839 | 1.0 | 50 | 0.7208 | 0.39 | 0.9231 | 0.1667 | 0.2824 |
| 0.6594 | 2.0 | 100 | 0.5862 | 0.6 | 0.9211 | 0.4861 | 0.6364 |
| 0.539 | 3.0 | 150 | 0.5940 | 0.66 | 0.9318 | 0.5694 | 0.7069 |
| 0.4765 | 4.0 | 200 | 0.5675 | 0.65 | 0.9512 | 0.5417 | 0.6903 |
| 0.3805 | 5.0 | 250 | 0.4494 | 0.79 | 0.9322 | 0.7639 | 0.8397 |
| 0.279 | 6.0 | 300 | 0.4760 | 0.84 | 0.8784 | 0.9028 | 0.8904 |
| 0.2016 | 7.0 | 350 | 0.5514 | 0.82 | 0.8553 | 0.9028 | 0.8784 |
| 0.1706 | 8.0 | 400 | 0.5353 | 0.84 | 0.8889 | 0.8889 | 0.8889 |
| 0.1164 | 9.0 | 450 | 0.7676 | 0.82 | 0.8462 | 0.9167 | 0.8800 |
| 0.1054 | 10.0 | 500 | 0.6525 | 0.83 | 0.8767 | 0.8889 | 0.8828 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
f723c7cb645941499eb5f2984a240208
|
Herais/pred_timeperiod
|
Herais
|
bert
| 8 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['zh']
|
['Custom']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification']
| false | true | true | 1,487 | false |
This model predicts the time period given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
checkpoint = "Herais/pred_timeperiod"

tokenizer = BertTokenizer.from_pretrained(checkpoint,
                                          problem_type="single_label_classification")
model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)

label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}
id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}

synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """

# Tokenize the synopsis and move the tensors to the same device as the model
inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)

model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Map the predicted class ids back to time-period labels
label_ids_pred = torch.argmax(outputs.logits, dim=1).cpu().numpy()
labels_pred = [id2label_timeperiod[label_id] for label_id in label_ids_pred]
print(labels_pred)
# ['当代']
```
Citation
{}
|
73fd9168466424f9906800ec181e1942
|
juancopi81/distilbert-finetuned-imdb
|
juancopi81
|
distilbert
| 8 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,539 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juancopi81/distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8630
- Validation Loss: 2.5977
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8630 | 2.5977 | 0 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
880da9e6a0e615bdd7b6be1b81f5faa1
|
bersanoenrico/movies-ita-classification-bertbased-v2
|
bersanoenrico
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,363 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movies-ita-classification-bertbased-v2
This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1995
- Accuracy: 0.6208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3416 | 1.0 | 1181 | 1.2574 | 0.5897 |
| 1.0583 | 2.0 | 2362 | 1.1978 | 0.6091 |
| 0.789 | 3.0 | 3543 | 1.1995 | 0.6208 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
f0de53a11c57998d497e14f65f697943
|
Keneston/xlm-roberta-base-finetuned-panx-de
|
Keneston
|
xlm-roberta
| 15 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fb60876a566443dd73b2de2f0089be79
|
mastergruffly/temp
|
mastergruffly
| null | 18 | 3 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 612 | false |
### temp Dreambooth model trained by mastergruffly with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
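If you prefer to run inference locally instead of using the notebooks above, a minimal `diffusers` sketch could look like this (the prompt, and the assumption that `temp` is the instance token learned during training, are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "mastergruffly/temp", torch_dtype=torch.float16
).to("cuda")

# "temp" is assumed to be the trained instance token
image = pipe("a photo of temp, highly detailed", num_inference_steps=50).images[0]
image.save("temp_sample.png")
```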
Sample pictures of this concept:
|
8a380ad2c02808020084ac6a888ee4d6
|
DOOGLAK/Tagged_One_500v8_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
|
bert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['tagged_one500v8_wikigold_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,565 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_One_500v8_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v8_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2761
- Precision: 0.6785
- Recall: 0.6773
- F1: 0.6779
- Accuracy: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 172 | 0.3004 | 0.5475 | 0.5128 | 0.5296 | 0.9050 |
| No log | 2.0 | 344 | 0.2752 | 0.6595 | 0.6422 | 0.6507 | 0.9201 |
| 0.112 | 3.0 | 516 | 0.2761 | 0.6785 | 0.6773 | 0.6779 | 0.9254 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
9aeef6d3e88d425d2783082162a341b9
|
Helsinki-NLP/opus-mt-hu-de
|
Helsinki-NLP
|
marian
| 10 | 20 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-hu-de
* source languages: hu
* target languages: de
* OPUS readme: [hu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.eval.txt)
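For a quick check, the checkpoint can also be driven through the `translation` pipeline in `transformers`; the Hungarian input below is an arbitrary example:
```python
from transformers import pipeline

# Translate Hungarian (hu) into German (de)
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-hu-de")
print(translator("Szeretek könyveket olvasni.")[0]["translation_text"])
```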
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.de | 44.1 | 0.637 |
|
207425d4ff9891748db6fae025abef18
|
nlp04/kobart_64_3e-5_datav2_min30_lp5.0_temperature1.0
|
nlp04
|
bart
| 19 | 11 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 994 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64_3e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ef173c87272d9e752b9b5241dd92ab23
|
beliv3/albertbezdream
|
beliv3
| null | 31 | 4 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 1,815 | false |
### AlbertBezDream Dreambooth model trained by beliv3 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)

.jpg)
.jpg)
.jpg)
.jpg)
|
0479de45968299a026f0f02e4632f432
|
muhammaddjunas/cvt-13-finetuned-waste
|
muhammaddjunas
|
cvt
| 14 | 2 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,421 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13-finetuned-waste
This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1715 | 0.99 | 117 | 0.0000 | 1.0 |
| 0.1194 | 1.99 | 234 | 0.0000 | 1.0 |
| 0.1496 | 2.99 | 351 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
e40ee70e6f9ea3f6a56357b33fc50990
|
omar47/wav2vec2-large-xls-r-300m-urdu-cv-10
|
omar47
|
wav2vec2
| 17 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice_10_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,912 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu-cv-10
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5959
- Wer: 0.3946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 20.8724 | 0.25 | 32 | 18.0006 | 1.0 |
| 10.984 | 0.5 | 64 | 6.8001 | 1.0 |
| 5.7792 | 0.74 | 96 | 4.9273 | 1.0 |
| 4.2891 | 0.99 | 128 | 3.8379 | 1.0 |
| 3.4937 | 1.24 | 160 | 3.2877 | 1.0 |
| 3.1605 | 1.49 | 192 | 3.1198 | 1.0 |
| 3.0874 | 1.74 | 224 | 3.0542 | 1.0 |
| 3.0363 | 1.98 | 256 | 3.0063 | 0.9999 |
| 2.9776 | 2.23 | 288 | 2.9677 | 1.0 |
| 2.8168 | 2.48 | 320 | 2.4189 | 1.0000 |
| 2.0575 | 2.73 | 352 | 1.5330 | 0.8520 |
| 1.4248 | 2.98 | 384 | 1.1747 | 0.7519 |
| 1.1354 | 3.22 | 416 | 0.9837 | 0.7047 |
| 1.0049 | 3.47 | 448 | 0.9414 | 0.6631 |
| 0.956 | 3.72 | 480 | 0.8948 | 0.6606 |
| 0.8906 | 3.97 | 512 | 0.8381 | 0.6291 |
| 0.7587 | 4.22 | 544 | 0.7714 | 0.5898 |
| 0.7534 | 4.47 | 576 | 0.8237 | 0.5908 |
| 0.7203 | 4.71 | 608 | 0.7731 | 0.5758 |
| 0.6876 | 4.96 | 640 | 0.7467 | 0.5390 |
| 0.5825 | 5.21 | 672 | 0.6940 | 0.5401 |
| 0.5565 | 5.46 | 704 | 0.6826 | 0.5248 |
| 0.5598 | 5.71 | 736 | 0.6387 | 0.5204 |
| 0.5289 | 5.95 | 768 | 0.6432 | 0.4956 |
| 0.4565 | 6.2 | 800 | 0.6643 | 0.4876 |
| 0.4576 | 6.45 | 832 | 0.6295 | 0.4758 |
| 0.4265 | 6.7 | 864 | 0.6227 | 0.4673 |
| 0.4359 | 6.95 | 896 | 0.6077 | 0.4598 |
| 0.3576 | 7.19 | 928 | 0.5800 | 0.4477 |
| 0.3612 | 7.44 | 960 | 0.5837 | 0.4500 |
| 0.345 | 7.69 | 992 | 0.5892 | 0.4466 |
| 0.3707 | 7.94 | 1024 | 0.6217 | 0.4380 |
| 0.3269 | 8.19 | 1056 | 0.5964 | 0.4412 |
| 0.2974 | 8.43 | 1088 | 0.6116 | 0.4394 |
| 0.2932 | 8.68 | 1120 | 0.5764 | 0.4235 |
| 0.2854 | 8.93 | 1152 | 0.5757 | 0.4239 |
| 0.2651 | 9.18 | 1184 | 0.5798 | 0.4253 |
| 0.2508 | 9.43 | 1216 | 0.5750 | 0.4316 |
| 0.238 | 9.67 | 1248 | 0.6038 | 0.4232 |
| 0.2454 | 9.92 | 1280 | 0.5781 | 0.4078 |
| 0.2196 | 10.17 | 1312 | 0.5931 | 0.4178 |
| 0.2036 | 10.42 | 1344 | 0.6134 | 0.4116 |
| 0.2087 | 10.67 | 1376 | 0.5831 | 0.4146 |
| 0.1908 | 10.91 | 1408 | 0.5987 | 0.4159 |
| 0.1751 | 11.16 | 1440 | 0.5968 | 0.4065 |
| 0.1726 | 11.41 | 1472 | 0.6037 | 0.4119 |
| 0.1728 | 11.66 | 1504 | 0.5961 | 0.4011 |
| 0.1772 | 11.91 | 1536 | 0.5903 | 0.3972 |
| 0.1647 | 12.16 | 1568 | 0.5960 | 0.4024 |
| 0.1506 | 12.4 | 1600 | 0.5986 | 0.3933 |
| 0.1383 | 12.65 | 1632 | 0.5893 | 0.3938 |
| 0.1433 | 12.9 | 1664 | 0.5999 | 0.3975 |
| 0.1356 | 13.15 | 1696 | 0.6035 | 0.3982 |
| 0.1431 | 13.4 | 1728 | 0.5997 | 0.4042 |
| 0.1346 | 13.64 | 1760 | 0.6018 | 0.4003 |
| 0.1363 | 13.89 | 1792 | 0.5891 | 0.3969 |
| 0.1323 | 14.14 | 1824 | 0.5983 | 0.3925 |
| 0.1196 | 14.39 | 1856 | 0.6003 | 0.3939 |
| 0.1266 | 14.64 | 1888 | 0.5997 | 0.3941 |
| 0.1269 | 14.88 | 1920 | 0.5959 | 0.3946 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
864626db792e56b4bf45b4669f411691
|
sd-concepts-library/pokemon-modern-artwork
|
sd-concepts-library
| null | 1,176 | 0 | null | 5 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 147,371 | false |
### Pokemon modern artwork on Stable Diffusion
Pokémon modern artwork, up to and including the Hisui designs, re-scaled to a maximum width and height of 512 px.
Includes Mega Evolutions, Gigantamax, regional and alternate forms.
Unown variants are excluded, as well as Arceus/Silvally recolours (to avoid over-representing a single species).
This is the `<pkmn-modern>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
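As a rough local alternative to the notebooks above, recent `diffusers` releases can load the learned embedding directly from this repository; the base checkpoint and prompt below are illustrative assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion v1.x checkpoint should work as the base model
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <pkmn-modern> embedding into the text encoder
pipe.load_textual_inversion("sd-concepts-library/pokemon-modern-artwork")

image = pipe("a dragon-like creature in the style of <pkmn-modern>").images[0]
image.save("pkmn_modern_sample.png")
```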
Here is the new concept you will be able to use as a `style`:



















































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































































|
67cf5351961d267ddbdeb307d00a48fc
|
henryscheible/sst2_bert-base-uncased_81
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,016 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2_bert-base-uncased_81
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3565
- Accuracy: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
7d0d8029900a2c88359478bc0c115eeb
|
marcel/wav2vec2-large-xlsr-german-demo
|
marcel
|
wav2vec2
| 20 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['de']
|
['common_voice', 'wer']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,102 | false |
# Wav2Vec2-Large-XLSR-53-German
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on German using 3% of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "de", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the German test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "de", split="test[:10%]")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
model = Wav2Vec2ForCTC.from_pretrained("marcel/wav2vec2-large-xlsr-german-demo")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\”\�\カ\æ\無\ན\カ\臣\ѹ\…\«\»\ð\ı\„\幺\א\ב\比\ш\ע\)\ứ\в\œ\ч\+\—\ш\‚\נ\м\ń\乡\$\=\ש\ф\支\(\°\и\к\̇]'
substitutions = {
'e' : '[\ə\é\ě\ę\ê\ế\ế\ë\ė\е]',
'o' : '[\ō\ô\ô\ó\ò\ø\ọ\ŏ\õ\ő\о]',
'a' : '[\á\ā\ā\ă\ã\å\â\à\ą\а]',
'c' : '[\č\ć\ç\с]',
'l' : '[\ł]',
'u' : '[\ú\ū\ứ\ů]',
'und' : '[\&]',
'r' : '[\ř]',
'y' : '[\ý]',
's' : '[\ś\š\ș\ş]',
'i' : '[\ī\ǐ\í\ï\î\ï]',
'z' : '[\ź\ž\ź\ż]',
'n' : '[\ñ\ń\ņ]',
'g' : '[\ğ]',
'ss' : '[\ß]',
't' : '[\ț\ť]',
'd' : '[\ď\đ]',
"'": '[\ʿ\་\’\`\´\ʻ\`\‘]',
'p': '\р'
}
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for x in substitutions:
batch["sentence"] = re.sub(substitutions[x], x, batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.35 %
## Training
The first 3% of the Common Voice `train` and `validation` splits were used for training.
The script used for training can be found TODO
|
c8362e2e558c5181cd610d370238a9c3
|
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_stsb
|
gokuls
|
mobilebert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,046 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_stsb
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8342
- Pearson: 0.1765
- Spearmanr: 0.1800
- Combined Score: 0.1782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.0254 | 1.0 | 2518 | 2.8776 | 0.1575 | 0.1742 | 0.1659 |
| 0.5854 | 2.0 | 5036 | 3.1464 | 0.1591 | 0.1679 | 0.1635 |
| 0.4255 | 3.0 | 7554 | 2.8342 | 0.1765 | 0.1800 | 0.1782 |
| 0.2765 | 4.0 | 10072 | 2.8524 | 0.1815 | 0.1838 | 0.1827 |
| 0.1862 | 5.0 | 12590 | 2.9184 | 0.1736 | 0.1768 | 0.1752 |
| 0.1339 | 6.0 | 15108 | 2.9817 | 0.1688 | 0.1728 | 0.1708 |
| 0.1029 | 7.0 | 17626 | 2.9702 | 0.1618 | 0.1643 | 0.1631 |
| 0.0806 | 8.0 | 20144 | 3.0033 | 0.1588 | 0.1624 | 0.1606 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
9c72576f75dfc3c7f6e746911c9b786d
|
Salesforce/codegen-6B-multi
|
Salesforce
|
codegen
| 10 | 1,216 |
transformers
| 4 |
text-generation
| true | false | false |
bsd-3-clause
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,029 | false |
# CodeGen (CodeGen-Multi 6B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 6B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 6B* and further pre-trained on a dataset of multiple programming languages, and "6B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 6B) was firstly initialized with *CodeGen-NL 6B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmark: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts and of calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
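Since the intended prompts are comment strings, a natural follow-up (reusing the `tokenizer` and `model` loaded above) is to pair a comment with a partial function definition; the prompt below is only an illustrative example:
```python
# Program synthesis from a natural-language comment plus a partial definition
text = "# return the n-th Fibonacci number\ndef fib(n):"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```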
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
a76eae558073ce8da010561eaa4746db
|
abinternet143/t5-small-finetuned-xsum
|
abinternet143
|
t5
| 11 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['xsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 924 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.0.0
- Tokenizers 0.11.6
|
a2637894d1fbfab3c43e03f7b1554a06
|
Tahsin/distilbert-base-uncased-finetuned-emotion
|
Tahsin
|
bert
| 15 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,342 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1561
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.1635 | 0.9295 |
| 0.111 | 2.0 | 500 | 0.1515 | 0.936 |
| 0.111 | 3.0 | 750 | 0.1561 | 0.9285 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
0c5dad389ab24e3c229b9d2f02487b64
|
Davlan/afro-xlmr-small
|
Davlan
|
xlm-roberta
| 9 | 567 |
transformers
| 0 |
fill-mask
| true | false | false |
afl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,220 | false |
# afro-xlmr-small
AfroXLMR-small was created by [first reducing the vocabulary token size](https://aclanthology.org/2020.sustainlp-1.16/) of XLM-R-base from 250K to 70k, followed by MLM adaptation on 17 African languages (Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Oromo, Naija, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu) covering the major African language families and 3 high resource languages (Arabic, French, and English).
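As a quick illustration of the masked-language-modelling objective, the checkpoint can be queried with the `fill-mask` pipeline (the example sentence is arbitrary; `<mask>` is the XLM-R mask token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/afro-xlmr-small")
# Top predictions for the masked token
print(unmasker("The capital of Nigeria is <mask>."))
```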
## Eval results on MasakhaNER (F-score)
| language | XLM-R-miniLM | XLM-R-base | XLM-R-large | afro-xlmr-base | afro-xlmr-small | afro-xlmr-mini |
|:--------:|:------------:|:----------:|:-----------:|:--------------:|:---------------:|:--------------:|
| amh | 69.5 | 70.6 | 76.2 | 76.1 | 70.1 | 69.7 |
| hau | 74.5 | 89.5 | 90.5 | 91.2 | 91.4 | 87.7 |
| ibo | 81.9 | 84.8 | 84.1 | 87.4 | 86.6 | 83.5 |
| kin | 68.6 | 73.3 | 73.8 | 78.0 | 77.5 | 74.1 |
| lug | 64.7 | 79.7 | 81.6 | 82.9 | 83.2 | 77.4 |
| luo | 11.7 | 74.9 | 73.6 | 75.1 | 75.4 | 17.5 |
| pcm | 83.2 | 87.3 | 89.0 | 89.6 | 89.0 | 85.5 |
| swa | 86.3 | 87.4 | 89.4 | 88.6 | 88.7 | 86.0 |
| wol | 51.7 | 63.9 | 67.9 | 67.4 | 65.9 | 59.0 |
| yor | 72.0 | 78.3 | 78.9 | 82.1 | 81.3 | 75.1 |
### BibTeX entry and citation info
```
@inproceedings{alabi-etal-2022-adapting,
title = "Adapting Pre-trained Language Models to {A}frican Languages via Multilingual Adaptive Fine-Tuning",
author = "Alabi, Jesujoba O. and
Adelani, David Ifeoluwa and
Mosbach, Marius and
Klakow, Dietrich",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.382",
pages = "4336--4349",
abstract = "Multilingual pre-trained language models (PLMs) have demonstrated impressive performance on several downstream tasks for both high-resourced and low-resourced languages. However, there is still a large performance drop for languages unseen during pre-training, especially African languages. One of the most effective approaches to adapt to a new language is language adaptive fine-tuning (LAFT) {---} fine-tuning a multilingual PLM on monolingual texts of a language using the pre-training objective. However, adapting to target language individually takes large disk space and limits the cross-lingual transfer abilities of the resulting models because they have been specialized for a single language. In this paper, we perform multilingual adaptive fine-tuning on 17 most-resourced African languages and three other high-resource languages widely spoken on the African continent to encourage cross-lingual transfer learning. To further specialize the multilingual PLM, we removed vocabulary tokens from the embedding layer that corresponds to non-African writing scripts before MAFT, thus reducing the model size by around 50{\%}. Our evaluation on two multilingual PLMs (AfriBERTa and XLM-R) and three NLP tasks (NER, news topic classification, and sentiment classification) shows that our approach is competitive to applying LAFT on individual languages while requiring significantly less disk space. Additionally, we show that our adapted PLM also improves the zero-shot cross-lingual transfer abilities of parameter efficient fine-tuning methods.",
}
```
|
db061726eea9dddaaeb4a6599b5ce379
|
Slavka/bert-base-cased-finetuned-log-parser-winlogbeat_nowhitespace_large
|
Slavka
|
bert
| 8 | 4 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,375 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-log-parser-winlogbeat_nowhitespace_large
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 15321, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 15321, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-06, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
469736ca16b3cb1b523f5220d2215bdb
|
jonatasgrosman/exp_w2v2t_en_xlsr-53_s870
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 467 | false |
# exp_w2v2t_en_xlsr-53_s870
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
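A minimal transcription sketch with HuggingSound might look like the following; the audio file paths are placeholders for your own 16kHz recordings:
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_xlsr-53_s870")
audio_paths = ["sample1.wav", "sample2.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```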
|
f5b4d487fb7f20dd92cf4994a6f64277
|
Heldhy/wav2vec2-base-timit-demo-colab
|
Heldhy
|
wav2vec2
| 12 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,641 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Wer: 0.3422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3896 | 4.0 | 500 | 1.1573 | 0.8886 |
| 0.5667 | 8.0 | 1000 | 0.4841 | 0.4470 |
| 0.2126 | 12.0 | 1500 | 0.4201 | 0.3852 |
| 0.1235 | 16.0 | 2000 | 0.4381 | 0.3623 |
| 0.0909 | 20.0 | 2500 | 0.4784 | 0.3748 |
| 0.0611 | 24.0 | 3000 | 0.4390 | 0.3577 |
| 0.0454 | 28.0 | 3500 | 0.4568 | 0.3422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
723f9b6cd3c29028e33a1633723ae646
|
dixipi9178/MyCoolModel
|
dixipi9178
| null | 11 | 0 | null | 0 | null | false | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 494 | false |
https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/corneos7thHeavenMix_v2.safetensors
https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/novelai%20f111%20sd1.4%20add%20difference%201.0.ckpt
https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/Anything-V3.0-pruned-fp16.ckpt
!gdown https://huggingface.co/dixipi9178/MyCoolModel/resolve/main/novelai%20f111%20sd1.4%20add%20difference%201.0.ckpt -O /content/stable-diffusion-webui/models/Stable-diffusion/nai_f111.ckpt
|
9f65e986a8956fb3368c1d7075a0b438
|
LeoFelix/bert-finetuned-squad
|
LeoFelix
|
bert
| 8 | 3 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,522 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LeoFelix/bert-finetuned-squad
This model is a fine-tuned version of [pierreguillou/bert-base-cased-squad-v1.1-portuguese](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0193
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 852, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.3702 | 0 |
| 0.0471 | 1 |
| 0.0193 | 2 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
8b6e9722a57c3c9280420f4e8e36d01c
|
peter2000/wav2vec2-large-xls-r-300m-kinyarwanda
|
peter2000
|
wav2vec2
| 15 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 5,335 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kinyarwanda
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3917
- Wer: 0.3246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 9.0634 | 0.12 | 400 | 3.0554 | 1.0 |
| 2.8009 | 0.24 | 800 | 1.5927 | 0.9554 |
| 0.9022 | 0.36 | 1200 | 0.7328 | 0.6445 |
| 0.6213 | 0.48 | 1600 | 0.6138 | 0.5510 |
| 0.5299 | 0.6 | 2000 | 0.6072 | 0.5223 |
| 0.4999 | 0.72 | 2400 | 0.5449 | 0.4969 |
| 0.4731 | 0.84 | 2800 | 0.5261 | 0.4828 |
| 0.458 | 0.96 | 3200 | 0.5058 | 0.4607 |
| 0.4158 | 1.09 | 3600 | 0.4892 | 0.4463 |
| 0.4037 | 1.21 | 4000 | 0.4759 | 0.4429 |
| 0.4021 | 1.33 | 4400 | 0.4615 | 0.4330 |
| 0.3934 | 1.45 | 4800 | 0.4593 | 0.4315 |
| 0.3808 | 1.57 | 5200 | 0.4736 | 0.4344 |
| 0.3838 | 1.69 | 5600 | 0.4569 | 0.4249 |
| 0.3726 | 1.81 | 6000 | 0.4473 | 0.4140 |
| 0.3623 | 1.93 | 6400 | 0.4403 | 0.4097 |
| 0.3517 | 2.05 | 6800 | 0.4389 | 0.4061 |
| 0.333 | 2.17 | 7200 | 0.4383 | 0.4104 |
| 0.3354 | 2.29 | 7600 | 0.4360 | 0.3955 |
| 0.3257 | 2.41 | 8000 | 0.4226 | 0.3942 |
| 0.3275 | 2.53 | 8400 | 0.4206 | 0.4040 |
| 0.3262 | 2.65 | 8800 | 0.4172 | 0.3875 |
| 0.3206 | 2.77 | 9200 | 0.4209 | 0.3877 |
| 0.323 | 2.89 | 9600 | 0.4177 | 0.3825 |
| 0.3099 | 3.01 | 10000 | 0.4101 | 0.3691 |
| 0.3008 | 3.14 | 10400 | 0.4055 | 0.3709 |
| 0.2918 | 3.26 | 10800 | 0.4085 | 0.3800 |
| 0.292 | 3.38 | 11200 | 0.4089 | 0.3713 |
| 0.292 | 3.5 | 11600 | 0.4092 | 0.3730 |
| 0.2785 | 3.62 | 12000 | 0.4151 | 0.3687 |
| 0.2941 | 3.74 | 12400 | 0.4004 | 0.3639 |
| 0.2838 | 3.86 | 12800 | 0.4108 | 0.3703 |
| 0.2854 | 3.98 | 13200 | 0.3911 | 0.3596 |
| 0.2683 | 4.1 | 13600 | 0.3944 | 0.3575 |
| 0.2647 | 4.22 | 14000 | 0.3836 | 0.3538 |
| 0.2704 | 4.34 | 14400 | 0.4006 | 0.3540 |
| 0.2664 | 4.46 | 14800 | 0.3974 | 0.3553 |
| 0.2662 | 4.58 | 15200 | 0.3890 | 0.3470 |
| 0.2615 | 4.7 | 15600 | 0.3856 | 0.3507 |
| 0.2553 | 4.82 | 16000 | 0.3814 | 0.3497 |
| 0.2587 | 4.94 | 16400 | 0.3837 | 0.3440 |
| 0.2522 | 5.06 | 16800 | 0.3834 | 0.3486 |
| 0.2451 | 5.19 | 17200 | 0.3897 | 0.3414 |
| 0.2423 | 5.31 | 17600 | 0.3864 | 0.3481 |
| 0.2434 | 5.43 | 18000 | 0.3808 | 0.3416 |
| 0.2525 | 5.55 | 18400 | 0.3795 | 0.3408 |
| 0.2427 | 5.67 | 18800 | 0.3841 | 0.3411 |
| 0.2411 | 5.79 | 19200 | 0.3804 | 0.3366 |
| 0.2404 | 5.91 | 19600 | 0.3800 | 0.3328 |
| 0.2372 | 6.03 | 20000 | 0.3749 | 0.3335 |
| 0.2244 | 6.15 | 20400 | 0.3820 | 0.3327 |
| 0.2381 | 6.27 | 20800 | 0.3789 | 0.3325 |
| 0.2294 | 6.39 | 21200 | 0.3867 | 0.3298 |
| 0.2378 | 6.51 | 21600 | 0.3843 | 0.3281 |
| 0.2312 | 6.63 | 22000 | 0.3813 | 0.3277 |
| 0.2411 | 6.75 | 22400 | 0.3780 | 0.3268 |
| 0.2315 | 6.87 | 22800 | 0.3790 | 0.3280 |
| 0.241 | 6.99 | 23200 | 0.3776 | 0.3281 |
| 0.2313 | 7.11 | 23600 | 0.3929 | 0.3283 |
| 0.2423 | 7.24 | 24000 | 0.3905 | 0.3280 |
| 0.2337 | 7.36 | 24400 | 0.3979 | 0.3249 |
| 0.2368 | 7.48 | 24800 | 0.3980 | 0.3257 |
| 0.2409 | 7.6 | 25200 | 0.3937 | 0.3229 |
| 0.2416 | 7.72 | 25600 | 0.3867 | 0.3237 |
| 0.2364 | 7.84 | 26000 | 0.3912 | 0.3253 |
| 0.234 | 7.96 | 26400 | 0.3917 | 0.3246 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
d14045d5cbb680f8883df76c15a2a49e
|
tau/bart-base-sled-govreport
|
tau
|
tau/sled
| 5 | 1 |
transformers
| 1 | null | true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,008 | false |
# BART-SLED (SLiding-Encoder and Decoder, base-sized model)
SLED models use pretrained, short-range encoder-decoder models, and apply them over
long-text inputs by splitting the input into multiple overlapping chunks, encoding each chunk independently, and performing fusion-in-decoder.
## Model description
This SLED model is based on the BART model, which is described in its [model card](https://huggingface.co/facebook/bart-base).
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works
well for comprehension tasks (e.g. text classification, question answering). When used as a BART-SLED model, it can be applied on long text tasks.
This model was fine-tuned on the [GovReport](https://arxiv.org/abs/2104.02112) dataset.
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
### How to use
To use the model, you first need to install `py-sled` in your environment (or clone the code from the [official repository](https://github.com/Mivg/SLED/blob/main/README.md))
```
pip install py-sled
```
For more installation instructions, see [here](https://github.com/Mivg/SLED#Installation).
Once installed, SLED is fully compatible with HuggingFace's AutoClasses (AutoTokenizer, AutoConfig, AutoModel
and AutoModelForCausalLM) and can be loaded using the from_pretrained methods
```python
import sled # *** required so that SledModels will be registered for the AutoClasses ***
from transformers import AutoModel
model = AutoModel.from_pretrained('tau/bart-base-sled')
```
Here is how to use this model in PyTorch:
```python
from sled import SledTokenizer, SledModel
tokenizer = SledTokenizer.from_pretrained('tau/bart-base-sled')
model = SledModel.from_pretrained('tau/bart-base-sled')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
You can also replace SledModel with SledModelForConditionalGeneration for seq2seq generation:
```python
from sled import SledModelForConditionalGeneration
model = SledModelForConditionalGeneration.from_pretrained('tau/bart-base-sled')
```
In case you wish to apply SLED on a task containing a prefix (e.g. question) which should be given as a context to
every chunk, you can pass the `prefix_length` tensor input as well (a LongTensor of length equal to the batch size).
```python
import torch
import sled # *** required so that SledModels will be registered for the AutoClasses ***
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('tau/bart-base-sled')
model = AutoModel.from_pretrained('tau/bart-base-sled')
document_input_ids = tokenizer("Dogs are great for you.", return_tensors="pt").input_ids
prefix_input_ids = tokenizer("Are dogs good for you?", return_tensors="pt").input_ids
input_ids = torch.cat((prefix_input_ids, document_input_ids), dim=-1)
attention_mask = torch.ones_like(input_ids)
prefix_length = torch.LongTensor([[prefix_input_ids.size(1)]])
outputs = model(input_ids=input_ids, attention_mask=attention_mask, prefix_length=prefix_length)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
Please cite both the SLED [paper](https://arxiv.org/abs/2208.00748.pdf) and the BART [paper](https://arxiv.org/abs/1910.13461) by Lewis et al as well as GovReport by Huang et al
```bibtex
@inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{huang2021govreport,
title = "Efficient Attentions for Long Document Summarization",
author = "Huang, Luyang and
Cao, Shuyang and
Parulian, Nikolaus and
Ji, Heng and
Wang, Lu",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.112",
doi = "10.18653/v1/2021.naacl-main.112",
pages = "1419--1436"
}
```
|
0a36a53071d980280c5be03099588988
|
Aalaa/opt-125m-finetuned-wikitext2
|
Aalaa
|
opt
| 13 | 15 |
transformers
| 1 |
text-generation
| true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,255 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-finetuned-wikitext2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4123 | 1.0 | 2370 | 3.3621 |
| 3.2096 | 2.0 | 4740 | 3.3452 |
| 3.0822 | 3.0 | 7110 | 3.3409 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
85784c63548542d948597041cb25f58a
|
the-bee/bert-finetuned-ner
|
the-bee
|
bert
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,512 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0594
- Precision: 0.9331
- Recall: 0.9529
- F1: 0.9429
- Accuracy: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0872 | 1.0 | 1756 | 0.0631 | 0.9128 | 0.9359 | 0.9242 | 0.9827 |
| 0.0338 | 2.0 | 3512 | 0.0578 | 0.9322 | 0.9510 | 0.9415 | 0.9867 |
| 0.0174 | 3.0 | 5268 | 0.0594 | 0.9331 | 0.9529 | 0.9429 | 0.9872 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
42f48f6fa61251ad25dd38212b7474c3
|
Martin97Bozic/xlm-roberta-base-finetuned-squad
|
Martin97Bozic
|
xlm-roberta
| 13 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,264 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4107 | 1.0 | 3693 | 2.2321 |
| 2.1359 | 2.0 | 7386 | 2.1499 |
| 1.9214 | 3.0 | 11079 | 2.1433 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
352a2d7ff9501d866064c7dd961264e6
|
pedramyamini/distilbert-base-multilingual-cased-finetuned-mobile-banks-cafebazaar2022-09-12-08-14-58
|
pedramyamini
|
distilbert
| 8 | 1 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,699 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pedramyamini/distilbert-base-multilingual-cased-finetuned-mobile-banks-cafebazaar2022-09-12-08-14-58
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4986
- Validation Loss: 0.7589
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 21392, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7544 | 0.7034 | 0 |
| 0.6815 | 0.6905 | 1 |
| 0.6463 | 0.6960 | 2 |
| 0.6135 | 0.6896 | 3 |
| 0.5764 | 0.7041 | 4 |
| 0.5447 | 0.7340 | 5 |
| 0.5170 | 0.7562 | 6 |
| 0.4986 | 0.7589 | 7 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ad0b6cda5090841ce5236eb04828a519
|
cuzeverynameistaken/wav2vec2-base-timit-demo-colab1
|
cuzeverynameistaken
|
wav2vec2
| 16 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,462 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7170
- Wer: 0.4784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1915 | 13.89 | 500 | 3.1318 | 1.0 |
| 1.4993 | 27.78 | 1000 | 0.6736 | 0.5485 |
| 0.3416 | 41.67 | 1500 | 0.7111 | 0.5092 |
| 0.1937 | 55.56 | 2000 | 0.7170 | 0.4784 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
85b9c640afb568c750016bbc74f726b8
|
mrm8488/santacoder-finetuned-the-stack-bash-shell
|
mrm8488
|
gpt2
| 17 | 10 |
transformers
| 2 |
text-generation
| true | false | false |
openrail
|
['code']
|
['bigcode/the-stack-dedup']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'bash', 'shell', 'code', 'codegen']
| true | true | true | 4,251 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SantaCoder 🎅 fine-tuned on bash/shell 🐚 scripts
This model is a fine-tuned version of [BigCode/SantaCoder](https://huggingface.co/bigcode/santacoder) on The Stack [bash/shell scripts](https://huggingface.co/datasets/bigcode/the-stack-dedup).
It achieves the following results on the evaluation set:
- Loss: 1.2272
## Model description
The [SantaCoder](https://huggingface.co/bigcode/santacoder) models are a series of 1.1B parameter models trained on the Python, Java, and JavaScript subset of [The Stack (v1.1)](https://huggingface.co/datasets/bigcode/the-stack) (which excluded opt-out requests).
The main model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), was trained using near-deduplication and comment-to-code ratio as filtering criteria and using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255).
In addition, there are several models that were trained on datasets with different filter parameters and with architecture and objective variations.
## Intended uses & limitations
The model has been trained on source code in Python, Java, and JavaScript and fine-tuned on bash/shell scripts. The predominant natural language in the source code is English, although other languages are also present. As such, the model can generate code snippets given some context, but the generated code is not guaranteed to work as intended. It can be inefficient, or contain bugs or exploits.
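Below is a minimal usage sketch, assuming the checkpoint loads through the standard `transformers` causal-LM API and that `trust_remote_code=True` is required, as for the base SantaCoder checkpoint; the prompt is purely illustrative.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "mrm8488/santacoder-finetuned-the-stack-bash-shell"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code=True is assumed to be needed, as for the base SantaCoder checkpoint
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Purely illustrative prompt: start a bash script and let the model complete it
prompt = "#!/bin/bash\n# count the number of lines in every .log file under /var/log\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```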
## Training and evaluation data
The Stack contains over 6TB of permissively-licensed source code files covering 358 programming languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets. **This is the near-deduplicated version with 3TB of data.**
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6101 | 0.05 | 500 | 1.5078 |
| 1.6156 | 0.1 | 1000 | 1.4687 |
| 1.4916 | 0.15 | 1500 | 1.4728 |
| 1.4027 | 0.2 | 2000 | 1.4237 |
| 1.499 | 0.25 | 2500 | 1.4067 |
| 1.4378 | 0.3 | 3000 | 1.3838 |
| 1.3698 | 0.35 | 3500 | 1.3767 |
| 1.3021 | 0.4 | 4000 | 1.3562 |
| 4.0521 | 0.45 | 4500 | 1.3433 |
| 0.9722 | 0.5 | 5000 | 1.3461 |
| 1.3836 | 0.55 | 5500 | 1.2955 |
| 1.3727 | 0.6 | 6000 | 1.2809 |
| 1.3332 | 0.65 | 6500 | 1.2665 |
| 1.2232 | 0.7 | 7000 | 1.2573 |
| 1.2373 | 0.75 | 7500 | 1.2463 |
| 1.3759 | 0.8 | 8000 | 1.2391 |
| 1.3021 | 0.85 | 8500 | 1.2325 |
| 1.369 | 0.9 | 9000 | 1.2292 |
| 1.4911 | 0.95 | 9500 | 1.2275 |
| 1.1677 | 1.0 | 10000 | 1.2272 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
### Citation
```
@misc {manuel_romero_2023,
author = { {Manuel Romero} },
title = { santacoder-finetuned-the-stack-bash-shell (Revision d3e56a7) },
year = 2023,
url = { https://huggingface.co/mrm8488/santacoder-finetuned-the-stack-bash-shell },
doi = { 10.57967/hf/0320 },
publisher = { Hugging Face }
}
```
|
c68441cf7841f7b2af5c7cb6e2f77819
|
patrickvonplaten/sew-d-small-100k-timit
|
patrickvonplaten
|
sew-d
| 47 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['timit_asr']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'timit_asr', 'generated_from_trainer']
| true | true | true | 2,968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-timit
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7541
- Wer: 0.8061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2068 | 0.69 | 100 | 4.0802 | 1.0 |
| 2.9805 | 1.38 | 200 | 2.9792 | 1.0 |
| 2.9781 | 2.07 | 300 | 2.9408 | 1.0 |
| 2.9655 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8953 | 3.45 | 500 | 2.8775 | 1.0 |
| 2.7718 | 4.14 | 600 | 2.7787 | 1.0 |
| 2.6711 | 4.83 | 700 | 2.6401 | 0.9786 |
| 2.6403 | 5.52 | 800 | 2.5435 | 1.0392 |
| 2.4052 | 6.21 | 900 | 2.4580 | 1.0706 |
| 2.1708 | 6.9 | 1000 | 2.2800 | 1.0090 |
| 2.2555 | 7.59 | 1100 | 2.1493 | 0.9579 |
| 2.3673 | 8.28 | 1200 | 2.0709 | 0.9051 |
| 2.091 | 8.97 | 1300 | 2.0258 | 0.8926 |
| 1.8433 | 9.66 | 1400 | 1.9645 | 0.8243 |
| 1.6824 | 10.34 | 1500 | 1.9211 | 0.8707 |
| 2.2282 | 11.03 | 1600 | 1.8914 | 0.8695 |
| 1.9027 | 11.72 | 1700 | 1.8718 | 0.8343 |
| 1.6303 | 12.41 | 1800 | 1.8646 | 0.8232 |
| 1.648 | 13.1 | 1900 | 1.8297 | 0.8177 |
| 2.0429 | 13.79 | 2000 | 1.8127 | 0.8642 |
| 1.8833 | 14.48 | 2100 | 1.8005 | 0.8307 |
| 1.5996 | 15.17 | 2200 | 1.7926 | 0.8467 |
| 1.4876 | 15.86 | 2300 | 1.7795 | 0.8341 |
| 1.8925 | 16.55 | 2400 | 1.7716 | 0.8199 |
| 1.814 | 17.24 | 2500 | 1.7846 | 0.8086 |
| 1.536 | 17.93 | 2600 | 1.7655 | 0.8019 |
| 1.4476 | 18.62 | 2700 | 1.7599 | 0.8070 |
| 1.7629 | 19.31 | 2800 | 1.7589 | 0.8119 |
| 1.7646 | 20.0 | 2900 | 1.7541 | 0.8061 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
a41b43bc7b9f6e84bb953280b42cd573
|
migueladarlo/distilbert-depression-mixed
|
migueladarlo
|
distilbert
| 5 | 1 |
transformers
| 1 |
text-classification
| true | false | false |
mit
|
['en']
|
['CLPsych 2015']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text', 'Twitter']
| true | true | true | 2,731 | false |
# distilbert-depression-mixed
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) trained on CLPsych 2015 and a scraped dataset, and evaluated on a scraped dataset from Twitter to detect potential users in Twitter for depression.
It achieves the following results on the evaluation set:
- Evaluation Loss: 0.71
- Accuracy: 0.63
- F1: 0.59
- Precision: 0.66
- Recall: 0.53
- AUC: 0.63
## Intended uses & limitations
Feed a corpus of tweets to the model to generate a label indicating whether the input is indicative of a depressed user or not. Label 1 is depressed; Label 0 is not depressed.
Limitation: all token sequences longer than 512 tokens are automatically truncated. Also, training and test data may be contaminated with mislabeled users.
### How to use
You can use this model directly with a pipeline for sentiment analysis:
```python
>>> from transformers import DistilBertTokenizerFast, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
>>> from transformers import DistilBertForSequenceClassification
>>> model = DistilBertForSequenceClassification.from_pretrained(r"distilbert-depression-mixed")
>>> from transformers import pipeline
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512}
>>> result=classifier('pain peko',**tokenizer_kwargs) #For truncation to apply in the pipeline
>>> #Should note that the string passed as the input can be a corpus of tweets concatenated together into one document.
[{'label': 'LABEL_1', 'score': 0.5048992037773132}]
```
Otherwise, download the files and specify within the pipeline the path to the folder that contains the config.json, pytorch_model.bin, and training_args.bin
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.19e-05
- train_batch_size: 16
- eval_batch_size: 16
- weight_decay: 0.06
- num_epochs: 5.0
## Training results
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | AUC |
|:-----:|:-------------:|:---------------:|:--------:|:--------:|:---------:|:--------:|:--------:|
| 1.0 | 0.68 | 0.66 | 0.61 | 0.54 | 0.60 | 0.50 | 0.60 |
| 2.0 | 0.65 | 0.65 | 0.63 | 0.49 | 0.70 | 0.37 | 0.62 |
| 3.0 | 0.53 | 0.63 | 0.66 | 0.58 | 0.69 | 0.50 | 0.65 |
| 4.0 | 0.39 | 0.66 | 0.67 | 0.61 | 0.69 | 0.54 | 0.67 |
| 5.0 | 0.27 | 0.72 | 0.65 | 0.61 | 0.63 | 0.60 | 0.64 |
|
434f18c78e00d54225c013a1fbda0b45
|
gchhablani/wav2vec2-large-xlsr-rm-sursilv
|
gchhablani
|
wav2vec2
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['rm-sursilv']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 3,511 | false |
# Wav2Vec2-Large-XLSR-53-Romansh-Sursilvan
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romansh Sursilvan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romansh Sursilvan test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\…\\«\\»\\–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.16 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://colab.research.google.com/drive/1dpZr_GzRowCciUbzM3GnW04TNKnB7vrP?usp=sharing).
|
68f8762d20f5946505cb8fdbc69b90d4
|
laituan245/molt5-small-smiles2caption
|
laituan245
|
t5
| 8 | 24 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 851 | false |
This model can be used to generate a caption from an input SMILES string.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-small-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-small-smiles2caption')
input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
|
062b2e15a45d53dfaaac9b66c2d89766
|
piyusharma/bert-base-uncased-finetuned-lex
|
piyusharma
|
bert
| 10 | 15 |
transformers
| 0 |
text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,114 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# piyusharma/bert-base-uncased-finetuned-lex
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2112
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.2112 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
2358de434bb834f31427f7a6f8f48292
|
recklessrecursion/Heresy-clustered
|
recklessrecursion
|
distilbert
| 8 | 18 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,869 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# recklessrecursion/Heresy-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1793
- Train End Logits Accuracy: 0.9618
- Train Start Logits Accuracy: 0.9549
- Validation Loss: 0.7725
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 0.3333
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1793 | 0.9618 | 0.9549 | 0.7725 | 0.6667 | 0.3333 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
204c60b53ae2eddc6f59b9f2fa2b80db
|
gokuls/distilbert_add_GLUE_Experiment_qnli
|
gokuls
|
distilbert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,615 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_qnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6648
- Accuracy: 0.6066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6886 | 1.0 | 410 | 0.6648 | 0.6066 |
| 0.6569 | 2.0 | 820 | 0.6677 | 0.5999 |
| 0.6419 | 3.0 | 1230 | 0.6672 | 0.5914 |
| 0.6293 | 4.0 | 1640 | 0.6677 | 0.5977 |
| 0.6118 | 5.0 | 2050 | 0.6691 | 0.6002 |
| 0.5857 | 6.0 | 2460 | 0.6854 | 0.6077 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
3227794d9c795ea84355ac34c5cc3b61
|
veereshd/Berlinberger-berger
|
veereshd
| null | 17 | 9 |
diffusers
| 0 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'food']
| false | true | true | 762 | false |
# DreamBooth model for the Berlinberger concept trained by veereshd on the veereshd/Dreambooth_food_dataset dataset.
This is a Stable Diffusion model fine-tuned on the Berlinberger concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of Berlinberger berger**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `berger` images for the food theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('veereshd/Berlinberger-berger')
image = pipeline('a photo of Berlinberger berger').images[0]  # instance prompt from above
image
```
|
f051385e92efe2f18d8c34d639fd1475
|
tomekkorbak/confident_knuth
|
tomekkorbak
|
gpt2
| 36 | 2 |
transformers
| 0 | null | true | false | false |
mit
|
['en']
|
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 7,736 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# confident_knuth
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 0.5, 'beta': 0.1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'confident_knuth',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/q3c975dt
|
6a88a70c15533118fd438b88ad5c5764
|
jamie613/distilbert-base-uncased-finetuned-emotion
|
jamie613
|
distilbert
| 20 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2240
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8488 | 1.0 | 250 | 0.3268 | 0.9055 | 0.9031 |
| 0.2532 | 2.0 | 500 | 0.2240 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
b6d3c61883639df4c42c29f8b1860d6a
|
lijingxin/distilbert-base-uncased-finetuned-emotion
|
lijingxin
|
distilbert
| 18 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8009 | 1.0 | 250 | 0.3027 | 0.9045 | 0.9015 |
| 0.2402 | 2.0 | 500 | 0.2161 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
5459c02bc005fc599a60f3d54d51ddd4
|
rishabhjain16/whisper_base_to_pf10h
|
rishabhjain16
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,698 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1929
- Wer: 4.3549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0326 | 10.0 | 500 | 0.1670 | 5.0398 |
| 0.0019 | 20.0 | 1000 | 0.1728 | 4.5113 |
| 0.0008 | 30.01 | 1500 | 0.1820 | 4.4071 |
| 0.0005 | 40.01 | 2000 | 0.1847 | 4.3773 |
| 0.0004 | 51.0 | 2500 | 0.1886 | 4.3549 |
| 0.0003 | 61.0 | 3000 | 0.1910 | 4.3475 |
| 0.0003 | 71.01 | 3500 | 0.1925 | 4.3549 |
| 0.0002 | 81.01 | 4000 | 0.1929 | 4.3549 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
94df78e12bf9d305df5928b21d67fa5b
|
tanvirkhan/distilbert-base-uncased-finetuned-imdb
|
tanvirkhan
|
distilbert
| 12 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
71ccf957693d94b0d04257190d54c569
|
jsnfly/wav2vec2-large-xlsr-53-german-gpt2
|
jsnfly
|
speech-encoder-decoder
| 20 | 6 |
transformers
| 2 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['de']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'de', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
| true | true | true | 1,143 | false |
# Wav2Vec2-Large-XLSR-53-German-GPT2
This is an encoder-decoder model for automatic speech recognition trained on the
MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - DE dataset. The encoder was initialized from
[jonatasgrosman/wav2vec2-large-xlsr-53-german](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-german) and
the decoder from [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2).
It was trained using a two step process:
* fine-tuning only the cross-attention weights and the decoder using the pre-computed outputs of the Wav2Vec2 model
* relatively fast training
* also works on small GPU (eg. 8 GB)
* but may take a lot of disk space
* should already yield decent results
* fine-tuning the model end-to-end
* much slower
* needs a bigger GPU
There is also one trick, which seemed to improve performance significantly: adding position embeddings to the
encoder outputs and initializing them with the pre-trained position embeddings of the GPT2 model (See `eval.py`).
The training notebooks are still early drafts. Results can probably be improved a lot further, for example by using a learning-rate schedule.
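A minimal usage sketch follows, assuming the checkpoint loads as a standard speech-encoder-decoder model through the ASR pipeline; the audio path is hypothetical, input is expected at 16 kHz, and the position-embedding trick described above is handled in the repository's `eval.py` rather than shown here.
```python
from transformers import pipeline

# Assumes the repository ships a compatible feature extractor and tokenizer
asr = pipeline("automatic-speech-recognition", model="jsnfly/wav2vec2-large-xlsr-53-german-gpt2")

# Hypothetical path to a 16 kHz mono German recording
result = asr("path/to/german_audio.wav")
print(result["text"])
```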
|
65e73c3669e140e5586179e4c9f60168
|
lsnoo/wav2vec2-large-xlsr-53k-russian
|
lsnoo
|
wav2vec2
| 11 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,851 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53k-russian
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2660
- Wer: 0.2052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2873 | 1.09 | 400 | 0.8580 | 0.8982 |
| 0.4728 | 2.19 | 800 | 0.3182 | 0.3892 |
| 0.1639 | 9.83 | 1200 | 0.2374 | 0.2646 |
| 0.1014 | 13.11 | 1600 | 0.2470 | 0.2467 |
| 0.0754 | 16.39 | 2000 | 0.2516 | 0.2337 |
| 0.0616 | 19.67 | 2400 | 0.2559 | 0.2237 |
| 0.0505 | 22.95 | 2800 | 0.2557 | 0.2155 |
| 0.0437 | 26.23 | 3200 | 0.2711 | 0.2099 |
| 0.0377 | 29.51 | 3600 | 0.2660 | 0.2052 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
849f08035a00ccce85baa15bf4e0b9e0
|
Helsinki-NLP/opus-mt-fi-ZH
|
Helsinki-NLP
|
marian
| 10 | 32 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,250 | false |
### opus-mt-fi-ZH
* source languages: fi
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch below
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt)
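A minimal usage sketch, assuming the standard Marian API in `transformers`; `>>cmn<<` is one of the target-language IDs listed above.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-ZH"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target-language token (here ">>cmn<<") must start the source sentence
src_text = [">>cmn<< Hyvää huomenta!"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```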
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.fi.zh | 23.4 | 0.326 |
|
e68ab4a8eb2e12be6ad5e747064ea1df
|
MultiBertGunjanPatrick/multiberts-seed-0-1000k
|
MultiBertGunjanPatrick
|
bert
| 7 | 2 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-0']
| false | true | true | 6,487 | false |
# MultiBERTs Seed 0 Checkpoint 1000k (uncased)
Seed 0 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-1000k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
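As an illustration only (this is not the original preprocessing code; token and vocabulary handling are simplified assumptions), the 80/10/10 masking rule above can be sketched as:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative sketch of the BERT-style masking rule described above."""
    masked, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() >= mask_prob:
            continue                          # token is not selected for masking
        labels[i] = tok                       # the model must predict the original token
        r = random.random()
        if r < 0.8:
            masked[i] = "[MASK]"              # 80%: replace with the [MASK] token
        elif r < 0.9:
            masked[i] = random.choice(vocab)  # 10%: replace with a random vocabulary token
        # remaining 10%: leave the token unchanged
    return masked, labels
```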
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
39fa715d99c79a3acad38bf134539766
|
jeniakim/hedgehog
|
jeniakim
|
bert
| 9 | 10 |
transformers
| 1 |
token-classification
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,816 | false |
🦔 HEDGEhog 🦔: BERT-based multi-class uncertainty cues recognition
====================================================================
# Description
A fine-tuned multi-class classification model that detects four different types of uncertainty cues (a.k.a hedges) on a token level.
# Uncertainty types
label | type | description | example
---| ---| ---| ---
E | Epistemic | The proposition is possible, but its truth-value cannot be decided at the moment. | She **may** be already asleep.
I | Investigation | The proposition is in the process of having its truth-value determined. | She **examined** the role of NF-kappaB in protein activation.
D | Doxatic | The proposition expresses beliefs and hypotheses, which may be known as true or false by others. | She **believes** that the Earth is flat.
N | Condition | The proposition is true or false based on the truth-value of another proposition. | **If** she gets the job, she will move to Utrecht.
C | *certain* | *n/a* | *n/a*
# Intended uses and limitations
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
# How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```python
from simpletransformers.ner import NERModel
model = NERModel(
'bert',
'jeniakim/hedgehog',
use_cuda=False,
labels=["C", "D", "E", "I", "N"],
)
example = "As much as I definitely enjoy solitude, I wouldn't mind perhaps spending little time with you (Björk)"
predictions, raw_outputs = model.predict([example])
```
The predictions look like this:
```
[[{'As': 'C'},
{'much': 'C'},
{'as': 'C'},
{'I': 'C'},
{'definitely': 'C'},
{'enjoy': 'C'},
{'solitude,': 'C'},
{'I': 'C'},
{"wouldn't": 'C'},
{'mind': 'C'},
{'perhaps': 'E'},
{'spending': 'C'},
{'little': 'C'},
{'time': 'C'},
{'with': 'C'},
{'you': 'C'},
{'(Björk)': 'C'}]]
```
In other words, the token 'perhaps' is recognized as an **epistemic uncertainty cue** and all the other tokens are not uncertainty cues.
# Training Data
HEDGEhog is trained and evaluated on the [Szeged Uncertainty Corpus](https://rgai.inf.u-szeged.hu/node/160) (Szarvas et al. 2012<sup>1</sup>). The original sentence-level XML version of this dataset is available [here](https://rgai.inf.u-szeged.hu/node/160).
The token-level version that was used for the training can be downloaded from [here](https://1drv.ms/u/s!AvPkt_QxBozXk7BiazucDqZkVxLo6g?e=IisuM6) in a form of pickled pandas DataFrame's. You can download either the split sets (```train.pkl``` 137MB, ```test.pkl``` 17MB, ```dev.pkl``` 17MB) or the full dataset (```szeged_fixed.pkl``` 172MB). Each row in the df contains a token, its features (these are not relevant for HEDGEhog; they were used to train the baseline CRF model, see [here](https://github.com/vanboefer/uncertainty_crf)), its sentence ID, and its label.
# Training Procedure
The following training parameters were used:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 16
# Evaluation Results
class | precision | recall | F1-score | support
---|---|---|---|---
Epistemic | 0.90 | 0.85 | 0.88 | 624
Doxatic | 0.88 | 0.92 | 0.90 | 142
Investigation | 0.83 | 0.86 | 0.84 | 111
Condition | 0.85 | 0.87 | 0.86 | 86
Certain | 1.00 | 1.00 | 1.00 | 104,751
**macro average** | **0.89** | **0.90** | **0.89** | 105,714
# References
<sup>1</sup> Szarvas, G., Vincze, V., Farkas, R., Móra, G., & Gurevych, I. (2012). Cross-genre and cross-domain detection of semantic uncertainty. *Computational Linguistics, 38*(2), 335-367.
|
b1097ead7b2384a065ba719fe9f45d30
|
RuudVelo/wav2vec2-large-xls-r-1b-nl
|
RuudVelo
|
wav2vec2
| 22 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['nl']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'nl', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 7,151 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset. A variant with an added language model, which improves these results, is available at https://huggingface.co/RuudVelo/wav2vec2-large-xls-r-1b-nl-lm; that variant reaches a WER of 9.73 on the Common Voice 8 Dutch test set.
It achieves the following results on the evaluation set:
- Loss: 0.1479
- Wer: 0.1156
## Model description
Model fine-tuned using the wav2vec2-xls-r-1b model architecture.
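As a rough, hedged illustration (not part of the original training or evaluation code), transcription with the `transformers` ASR pipeline might look like this, assuming a local 16 kHz audio file:
```python
from transformers import pipeline

# "sample_nl.wav" is a placeholder for a local 16 kHz Dutch audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="RuudVelo/wav2vec2-large-xls-r-1b-nl",
)
print(asr("sample_nl.wav")["text"])
```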
## Intended uses & limitations
More information needed
## Training and evaluation data
Model has been trained on Common Voice 8 Dutch
## Training procedure
### Training hyperparameters
Model parameters can be found under Files and versions in the run.sh file.
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.2223 | 0.52 | 500 | 0.3866 | 0.3425 |
| 1.0748 | 1.03 | 1000 | 0.2574 | 0.2169 |
| 1.0416 | 1.55 | 1500 | 0.2177 | 0.1946 |
| 0.9951 | 2.06 | 2000 | 0.2008 | 0.1760 |
| 0.975 | 2.58 | 2500 | 0.1961 | 0.1751 |
| 0.9461 | 3.1 | 3000 | 0.1989 | 0.1782 |
| 0.9381 | 3.61 | 3500 | 0.1928 | 0.1699 |
| 0.934 | 4.13 | 4000 | 0.1923 | 0.1633 |
| 0.9322 | 4.64 | 4500 | 0.1871 | 0.1634 |
| 0.9012 | 5.16 | 5000 | 0.1890 | 0.1702 |
| 0.9045 | 5.68 | 5500 | 0.1882 | 0.1740 |
| 0.8826 | 6.19 | 6000 | 0.1856 | 0.1575 |
| 0.8848 | 6.71 | 6500 | 0.1861 | 0.1617 |
| 0.8723 | 7.22 | 7000 | 0.1927 | 0.1646 |
| 0.8725 | 7.74 | 7500 | 0.1798 | 0.1531 |
| 0.8573 | 8.26 | 8000 | 0.1781 | 0.1587 |
| 0.8633 | 8.77 | 8500 | 0.1852 | 0.1628 |
| 0.8603 | 9.29 | 9000 | 0.1833 | 0.1601 |
| 0.8421 | 9.8 | 9500 | 0.1788 | 0.1543 |
| 0.8404 | 10.32 | 10000 | 0.1844 | 0.1556 |
| 0.8342 | 10.84 | 10500 | 0.1770 | 0.1538 |
| 0.8161 | 11.35 | 11000 | 0.1821 | 0.1567 |
| 0.8371 | 11.87 | 11500 | 0.1909 | 0.1629 |
| 0.8083 | 12.38 | 12000 | 0.1778 | 0.1498 |
| 0.806 | 12.9 | 12500 | 0.1802 | 0.1547 |
| 0.8013 | 13.42 | 13000 | 0.1859 | 0.1584 |
| 0.7913 | 13.93 | 13500 | 0.1875 | 0.1517 |
| 0.8063 | 14.45 | 14000 | 0.1799 | 0.1571 |
| 0.7991 | 14.96 | 14500 | 0.1792 | 0.1538 |
| 0.7843 | 15.48 | 15000 | 0.1753 | 0.1464 |
| 0.7905 | 16.0 | 15500 | 0.1784 | 0.1508 |
| 0.7808 | 16.51 | 16000 | 0.1771 | 0.1485 |
| 0.7743 | 17.03 | 16500 | 0.1795 | 0.1491 |
| 0.7833 | 17.54 | 17000 | 0.1722 | 0.1484 |
| 0.7763 | 18.06 | 17500 | 0.1767 | 0.1518 |
| 0.7698 | 18.58 | 18000 | 0.1720 | 0.1460 |
| 0.7571 | 19.09 | 18500 | 0.1735 | 0.1478 |
| 0.7673 | 19.61 | 19000 | 0.1817 | 0.1511 |
| 0.7415 | 20.12 | 19500 | 0.1763 | 0.1481 |
| 0.751 | 20.64 | 20000 | 0.1742 | 0.1484 |
| 0.7563 | 21.16 | 20500 | 0.1810 | 0.1611 |
| 0.7423 | 21.67 | 21000 | 0.1817 | 0.1557 |
| 0.7242 | 22.19 | 21500 | 0.1690 | 0.1446 |
| 0.7251 | 22.7 | 22000 | 0.1684 | 0.1446 |
| 0.7302 | 23.22 | 22500 | 0.1735 | 0.1430 |
| 0.733 | 23.74 | 23000 | 0.1720 | 0.1454 |
| 0.7128 | 24.25 | 23500 | 0.1668 | 0.1383 |
| 0.7184 | 24.77 | 24000 | 0.1635 | 0.1377 |
| 0.7015 | 25.28 | 24500 | 0.1646 | 0.1389 |
| 0.7198 | 25.8 | 25000 | 0.1775 | 0.1462 |
| 0.7178 | 26.32 | 25500 | 0.1705 | 0.1419 |
| 0.7199 | 26.83 | 26000 | 0.1649 | 0.1416 |
| 0.6981 | 27.35 | 26500 | 0.1724 | 0.1418 |
| 0.6886 | 27.86 | 27000 | 0.1633 | 0.1382 |
| 0.6922 | 28.38 | 27500 | 0.1698 | 0.1420 |
| 0.6833 | 28.9 | 28000 | 0.1611 | 0.1351 |
| 0.6798 | 29.41 | 28500 | 0.1639 | 0.1365 |
| 0.6711 | 29.93 | 29000 | 0.1668 | 0.1358 |
| 0.6762 | 30.44 | 29500 | 0.1682 | 0.1355 |
| 0.6594 | 30.96 | 30000 | 0.1629 | 0.1345 |
| 0.6664 | 31.48 | 30500 | 0.1625 | 0.1321 |
| 0.6838 | 31.99 | 31000 | 0.1597 | 0.1372 |
| 0.6603 | 32.51 | 31500 | 0.1583 | 0.1302 |
| 0.6468 | 33.02 | 32000 | 0.1595 | 0.1322 |
| 0.6464 | 33.54 | 32500 | 0.1609 | 0.1315 |
| 0.6623 | 34.06 | 33000 | 0.1622 | 0.1366 |
| 0.6414 | 34.57 | 33500 | 0.1587 | 0.1330 |
| 0.6242 | 35.09 | 34000 | 0.1614 | 0.1337 |
| 0.632 | 35.6 | 34500 | 0.1568 | 0.1272 |
| 0.6346 | 36.12 | 35000 | 0.1583 | 0.1274 |
| 0.6143 | 36.64 | 35500 | 0.1576 | 0.1264 |
| 0.6208 | 37.15 | 36000 | 0.1621 | 0.1263 |
| 0.6185 | 37.67 | 36500 | 0.1623 | 0.1270 |
| 0.6128 | 38.18 | 37000 | 0.1604 | 0.1268 |
| 0.6151 | 38.7 | 37500 | 0.1593 | 0.1246 |
| 0.6082 | 39.22 | 38000 | 0.1532 | 0.1238 |
| 0.6 | 39.73 | 38500 | 0.1524 | 0.1224 |
| 0.6032 | 40.25 | 39000 | 0.1521 | 0.1212 |
| 0.6016 | 40.76 | 39500 | 0.1551 | 0.1215 |
| 0.6009 | 41.28 | 40000 | 0.1523 | 0.1215 |
| 0.5875 | 41.8 | 40500 | 0.1541 | 0.1216 |
| 0.608 | 42.31 | 41000 | 0.1536 | 0.1209 |
| 0.5876 | 42.83 | 41500 | 0.1567 | 0.1211 |
| 0.5714 | 43.34 | 42000 | 0.1532 | 0.1217 |
| 0.5756 | 43.86 | 42500 | 0.1516 | 0.1196 |
| 0.5719 | 44.38 | 43000 | 0.1491 | 0.1191 |
| 0.5829 | 44.89 | 43500 | 0.1497 | 0.1193 |
| 0.5664 | 45.41 | 44000 | 0.1487 | 0.1173 |
| 0.5707 | 45.92 | 44500 | 0.1470 | 0.1164 |
| 0.5696 | 46.44 | 45000 | 0.1479 | 0.1161 |
| 0.5767 | 46.96 | 45500 | 0.1492 | 0.1175 |
| 0.5573 | 47.47 | 46000 | 0.1471 | 0.1165 |
| 0.5625 | 47.99 | 46500 | 0.1484 | 0.1168 |
| 0.5671 | 48.5 | 47000 | 0.1474 | 0.1162 |
| 0.5484 | 49.02 | 47500 | 0.1479 | 0.1158 |
| 0.555 | 49.54 | 48000 | 0.1477 | 0.1157 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
dabf84e154c72cda3fe2e3fdf99d63b1
|
jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-10_england-0_s253
|
jonatasgrosman
|
wav2vec2
| 10 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 476 | false |
# exp_w2v2r_en_xls-r_accent_us-10_england-0_s253
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
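A brief, hedged usage sketch with the same HuggingSound tool (the audio paths below are placeholders for local 16 kHz files):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-10_england-0_s253")
transcriptions = model.transcribe(["audio1.wav", "audio2.wav"])
print(transcriptions)
```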
|
cd12583a670a4bc3e3c36e6a0569c928
|
VietAI/envit5-base
|
VietAI
|
t5
| 8 | 15 |
transformers
| 0 |
question-answering
| true | true | true |
mit
|
['vi']
|
['cc100']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'translation', 'question-answering']
| false | true | true | 2,182 | false |
# EnViT5-base
State-of-the-art pretrained Transformer-based encoder-decoder model for Vietnamese and English used in [MTet's paper](https://arxiv.org/abs/2210.05610).
## How to use
For more details, do check out [our Github repo](https://github.com/vietai/mtet).
[Fine-tuning examples can be found here](https://github.com/vietai/ViT5/tree/main/finetunning_huggingface).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("VietAI/envit5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/envit5-base")
model.cuda()
# need prefix for en: and vi: sentences
inputs = [
"vi: VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam.",
"vi: Theo báo cáo mới nhất của Linkedin về danh sách việc làm triển vọng với mức lương hấp dẫn năm 2020, các chức danh công việc liên quan đến AI như Chuyên gia AI (Artificial Intelligence Specialist), Kỹ sư ML (Machine Learning Engineer) đều xếp thứ hạng cao.",
"en: Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.",
"en: We're on a journey to advance and democratize artificial intelligence through open source and open science."
]
outputs = model.generate(tokenizer(inputs, return_tensors="pt", padding=True).input_ids.to('cuda'), max_length=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
## Citation
```
@misc{mtet,
doi = {10.48550/ARXIV.2210.05610},
url = {https://arxiv.org/abs/2210.05610},
author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {MTet: Multi-domain Translation for English and Vietnamese},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
1046f80a8172b6a856f4f5e0fbbe7779
|
lunarfish/furrydiffusion
|
lunarfish
| null | 18 | 1,252 |
diffusers
| 12 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 1 | 1 | 0 |
['text-to-image', 'stable-diffusion', 'furry', 'anything-v3.0']
| false | true | true | 1,075 | false |

FurryDiffusion is a model made to generate furry art. It is still very much in beta and will keep improving! To use it, make sure to include `furry` in your prompt; to get a specific breed, add just the breed name.
Example Prompts:
```
Positive: highres, furry, fox, orange fur, blue eyes
Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, blurry
```
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
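For a local run, here is a minimal, hedged `diffusers` sketch (assuming the checkpoint loads with the standard `StableDiffusionPipeline` and a CUDA GPU is available):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "lunarfish/furrydiffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "highres, furry, fox, orange fur, blue eyes"
image = pipe(prompt).images[0]
image.save("furry_fox.png")
```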
**NOTE**: It's better to run it in Google Colab, since you can use Google's powerful GPUs for free. Go ahead and try it now!
|
99193babd3ebc26c279378896f50f2ab
|
theojolliffe/bart-large-cnn-pubmed1o3
|
theojolliffe
|
bart
| 13 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null |
['scientific_papers']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,464 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-pubmed1o3
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9359
- Rouge1: 36.7566
- Rouge2: 14.813
- Rougel: 22.4693
- Rougelsum: 33.4325
- Gen Len: 138.7332
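A minimal, hedged inference sketch with the `transformers` summarization pipeline (the input text is a placeholder for the body of a scientific article):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-large-cnn-pubmed1o3")
article = "Replace this with the text of a scientific article to summarize."
print(summarizer(article)[0]["summary_text"])
```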
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 2.028 | 1.0 | 19988 | 1.9359 | 36.7566 | 14.813 | 22.4693 | 33.4325 | 138.7332 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
972f29669d4dd7f8d1d3f2847aae7dad
|
DonatoFrancioso/NLP2122_FranciosoDonato
|
DonatoFrancioso
|
distilbert
| 13 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,007 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP2122_FranciosoDonato
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.3
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
610b11667d5954ad132f1f5f10bcd9f8
|
imfiba1991/gpt2-wikitext2
|
imfiba1991
|
gpt2
| 11 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,216 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2082
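A hedged generation sketch with the `transformers` text-generation pipeline (the prompt is illustrative; note that the validation loss above is high, so generated text may be of limited quality):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="imfiba1991/gpt2-wikitext2")
print(generator("The history of the encyclopedia begins", max_new_tokens=40)[0]["generated_text"])
```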
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 13 | 8.1476 |
| No log | 2.0 | 26 | 7.4435 |
| No log | 3.0 | 39 | 7.2082 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
e0437311c5d2265e96ba4ae6f5602989
|
bitsanlp/roberta-finetuned-DA-100k
|
bitsanlp
|
roberta
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-DA-100k
This model is a fine-tuned version of [bitsanlp/roberta-retrained_100k](https://huggingface.co/bitsanlp/roberta-retrained_100k) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
a535bc03dda9b5ec99b1afa9c0e26c46
|
jonatasgrosman/exp_w2v2t_ja_vp-100k_s219
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ja']
| false | true | true | 475 | false |
# exp_w2v2t_ja_vp-100k_s219
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
1e3fb32a3f84af763f95225905496890
|
neelrr/xlm-roberta-base-finetuned-panx-ta
|
neelrr
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- F1: 0.8145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5477 | 1.0 | 209 | 0.2732 | 0.7305 |
| 0.2506 | 2.0 | 418 | 0.2425 | 0.7626 |
| 0.168 | 3.0 | 627 | 0.2183 | 0.8145 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bd37ec4839300a4eabe486689c8d3f06
|
lighteternal/fact-or-opinion-xlmr-el
|
lighteternal
|
xlm-roberta
| 11 | 7 |
transformers
| 2 |
text-classification
| true | false | false |
apache-2.0
|
['en', 'el', 'multilingual']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'fact-or-opinion', 'transformers']
| false | true | true | 1,491 | false |
# Fact vs. opinion binary classifier, trained on a mixed EN-EL annotated corpus.
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
This is an XLM-Roberta-base model with a binary classification head. Given a sentence, it can classify it either as a fact or an opinion based on its content.
You can use this model in any of the XLM-R supported languages for the same task, taking advantage of its 0-shot learning capabilities. However, the model was trained only using English and Greek sentences.
Legend of HuggingFace API labels (see the usage sketch below):
* Label 0: Opinion/Subjective sentence
* Label 1: Fact/Objective sentence
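A hedged usage sketch with the `transformers` text-classification pipeline (the example sentences are illustrative; label names follow the legend above):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="lighteternal/fact-or-opinion-xlmr-el")

print(clf("Water boils at 100 degrees Celsius at sea level."))  # Label 1 = fact/objective
print(clf("This is the best movie ever made."))                 # Label 0 = opinion/subjective
```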
## Dataset training info
The original dataset (available here: https://github.com/1024er/cbert_aug/tree/crayon/datasets/subj) contained approximately 9000 annotated sentences (classified as subjective or objective). It was translated to Greek using Google Translate. The Greek version was then concatenated with the original English one to create the mixed EN-EL dataset.
The model was trained for 5 epochs, using batch size = 8. Detailed metrics and hyperparameters available on the "Metrics" tab.
## Evaluation Results on test set
| accuracy | precision | recall | f1 |
| ----------- | ----------- | ----------- | ----------- |
|0.952 | 0.945 | 0.960 | 0.952 |
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
|
71abf5ffa1b1a0d36f333004363282e0
|
sd-concepts-library/schloss-mosigkau
|
sd-concepts-library
| null | 10 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,144 | false |
### schloss mosigkau on Stable Diffusion
This is the `<ralph>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
c9b40c7cc16abf2f541091aefdb1c799
|
suonbo/bert-finetuned-ner
|
suonbo
|
bert
| 12 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0637
- Precision: 0.9336
- Recall: 0.9488
- F1: 0.9412
- Accuracy: 0.9854
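A hedged inference sketch with the `transformers` token-classification pipeline (the sentence is illustrative; `aggregation_strategy="simple"` groups word pieces into entities):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="suonbo/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```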
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0897 | 1.0 | 1756 | 0.0648 | 0.9152 | 0.9408 | 0.9278 | 0.9837 |
| 0.0384 | 2.0 | 3512 | 0.0601 | 0.9277 | 0.9507 | 0.9391 | 0.9859 |
| 0.0201 | 3.0 | 5268 | 0.0637 | 0.9336 | 0.9488 | 0.9412 | 0.9854 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
48d9a903a9471a7bd4e179d2e731cc7f
|
facebook/mask2former-swin-tiny-cityscapes-semantic
|
facebook
|
mask2former
| 5 | 40 |
transformers
| 0 |
image-segmentation
| true | false | false |
other
| null |
['coco']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['vision', 'image-segmentation']
| false | true | true | 2,928 | false |
# Mask2Former
Mask2Former model trained on Cityscapes semantic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation
](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).
Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA,
[MaskFormer](https://arxiv.org/abs/2107.06278) both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance
without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

## Intended uses & limitations
You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-semantic")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to processor for postprocessing
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
|
e622d93122371942b2caf9013e61eb71
|
andrewburns/clay-icon
|
andrewburns
| null | 56 | 14 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 4,267 | false |
### clay_icon Dreambooth model trained by andrewburns with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
clay (use that in your prompt)

|
c71145f09aa42bd853cdbc5359a3596c
|
gngpostalsrvc/w2v2-ami
|
gngpostalsrvc
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,770 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-ami
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8686
- Wer: 0.2861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6021 | 3.07 | 500 | 2.9176 | 0.9997 |
| 2.5006 | 6.13 | 1000 | 1.0535 | 0.3617 |
| 0.9926 | 9.2 | 1500 | 0.8614 | 0.3036 |
| 0.809 | 12.27 | 2000 | 0.8676 | 0.2921 |
| 0.73 | 15.34 | 2500 | 0.8190 | 0.2966 |
| 0.6658 | 18.4 | 3000 | 0.8707 | 0.2900 |
| 0.6295 | 21.47 | 3500 | 0.8660 | 0.2821 |
| 0.6089 | 24.54 | 4000 | 0.8767 | 0.2829 |
| 0.5974 | 27.61 | 4500 | 0.8686 | 0.2861 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
97016253d217a5591665ba2396466736
|
matteow/fin_sentiment
|
matteow
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,109 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fin_sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.4842 | 0.8129 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
4898c0a79ddaa0898d2266d073d50ea3
|
Helsinki-NLP/opus-mt-pis-fr
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-pis-fr
* source languages: pis
* target languages: fr
* OPUS readme: [pis-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.fr | 24.9 | 0.421 |
|
52cf37018c78ede5b7237fa456b3f3ed
|
timm/eca_nfnet_l0
|
timm
| null | 4 | 532 |
timm
| 1 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm', 'normalization-free', 'efficient-channel-attention']
| false | true | true | 4,295 | false |
# ECA-NFNet-L0
Pretrained model on [ImageNet](http://www.image-net.org/), this is a variant of the [NFNet (Normalization Free)](https://arxiv.org/abs/2102.06171) model family.
## Model description
This model variant was slimmed down from the original F0 variant in the paper for improved runtime characteristics (throughput, memory use) in PyTorch, on a GPU accelerator. It utilizes [Efficient Channel Attention (ECA)](https://arxiv.org/abs/1910.03151) instead of Squeeze-Excitation. It also features SiLU activations instead of the usual GELU.
Like other models in the NF family, this model contains no normalization layers (batch, group, etc). The models make use of [Weight Standardized](https://arxiv.org/abs/1903.10520) convolutions with additional scaling values in lieu of normalization layers.
## Intended uses & limitations
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
### How to use
You can use this model with the usual factory method in [`timm`](https://github.com/rwightman/pytorch-image-models):
```python
import PIL
import timm
import torch
model = timm.create_model("hf_hub:timm/eca_nfnet_l0", pretrained=True)  # load the pretrained weights from the Hub
config = model.default_cfg
img_size = config["test_input_size"][-1] if "test_input_size" in config else config["input_size"][-1]
transform = timm.data.transforms_factory.transforms_imagenet_eval(
img_size=img_size,
interpolation=config["interpolation"],
mean=config["mean"],
std=config["std"],
crop_pct=config["crop_pct"],
)
img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")
input_tensor = transform(img)  # apply the ImageNet eval transforms to the loaded image
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1
with torch.no_grad():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
### Limitations and bias
The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will
probably not generalize well on drawings or images containing multiple objects with different labels.
The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or
models created by fine-tuning this model will work better on images picturing scenes from these countries (see
[this paper](https://arxiv.org/abs/1906.02659) for examples).
More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an
unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in
the training images.
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million
hand-annotated images spanning 1,000 categories.
## Training procedure
For stability during training it is highly recommended to train all NFNet variants with gradient clipping enabled. This model was trained with an Adaptive Gradient Clipping (AGC) factor of 0.015 as described in [the paper](https://arxiv.org/abs/2102.06171). Similar to the paper, a cosine learning rate decay was employed using SGD w/ nesterov. Moderate to heavy augmentation ([RandAugment](https://arxiv.org/abs/1909.13719)) and regularization (dropout, stochastic depth) is recommended for training.
### Preprocessing
The images are resized using bicubic interpolation to 288x288 and normalized with the usual ImageNet statistics.
## Evaluation results
This model has a top1-accuracy of 82.6% and a top-5 accuracy of 96.5% on the ImageNet evaluation set.
### BibTeX entry and citation info
NFNet model architecture:
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
L0 model variant & pretraining:
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
544877f14049e05670e9bf3f10fd10da
|
Helsinki-NLP/opus-mt-bzs-fi
|
Helsinki-NLP
|
marian
| 10 | 9 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-bzs-fi
* source languages: bzs
* target languages: fi
* OPUS readme: [bzs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.fi | 24.7 | 0.464 |
|
aa2c307e6e9c48f2d113e0a41f572c52
|
Tinsae/beyaynetu
|
Tinsae
| null | 23 | 19 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 822 | false |
### beyaynetu Dreambooth model trained by Tinsae with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
.png)
.png)
.png)
.png)
|
b9b5a5237d4df8050efdf6a37a361b6d
|
bthomas/tuto-bert-finetuned-ner
|
bthomas
|
bert
| 10 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,517 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tuto-bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0827
- Precision: 0.9380
- Recall: 0.9525
- F1: 0.9452
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0218 | 1.0 | 1756 | 0.0714 | 0.9372 | 0.9524 | 0.9447 | 0.9862 |
| 0.0123 | 2.0 | 3512 | 0.0761 | 0.9347 | 0.9510 | 0.9428 | 0.9859 |
| 0.0063 | 3.0 | 5268 | 0.0827 | 0.9380 | 0.9525 | 0.9452 | 0.9867 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
bac342223c1599c252019dc83e4a93b6
|
Stricky/JellyCute7-Style-Hypernetwork
|
Stricky
| null | 5 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,060 | false |
# JellyCute7 (artist) Style [Hypernetwork]
Hypernetwork trained on art by artist [JellyCute7](https://www.pixiv.net/en/users/1053112).
[](https://www.buymeacoffee.com/stricky)
### Settings
```
Model: NAI
Layer structure: (1, 2, 1)
Activation function: relu
Layer normalization: False
Use dropout: False
Raw dataset size: 208 images
Final dataset size: 832 images
Size: 512x512
Create flipped copies: True
Split oversized images: True
Captions: DeepBooru
Learning rate: 0.000005 -> 13000 steps
Recommended: 13000 steps
```
### Steps comparison (recommended: 13000)

### Sample images

### Sample output (jellytits7-13000)

|
3e7c766a3050fa6bb8f333b96a47b208
|