modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
javilonso/classificationPolEsp1 | javilonso | 2022-03-30T09:02:50Z | 3 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-30T07:49:20Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationPolEsp1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationPolEsp1
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3728
- Validation Loss: 0.6217
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17958, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6282 | 0.6017 | 0 |
| 0.5129 | 0.6177 | 1 |
| 0.3728 | 0.6217 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
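The usage sections above are empty, so here is a minimal inference sketch, assuming the checkpoint loads as a standard TF sequence-classification model (as the `tf` and `text-classification` tags suggest); the example sentence and label interpretation are illustrative only:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Hedged sketch: assumes the checkpoint is a standard TF BERT classifier.
tokenizer = AutoTokenizer.from_pretrained("javilonso/classificationPolEsp1")
model = TFAutoModelForSequenceClassification.from_pretrained("javilonso/classificationPolEsp1")

inputs = tokenizer("Un ejemplo de texto en español.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # class probabilities; the card does not document label names
```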
|
neibla/distilbert-base-uncased-finetuned-emotion | neibla | 2022-03-30T08:56:26Z | 9 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-30T08:22:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254917237562972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.855 | 1.0 | 250 | 0.3211 | 0.905 | 0.9017 |
| 0.2561 | 2.0 | 500 | 0.2187 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
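The card leaves the usage sections empty; as a minimal sketch (assuming the checkpoint works with the standard `pipeline` API, which the `text-classification` tag suggests), inference could look like this:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="neibla/distilbert-base-uncased-finetuned-emotion",
)
# Labels follow the emotion dataset's classes (sadness, joy, love, anger,
# fear, surprise) if id2label is set; otherwise they appear as LABEL_0..5.
print(classifier("I can't wait to see you again!"))
```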
|
shrishail/t5_paraphrase_msrp_paws | shrishail | 2022-03-30T05:47:27Z | 38 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "paraphrase-generation", "text-generation", "Conditional Generation", "en", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2022-03-29T13:13:11Z |
---
language: "en"
tags:
- paraphrase-generation
- text-generation
- Conditional Generation
inference: false
---
# Simple model for Paraphrase Generation
## Model description
T5-based model for generating paraphrased sentences. It is trained on the labeled [MSRP](https://www.microsoft.com/en-us/download/details.aspx?id=52398) and [Google PAWS](https://github.com/google-research-datasets/paws) datasets.
## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Run on GPU when available; the original snippet moved tensors to "cuda"
# without moving the model, which fails on CPU-only machines.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("shrishail/t5_paraphrase_msrp_paws")
model = AutoModelForSeq2SeqLM.from_pretrained("shrishail/t5_paraphrase_msrp_paws").to(device)

sentence = "This is something which i cannot understand at all"
text = "paraphrase: " + sentence + " </s>"

encoding = tokenizer(text, padding=True, return_tensors="pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)

outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=256,
    do_sample=True,
    top_k=120,
    top_p=0.95,
    early_stopping=True,
    num_return_sequences=5,
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
|
lazyturtl/roomidentifier | lazyturtl | 2022-03-30T04:10:41Z | 89 | 3 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-03-30T04:10:32Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomidentifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# roomidentifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### LivingRoom

|
samayash/finetuning-financial-news-sentiment | samayash | 2022-03-30T03:36:40Z | 4 | 3 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-30T03:27:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-financial-news-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-financial-news-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8751
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio | scasutt | 2022-03-30T03:35:01Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-29T11:30:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6445
- Wer: 0.4938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3761 | 1.05 | 250 | 3.4022 | 0.9954 |
| 3.0858 | 2.1 | 500 | 3.4684 | 0.9954 |
| 2.6302 | 3.15 | 750 | 1.7989 | 0.9865 |
| 1.1292 | 4.2 | 1000 | 0.8558 | 0.7355 |
| 0.8371 | 5.25 | 1250 | 0.7319 | 0.6621 |
| 0.5992 | 6.3 | 1500 | 0.6848 | 0.6147 |
| 0.5189 | 7.35 | 1750 | 0.6522 | 0.5742 |
| 0.454 | 8.4 | 2000 | 0.6601 | 0.5531 |
| 0.3896 | 9.45 | 2250 | 0.6138 | 0.5439 |
| 0.3678 | 10.5 | 2500 | 0.6436 | 0.5320 |
| 0.3232 | 11.55 | 2750 | 0.5920 | 0.5174 |
| 0.2926 | 12.6 | 3000 | 0.6615 | 0.5107 |
| 0.3041 | 13.65 | 3250 | 0.6311 | 0.5015 |
| 0.2882 | 14.7 | 3500 | 0.6182 | 0.5004 |
| 0.2868 | 15.75 | 3750 | 0.6266 | 0.4943 |
| 0.2508 | 16.81 | 4000 | 0.6587 | 0.4965 |
| 0.2563 | 17.86 | 4250 | 0.6634 | 0.4939 |
| 0.2213 | 18.91 | 4500 | 0.6441 | 0.4925 |
| 0.2255 | 19.96 | 4750 | 0.6445 | 0.4938 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
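The card gives no usage example, so here is a hedged transcription sketch using the standard wav2vec2 CTC decoding path; `sample.wav` is a hypothetical 16 kHz mono recording, and the checkpoint is assumed to ship a processor/vocabulary:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # hypothetical input file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```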
|
javilonso/classificationEsp2_Attraction | javilonso | 2022-03-30T03:04:09Z | 5 | 0 | transformers | ["transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-29T23:17:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: javilonso/classificationEsp2_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# javilonso/classificationEsp2_Attraction
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9927
- Validation Loss: 0.9926
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35916, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8200 | 0.9930 | 0 |
| 0.9942 | 0.9947 | 1 |
| 0.9927 | 0.9926 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tharangahf/botcircuits_nlu | tharangahf | 2022-03-30T02:32:47Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2022-03-30T02:32:47Z |
---
license: apache-2.0
---
|
ntt123/hifigan_ljs_22k | ntt123 | 2022-03-30T01:47:26Z | 0 | 0 | null | ["tensorboard", "license:cc-by-nc-sa-4.0", "region:us"] | null | 2022-03-29T02:20:52Z |
---
license: cc-by-nc-sa-4.0
---
|
cammiemw/bert-marco-hdct | cammiemw | 2022-03-30T01:21:38Z | 3 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-30T01:09:55Z |
---
license: cc-by-nc-4.0
---
|
DrishtiSharma/poem-gen-spanish-t5-small-v6 | DrishtiSharma | 2022-03-29T23:45:09Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-29T18:58:46Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: poem-gen-spanish-t5-small-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-spanish-t5-small-v6
This model is a fine-tuned version of [hackathon-pln-es/poem-gen-spanish-t5-small](https://huggingface.co/hackathon-pln-es/poem-gen-spanish-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8551 | 0.73 | 30000 | 2.9296 |
| 2.6961 | 1.46 | 60000 | 2.9005 |
| 2.5756 | 2.19 | 90000 | 2.8786 |
| 2.5095 | 2.93 | 120000 | 2.8621 |
| 2.4061 | 3.66 | 150000 | 2.8830 |
| 2.3161 | 4.39 | 180000 | 2.8865 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/PointsToSentence | BigSalmon | 2022-03-29T23:11:32Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-29T22:58:46Z |
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsToSentence")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsToSentence")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
It converts keyword points into a sentence or sentences.
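Since the card only documents the prompt format, here is a hedged generation sketch that reuses the `tokenizer` and `model` loaded above; the decoding settings are illustrative guesses, not values from the card:
```python
# Hedged sketch: continue the points-to-sentence prompt shown above.
prompt = """- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text:"""

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```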
|
efederici/sentence-it5-base | efederici | 2022-03-29T23:09:01Z | 35 | 4 | sentence-transformers | ["sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-29T19:57:59Z |
---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base')
model = AutoModel.from_pretrained('efederici/sentence-IT5-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
  (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
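As a follow-up to the snippets above, a minimal sketch for scoring sentence similarity with these embeddings (using `sentence_transformers.util`, which is part of the library the card already uses):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('efederici/sentence-IT5-base')
emb = model.encode(
    ["Questo è un esempio di frase", "Questo è un ulteriore esempio"],
    convert_to_tensor=True,
)
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity in [-1, 1]
```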
|
espnet/bur_openslr80_hubert | espnet | 2022-03-29T22:19:50Z | 0 | 0 | null | ["region:us"] | null | 2022-03-28T22:04:54Z |
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 21 22:59:35 UTC 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `7ae4efd81778436a98b822483e8123adba6aa430`
- Commit date: `Tue Mar 15 20:11:18 2022 -0400`
## asr_train_asr_hubert_transformer_adam_specaug_raw_bpe150
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|4227|39.1|50.4|10.5|6.1|67.0|99.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|33345|82.2|7.6|10.1|3.6|21.4|99.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|18237|70.7|17.7|11.6|2.5|31.8|99.8|
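For reference, `Err` in these tables is the usual edit-distance error rate, `Err = Sub + Del + Ins` (all as percentages of reference units); e.g. in the WER row, 50.4 + 10.5 + 6.1 = 67.0. WER counts words, CER characters, and TER the model's BPE subword tokens (here bpe150), while `S.Err` is the share of sentences containing at least one error.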
|
BigSalmon/PointsOneSent | BigSalmon | 2022-03-29T21:26:49Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-29T21:19:54Z |
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/PointsOneSent")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/PointsOneSent")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
-
```
It should also be able to do all that this can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
|
efederici/sentence-it5-small | efederici | 2022-03-29T17:29:14Z | 1 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "t5", "feature-extraction", "sentence-similarity", "transformers", "it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-27T15:19:10Z |
---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-small)) small model trained for asymmetric semantic search: the query is a keyword and the paragraph is a short news article.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-small')
model = AutoModel.from_pretrained('efederici/sentence-IT5-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
  (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
GleamEyeBeast/ascend | GleamEyeBeast | 2022-03-29T16:49:48Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-29T01:37:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: ascend
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ascend
This model is a fine-tuned version of [GleamEyeBeast/ascend](https://huggingface.co/GleamEyeBeast/ascend) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3718
- Wer: 0.6412
- Cer: 0.2428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 0.5769 | 1.0 | 688 | 1.1864 | 0.7716 | 0.3159 |
| 0.5215 | 2.0 | 1376 | 1.1613 | 0.7504 | 0.2965 |
| 0.4188 | 3.0 | 2064 | 1.1644 | 0.7389 | 0.2950 |
| 0.3695 | 4.0 | 2752 | 1.1937 | 0.7184 | 0.2815 |
| 0.3404 | 5.0 | 3440 | 1.1947 | 0.7083 | 0.2719 |
| 0.2885 | 6.0 | 4128 | 1.2314 | 0.7108 | 0.2685 |
| 0.2727 | 7.0 | 4816 | 1.2243 | 0.6850 | 0.2616 |
| 0.2417 | 8.0 | 5504 | 1.2506 | 0.6767 | 0.2608 |
| 0.2207 | 9.0 | 6192 | 1.2804 | 0.6922 | 0.2595 |
| 0.2195 | 10.0 | 6880 | 1.2582 | 0.6818 | 0.2575 |
| 0.1896 | 11.0 | 7568 | 1.3101 | 0.6814 | 0.2545 |
| 0.1961 | 12.0 | 8256 | 1.2793 | 0.6706 | 0.2526 |
| 0.1752 | 13.0 | 8944 | 1.2643 | 0.6584 | 0.2509 |
| 0.1638 | 14.0 | 9632 | 1.3152 | 0.6588 | 0.2482 |
| 0.1522 | 15.0 | 10320 | 1.3098 | 0.6433 | 0.2439 |
| 0.1351 | 16.0 | 11008 | 1.3253 | 0.6537 | 0.2447 |
| 0.1266 | 17.0 | 11696 | 1.3394 | 0.6365 | 0.2418 |
| 0.1289 | 18.0 | 12384 | 1.3718 | 0.6412 | 0.2443 |
| 0.1204 | 19.0 | 13072 | 1.3708 | 0.6433 | 0.2433 |
| 0.1189 | 20.0 | 13760 | 1.3718 | 0.6412 | 0.2428 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tbosse/bert-base-german-cased-finetuned-subj_v1 | tbosse | 2022-03-29T15:59:49Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-29T14:22:30Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v1
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Precision: 0.1875
- Recall: 0.0077
- F1: 0.0147
- Accuracy: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 |
| No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 |
| No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
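The usage sections are empty, so here is a minimal sketch assuming the standard token-classification `pipeline` works for this checkpoint; the German example sentence is illustrative, and the emitted tags depend on the checkpoint's (undocumented) label config:
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="tbosse/bert-base-german-cased-finetuned-subj_v1",
    aggregation_strategy="simple",
)
print(tagger("Dieser Film ist meiner Meinung nach großartig."))
```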
|
sayef/fsner-bert-base-uncased | sayef | 2022-03-29T14:20:35Z | 9 | 6 | transformers | ["transformers", "pytorch", "bert", "feature-extraction", "arxiv:2008.10570", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-02T23:29:05Z |
# FSNER
Implemented by [sayef](https://huggingface.co/sayef).
# Overview
The FSNER model was proposed in [Example-Based Named Entity Recognition](https://arxiv.org/abs/2008.10570) by Morteza
Ziyadi, Yuting Sun, Abhishek Goswami, Jade Huang, Weizhu Chen. To identify entity spans in a new domain, it uses a
train-free few-shot learning approach inspired by question-answering.
## Abstract
> We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples.
## Model Training Details
| identifier | epochs | datasets |
| ---------- |:------:|:-----------------------------------------------------------------------------------------------:|
| [sayef/fsner-bert-base-uncased](https://huggingface.co/sayef/fsner-bert-base-uncased) | 25 | ontonotes5, conll2003, wnut2017, mit_movie_trivia, mit_restaurant and fin (Alvarado et al.). |
## Installation and Example Usage
You can use the FSNER model in 3 ways:
1. Install directly from PyPI: `pip install fsner` and import the model as shown in the code example below
or
2. Install from source: `pip install .` and import the model as shown in the code example below
or
3. Clone [repo](https://github.com/sayef/fsner) and add absolute path of `fsner/src` directory to your PYTHONPATH and
import the model as shown in the code example below
```python
import json
from fsner import FSNERModel, FSNERTokenizerUtils, pretty_embed
query_texts = [
    "Does Luke's serve lunch?",
    "Chang does not speak Taiwanese very well.",
    "I like Berlin."
]

# Each list in supports contains the examples of one entity type.
# Wrap entities with [E] and [/E] in the examples.
# Each sentence should have only one pair of [E] ... [/E].
support_texts = {
    "Restaurant": [
        "What time does [E] Subway [/E] open for breakfast?",
        "Is there a [E] China Garden [/E] restaurant in newark?",
        "Does [E] Le Cirque [/E] have valet parking?",
        "Is there a [E] McDonalds [/E] on main street?",
        "Does [E] Mike's Diner [/E] offer huge portions and outdoor dining?"
    ],
    "Language": [
        "Although I understood no [E] French [/E] in those days , I was prepared to spend the whole day with Chien - chien .",
        "like what the hell 's that called in [E] English [/E] ? I have to register to be here like since I 'm a foreigner .",
        "So , I 'm also working on an [E] English [/E] degree because that 's my real interest .",
        "Al - Jazeera TV station , established in November 1996 in Qatar , is an [E] Arabic - language [/E] news TV station broadcasting global news and reports nonstop around the clock .",
        "They think it 's far better for their children to be here improving their [E] English [/E] than sitting at home in front of a TV . \"",
        "The only solution seemed to be to have her learn [E] French [/E] .",
        "I have to read sixty pages of [E] Russian [/E] today ."
    ]
}
device = 'cpu'
tokenizer = FSNERTokenizerUtils("sayef/fsner-bert-base-uncased")
queries = tokenizer.tokenize(query_texts).to(device)
supports = tokenizer.tokenize(list(support_texts.values())).to(device)
model = FSNERModel("sayef/fsner-bert-base-uncased")
model.to(device)
p_starts, p_ends = model.predict(queries, supports)
# One can prepare supports once and reuse multiple times with different queries
# ------------------------------------------------------------------------------
# start_token_embeddings, end_token_embeddings = model.prepare_supports(supports)
# p_starts, p_ends = model.predict(queries, start_token_embeddings=start_token_embeddings,
# end_token_embeddings=end_token_embeddings)
output = tokenizer.extract_entity_from_scores(query_texts, queries, p_starts, p_ends,
entity_keys=list(support_texts.keys()), thresh=0.50)
print(json.dumps(output, indent=2))
# install displacy for pretty embed
pretty_embed(query_texts, output, list(support_texts.keys()))
```
The `pretty_embed` call renders the detected spans in displaCy style:
- Does **Luke's** `[Restaurant]` serve lunch?
- Chang does not speak **Taiwanese** `[Language]` very well.
- I like Berlin. (no entity detected)
## Datasets preparation
1. We need to convert the dataset into the following format. Let's say we have a dataset file train.json like the following:
   - Each list in supports contains the examples of one entity type.
   - Wrap entities with [E] and [/E] in the examples.
   - Each example should have only one pair of [E] ... [/E].
```json
{
    "CARDINAL_NUMBER": [
        "Washington , cloudy , [E] 2 [/E] to 6 degrees .",
        "New Dehli , sunny , [E] 6 [/E] to 19 degrees .",
        "Well this is number [E] two [/E] .",
        "....."
    ],
    "LANGUAGE": [
        "They do n't have the Quicken [E] Dutch [/E] version ?",
        "they learned a lot of [E] German [/E] .",
        "and then [E] Dutch [/E] it 's Mifrau",
        "...."
    ],
    "MONEY": [
        "Per capita personal income ranged from $ [E] 11,116 [/E] in Mississippi to $ 23,059 in Connecticut ... .",
        "The trade surplus was [E] 582 million US dollars [/E] .",
        "It settled with a loss of 4.95 cents at $ [E] 1.3210 [/E] a pound .",
        "...."
    ]
}
```
2. A converted ontonotes5 dataset can be found here:
1. [train](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.train.json)
2. [dev](https://gist.githubusercontent.com/sayef/46deaf7e6c6e1410b430ddc8aff9c557/raw/ea7ae2ae933bfc9c0daac1aa52a9dc093d5b36f4/ontonotes5.dev.json)
3. Then the trainer script can be used to train/evaluate your FSNER model.
```bash
fsner trainer --pretrained-model bert-base-uncased --mode train --train-data train.json --val-data val.json \
--train-batch-size 6 --val-batch-size 6 --n-examples-per-entity 10 --neg-example-batch-ratio 1/3 --max-epochs 25 --device gpu \
--gpus -1 --strategy ddp
```
|
maretamasaeva/roberta-finetuned-freeform | maretamasaeva | 2022-03-29T14:19:27Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-finetuned-freeform
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-freeform
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6989
- Accuracy: 0.4668
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6919 | 1.0 | 8094 | 0.6910 | 0.4668 |
| 0.6912 | 2.0 | 16188 | 0.6934 | 0.4668 |
| 0.6904 | 3.0 | 24282 | 0.6976 | 0.4668 |
| 0.6918 | 4.0 | 32376 | 0.6989 | 0.4668 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ArtemChistyakov-2/f | ArtemChistyakov-2 | 2022-03-29T12:21:18Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2022-03-29T12:21:18Z |
---
license: apache-2.0
---
|
gayanin/bart-med-term-conditional-masking-0 | gayanin | 2022-03-29T12:03:56Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-28T22:12:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-med-term-conditional-masking-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5041
- Rouge2 Precision: 0.7497
- Rouge2 Recall: 0.5246
- Rouge2 Fmeasure: 0.5986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6381 | 1.0 | 13915 | 0.5595 | 0.734 | 0.5152 | 0.5873 |
| 0.5429 | 2.0 | 27830 | 0.5243 | 0.7441 | 0.5225 | 0.5956 |
| 0.5002 | 3.0 | 41745 | 0.5078 | 0.7482 | 0.5238 | 0.5976 |
| 0.4607 | 4.0 | 55660 | 0.5041 | 0.7497 | 0.5246 | 0.5986 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
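The card omits the expected input format, so this is only a rough sketch that runs the checkpoint as a generic seq2seq model over a masked sentence; the medical example and `<mask>` usage are assumptions based on the model name:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gayanin/bart-med-term-conditional-masking-0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The patient was diagnosed with <mask> after the blood test."  # assumed format
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```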
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms | scasutt | 2022-03-29T11:29:52Z | 3 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-28T18:54:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio_10ms
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5945
- Wer: 0.4929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4049 | 1.05 | 250 | 3.3497 | 1.0 |
| 3.0851 | 2.1 | 500 | 3.4440 | 1.0 |
| 2.3512 | 3.15 | 750 | 1.5938 | 0.9317 |
| 1.1762 | 4.2 | 1000 | 0.8481 | 0.7333 |
| 0.903 | 5.25 | 1250 | 0.7180 | 0.6484 |
| 0.6754 | 6.3 | 1500 | 0.6603 | 0.6044 |
| 0.5961 | 7.35 | 1750 | 0.6410 | 0.5778 |
| 0.5325 | 8.4 | 2000 | 0.6245 | 0.5545 |
| 0.4685 | 9.45 | 2250 | 0.5925 | 0.5359 |
| 0.4526 | 10.5 | 2500 | 0.5991 | 0.5345 |
| 0.3975 | 11.55 | 2750 | 0.5916 | 0.5228 |
| 0.3672 | 12.6 | 3000 | 0.5882 | 0.5037 |
| 0.3774 | 13.65 | 3250 | 0.5693 | 0.5028 |
| 0.3489 | 14.7 | 3500 | 0.5645 | 0.5018 |
| 0.3593 | 15.75 | 3750 | 0.5977 | 0.5043 |
| 0.3167 | 16.81 | 4000 | 0.6049 | 0.5018 |
| 0.3225 | 17.86 | 4250 | 0.6172 | 0.4921 |
| 0.2807 | 18.91 | 4500 | 0.5937 | 0.4923 |
| 0.2889 | 19.96 | 4750 | 0.5945 | 0.4929 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KeithHorgan/TweetClimateAnalysis | KeithHorgan | 2022-03-29T10:01:24Z | 4 | 1 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "autotrain", "unk", "dataset:KeithHorgan98/autotrain-data-TweetClimateAnalysis", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-29T10:16:42Z |
---
tags: autotrain
language: unk
widget:
- text: "Climate Change is a hoax"
- text: "It is freezing, where is global warming"
datasets:
- KeithHorgan98/autotrain-data-TweetClimateAnalysis
co2_eq_emissions: 133.19491276284793
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 678720226
- CO2 Emissions (in grams): 133.19491276284793
## Validation Metrics
- Loss: 0.4864234924316406
- Accuracy: 0.865424430641822
- Macro F1: 0.7665472174344069
- Micro F1: 0.8654244306418221
- Weighted F1: 0.8586375445115083
- Macro Precision: 0.8281449061702826
- Micro Precision: 0.865424430641822
- Weighted Precision: 0.8619727477790186
- Macro Recall: 0.736576343905098
- Micro Recall: 0.865424430641822
- Weighted Recall: 0.865424430641822
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/KeithHorgan98/autotrain-TweetClimateAnalysis-678720226
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("KeithHorgan98/autotrain-TweetClimateAnalysis-678720226", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
PereLluis13/wav2vec2-xls-r-300m-ca | PereLluis13 | 2022-03-29T08:43:53Z | 52 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "collectivat/tv3_parla", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "projecte-aina/parlament_parla", "robust-speech-event", "ca", "dataset:mozilla-foundation/common_voice_8_0", "dataset:collectivat/tv3_parla", "dataset:projecte-aina/parlament_parla", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- ca
license: apache-2.0
tags:
- automatic-speech-recognition
- collectivat/tv3_parla
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- projecte-aina/parlament_parla
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- collectivat/tv3_parla
- projecte-aina/parlament_parla
model-index:
- name: wav2vec2-xls-r-300m-ca
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_8_0 ca
type: mozilla-foundation/common_voice_8_0
args: ca
metrics:
- name: Test WER
type: wer
value: 13.170091241317552
- name: Test CER
type: cer
value: 3.356726205534543
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: projecte-aina/parlament_parla ca
type: projecte-aina/parlament_parla
args: clean
metrics:
- name: Test WER
type: wer
value: 8.048005647723261
- name: Test CER
type: cer
value: 2.240912911020065
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: collectivat/tv3_parla ca
type: collectivat/tv3_parla
args: ca
metrics:
- name: Test WER
type: wer
value: 23.320629787889285
- name: Test CER
type: cer
value: 10.439216202089989
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: speech-recognition-community-v2/dev_data ca
type: speech-recognition-community-v2/dev_data
args: ca
metrics:
- name: Test WER
type: wer
value: 31.99671115046487
- name: Test CER
type: cer
value: 15.820020687277325
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ca
metrics:
- name: Test WER
type: wer
value: 22.04
---
# wav2vec2-xls-r-300m-ca
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA, the [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla) and [parlament_parla](https://huggingface.co/datasets/projecte-aina/parlament_parla) datasets.
It achieves the following results on the evaluation set (for the three datasets):
- Loss: 0.2472
- Wer: 0.1499
## Model description
Please check the original [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) model card. This is just a fine-tuned version of that model.
## Intended uses & limitations
Like any model trained on crowdsourced data, this model can reflect the biases and particularities of the data used to train it. Moreover, since this is a speech recognition model, it may underperform for some lower-resourced dialects of Catalan.
## Training and evaluation data
More information needed
## Training procedure
The data is preprocessed to remove characters not in the Catalan alphabet. Moreover, numbers are verbalized using code provided by [@ccoreilly](https://github.com/ccoreilly), which can be found in the text/ folder or [here](https://github.com/CollectivaT-dev/catotron-cpu/blob/master/text/numbers_ca.py).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 18.0
- mixed_precision_training: Native AMP
### Training results
Check the TensorBoard tab for the training profile and evaluation results during training. The model was evaluated on the test splits of each of the datasets used during training.
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2099 | 0.09 | 500 | 3.4125 | 1.0 |
| 2.9961 | 0.18 | 1000 | 2.9224 | 1.0 |
| 2.2147 | 0.26 | 1500 | 0.6521 | 0.5568 |
| 1.3017 | 0.35 | 2000 | 0.3153 | 0.2761 |
| 1.1196 | 0.44 | 2500 | 0.2444 | 0.2367 |
| 1.0712 | 0.53 | 3000 | 0.2324 | 0.2132 |
| 1.052 | 0.62 | 3500 | 0.2173 | 0.2032 |
| 1.2813 | 2.13 | 4000 | 0.3326 | 0.2099 |
| 1.2365 | 2.4 | 4500 | 0.3224 | 0.2003 |
| 1.2193 | 2.66 | 5000 | 0.3198 | 0.1957 |
| 1.2072 | 2.93 | 5500 | 0.3063 | 0.1933 |
| 1.213 | 3.2 | 6000 | 0.3051 | 0.1980 |
| 1.2074 | 3.46 | 6500 | 0.3012 | 0.1879 |
| 1.1918 | 3.73 | 7000 | 0.2947 | 0.1829 |
| 1.1893 | 4.0 | 7500 | 0.2895 | 0.1807 |
| 1.1751 | 4.26 | 8000 | 0.2878 | 0.1776 |
| 1.1628 | 4.53 | 8500 | 0.2835 | 0.1731 |
| 1.1577 | 4.79 | 9000 | 0.2816 | 0.1761 |
| 1.1448 | 5.06 | 9500 | 0.2757 | 0.1740 |
| 1.1407 | 5.33 | 10000 | 0.2768 | 0.1798 |
| 1.1401 | 5.59 | 10500 | 0.2780 | 0.1816 |
| 1.1333 | 5.86 | 11000 | 0.2748 | 0.1750 |
| 1.1571 | 6.13 | 11500 | 0.2808 | 0.1708 |
| 1.1505 | 6.39 | 12000 | 0.2726 | 0.1692 |
| 1.1519 | 6.66 | 12500 | 0.2749 | 0.1654 |
| 1.136 | 6.93 | 13000 | 0.2765 | 0.1643 |
| 1.1326 | 7.19 | 13500 | 0.2706 | 0.1668 |
| 1.1342 | 7.46 | 14000 | 0.2665 | 0.1638 |
| 1.1286 | 7.72 | 14500 | 0.2669 | 0.1636 |
| 1.1243 | 7.99 | 15000 | 0.2619 | 0.1623 |
| 1.1173 | 8.26 | 15500 | 0.2652 | 0.1604 |
| 1.1129 | 8.52 | 16000 | 0.2610 | 0.1598 |
| 1.1091 | 8.79 | 16500 | 0.2608 | 0.1584 |
| 1.1053 | 9.06 | 17000 | 0.2633 | 0.1664 |
| 1.1004 | 9.32 | 17500 | 0.2594 | 0.1662 |
| 1.0995 | 9.59 | 18000 | 0.2623 | 0.1569 |
| 1.0964 | 9.86 | 18500 | 0.2624 | 0.1597 |
| 1.09 | 10.12 | 19000 | 0.2577 | 0.1578 |
| 1.089 | 10.39 | 19500 | 0.2574 | 0.1531 |
| 1.0864 | 10.66 | 20000 | 0.2556 | 0.1546 |
| 1.0806 | 10.92 | 20500 | 0.2548 | 0.1583 |
| 1.0842 | 11.19 | 21000 | 0.2550 | 0.1542 |
| 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 |
| 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 |
| 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 |
| 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 |
| 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 |
| 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 |
| 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 |
| 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 |
| 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 |
| 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 |
| 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 |
| 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 |
| 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 |
| 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 |
| 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 |
| 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 |
| 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 |
| 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 |
| 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 |
| 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
# Thanks
We want to thank both [@ccoreilly](https://github.com/ccoreilly) and [@gullabi](https://github.com/gullabi), who have contributed their own resources and knowledge to making this model possible.
|
STARBORN/MMC | STARBORN | 2022-03-29T07:14:35Z | 0 | 1 | null | ["license:mit", "region:us"] | null | 2022-03-29T07:12:26Z |
---
license: mit
---
Metamodel Card (MMC) builds on the MC and DC schemas by adding system-level abstraction to the data. MMC instantiations follow
|
gayanin/t5-small-med-term-conditional-masking-0 | gayanin | 2022-03-29T03:19:04Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-28T22:04:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-med-term-conditional-masking-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-med-term-conditional-masking-0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6688
- Rouge2 Precision: 0.694
- Rouge2 Recall: 0.4781
- Rouge2 Fmeasure: 0.5479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.9525 | 1.0 | 13915 | 0.8148 | 0.6657 | 0.4581 | 0.5252 |
| 0.8541 | 2.0 | 27830 | 0.7562 | 0.6779 | 0.4694 | 0.5371 |
| 0.8183 | 3.0 | 41745 | 0.7268 | 0.6827 | 0.4722 | 0.5405 |
| 0.8033 | 4.0 | 55660 | 0.7074 | 0.6861 | 0.4729 | 0.5419 |
| 0.7727 | 5.0 | 69575 | 0.6934 | 0.6872 | 0.4726 | 0.5419 |
| 0.7704 | 6.0 | 83490 | 0.6832 | 0.6901 | 0.4742 | 0.544 |
| 0.7485 | 7.0 | 97405 | 0.6771 | 0.6926 | 0.4772 | 0.5469 |
| 0.7528 | 8.0 | 111320 | 0.6722 | 0.6934 | 0.4782 | 0.5478 |
| 0.7535 | 9.0 | 125235 | 0.6696 | 0.6944 | 0.4782 | 0.5481 |
| 0.7444 | 10.0 | 139150 | 0.6688 | 0.694 | 0.4781 | 0.5479 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v9 | DrishtiSharma | 2022-03-29T00:52:52Z | 5 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | audio-classification | 2022-03-29T00:13:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd-v9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-sentiment-mesd-v9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3500
- Accuracy: 0.9154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7825 | 0.1846 |
| 1.9553 | 1.86 | 6 | 1.7212 | 0.4308 |
| 1.9553 | 2.86 | 9 | 1.6164 | 0.3769 |
| 2.002 | 3.86 | 12 | 1.4904 | 0.3769 |
| 1.6191 | 4.86 | 15 | 1.4426 | 0.4385 |
| 1.6191 | 5.86 | 18 | 1.3516 | 0.5231 |
| 1.6209 | 6.86 | 21 | 1.2176 | 0.5538 |
| 1.6209 | 7.86 | 24 | 1.1683 | 0.5692 |
| 1.371 | 8.86 | 27 | 1.0885 | 0.5923 |
| 1.1568 | 9.86 | 30 | 1.0152 | 0.6385 |
| 1.1568 | 10.86 | 33 | 0.9289 | 0.6385 |
| 1.1023 | 11.86 | 36 | 0.9141 | 0.6308 |
| 1.1023 | 12.86 | 39 | 0.8526 | 0.6462 |
| 0.9448 | 13.86 | 42 | 0.8420 | 0.6769 |
| 0.7972 | 14.86 | 45 | 0.7976 | 0.6692 |
| 0.7972 | 15.86 | 48 | 0.8192 | 0.7308 |
| 0.7793 | 16.86 | 51 | 0.7108 | 0.7615 |
| 0.7793 | 17.86 | 54 | 0.6712 | 0.7769 |
| 0.6468 | 18.86 | 57 | 0.6684 | 0.7923 |
| 0.5083 | 19.86 | 60 | 0.6922 | 0.7385 |
| 0.5083 | 20.86 | 63 | 0.6148 | 0.7923 |
| 0.4988 | 21.86 | 66 | 0.5846 | 0.7923 |
| 0.4988 | 22.86 | 69 | 0.6050 | 0.8154 |
| 0.4123 | 23.86 | 72 | 0.5506 | 0.7846 |
| 0.3511 | 24.86 | 75 | 0.6095 | 0.7846 |
| 0.3511 | 25.86 | 78 | 0.5916 | 0.8154 |
| 0.3268 | 26.86 | 81 | 0.5912 | 0.8077 |
| 0.3268 | 27.86 | 84 | 0.5142 | 0.8538 |
| 0.3036 | 28.86 | 87 | 0.5492 | 0.8077 |
| 0.3066 | 29.86 | 90 | 0.6007 | 0.8231 |
| 0.3066 | 30.86 | 93 | 0.5748 | 0.8231 |
| 0.2538 | 31.86 | 96 | 0.6027 | 0.7692 |
| 0.2538 | 32.86 | 99 | 0.6979 | 0.7462 |
| 0.2281 | 33.86 | 102 | 0.7002 | 0.7615 |
| 0.2183 | 34.86 | 105 | 0.6650 | 0.7769 |
| 0.2183 | 35.86 | 108 | 0.5192 | 0.8462 |
| 0.2202 | 36.86 | 111 | 0.5389 | 0.8308 |
| 0.2202 | 37.86 | 114 | 0.5050 | 0.8385 |
| 0.1906 | 38.86 | 117 | 0.5722 | 0.7769 |
| 0.154 | 39.86 | 120 | 0.5239 | 0.8308 |
| 0.154 | 40.86 | 123 | 0.4448 | 0.8615 |
| 0.1474 | 41.86 | 126 | 0.4623 | 0.8615 |
| 0.1474 | 42.86 | 129 | 0.4282 | 0.8615 |
| 0.1345 | 43.86 | 132 | 0.5087 | 0.8615 |
| 0.1567 | 44.86 | 135 | 0.4859 | 0.8385 |
| 0.1567 | 45.86 | 138 | 0.6603 | 0.8077 |
| 0.1731 | 46.86 | 141 | 0.5379 | 0.8385 |
| 0.1731 | 47.86 | 144 | 0.8666 | 0.7538 |
| 0.1606 | 48.86 | 147 | 0.7518 | 0.8 |
| 0.1484 | 49.86 | 150 | 0.5986 | 0.8385 |
| 0.1484 | 50.86 | 153 | 0.6368 | 0.8231 |
| 0.2256 | 51.86 | 156 | 0.4639 | 0.8692 |
| 0.2256 | 52.86 | 159 | 0.5533 | 0.8462 |
| 0.1178 | 53.86 | 162 | 0.5038 | 0.8615 |
| 0.0815 | 54.86 | 165 | 0.5052 | 0.8692 |
| 0.0815 | 55.86 | 168 | 0.4337 | 0.8846 |
| 0.0998 | 56.86 | 171 | 0.4422 | 0.8769 |
| 0.0998 | 57.86 | 174 | 0.4317 | 0.8692 |
| 0.0855 | 58.86 | 177 | 0.4025 | 0.8923 |
| 0.0962 | 59.86 | 180 | 0.4605 | 0.8769 |
| 0.0962 | 60.86 | 183 | 0.4356 | 0.8769 |
| 0.0763 | 61.86 | 186 | 0.4614 | 0.8769 |
| 0.0763 | 62.86 | 189 | 0.4382 | 0.8846 |
| 0.0902 | 63.86 | 192 | 0.4701 | 0.8692 |
| 0.0654 | 64.86 | 195 | 0.4922 | 0.8692 |
| 0.0654 | 65.86 | 198 | 0.5413 | 0.8538 |
| 0.0651 | 66.86 | 201 | 0.5759 | 0.8615 |
| 0.0651 | 67.86 | 204 | 0.4238 | 0.9 |
| 0.0822 | 68.86 | 207 | 0.3500 | 0.9154 |
| 0.0625 | 69.86 | 210 | 0.3878 | 0.8923 |
| 0.0625 | 70.86 | 213 | 0.4952 | 0.8615 |
| 0.0548 | 71.86 | 216 | 0.4544 | 0.8615 |
| 0.0548 | 72.86 | 219 | 0.5497 | 0.8769 |
| 0.054 | 73.86 | 222 | 0.4434 | 0.8846 |
| 0.0543 | 74.86 | 225 | 0.4732 | 0.8769 |
| 0.0543 | 75.86 | 228 | 0.4425 | 0.8923 |
| 0.0881 | 76.86 | 231 | 0.4788 | 0.8769 |
| 0.0881 | 77.86 | 234 | 0.5448 | 0.8769 |
| 0.061 | 78.86 | 237 | 0.4221 | 0.9077 |
| 0.0567 | 79.86 | 240 | 0.4404 | 0.8769 |
| 0.0567 | 80.86 | 243 | 0.4099 | 0.9 |
| 0.052 | 81.86 | 246 | 0.5259 | 0.8769 |
| 0.052 | 82.86 | 249 | 0.5874 | 0.8692 |
| 0.0444 | 83.86 | 252 | 0.5555 | 0.8846 |
| 0.0332 | 84.86 | 255 | 0.5156 | 0.8615 |
| 0.0332 | 85.86 | 258 | 0.4564 | 0.8615 |
| 0.0449 | 86.86 | 261 | 0.4826 | 0.8692 |
| 0.0449 | 87.86 | 264 | 0.4726 | 0.8615 |
| 0.0385 | 88.86 | 267 | 0.4206 | 0.8846 |
| 0.0356 | 89.86 | 270 | 0.4050 | 0.8769 |
| 0.0356 | 90.86 | 273 | 0.4161 | 0.8923 |
| 0.0391 | 91.86 | 276 | 0.4100 | 0.9077 |
| 0.0391 | 92.86 | 279 | 0.4047 | 0.9 |
| 0.0249 | 93.86 | 282 | 0.4044 | 0.9 |
| 0.0399 | 94.86 | 285 | 0.3968 | 0.8846 |
| 0.0399 | 95.86 | 288 | 0.3802 | 0.9 |
| 0.031 | 96.86 | 291 | 0.3689 | 0.9 |
| 0.031 | 97.86 | 294 | 0.3616 | 0.9077 |
| 0.036 | 98.86 | 297 | 0.3584 | 0.9077 |
| 0.0386 | 99.86 | 300 | 0.3574 | 0.9077 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
tbosse/bert-base-german-cased-finetuned-subj
|
tbosse
| 2022-03-28T22:50:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-28T20:51:21Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1424
- Precision: 0.6514
- Recall: 0.0186
- F1: 0.0363
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
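The card does not document usage, so the following is a minimal sketch based only on the `token-classification` tag; the example sentence is hypothetical and the tag scheme returned is not documented. Note also the very low recall reported in this card before relying on the predictions.
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="tbosse/bert-base-german-cased-finetuned-subj",
    aggregation_strategy="simple",
)
# Hypothetical German input; the label set in the output is an assumption.
print(tagger("Der Film war meiner Meinung nach großartig."))
```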
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 140 | 0.1588 | 0.6 | 0.0016 | 0.0031 | 0.9507 |
| No log | 2.0 | 280 | 0.1466 | 0.75 | 0.0039 | 0.0078 | 0.9508 |
| No log | 3.0 | 420 | 0.1424 | 0.6514 | 0.0186 | 0.0363 | 0.9511 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
frtna/ted_mt-Spanish-to-Italian
|
frtna
| 2022-03-28T22:04:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:new_dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- new_dataset
model-index:
- name: ted_mt-Spanish-to-Italian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ted_mt-Spanish-to-Italian
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-it](https://huggingface.co/Helsinki-NLP/opus-mt-es-it) on the new_dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
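As a stopgap for the empty usage section, here is a minimal sketch of Spanish-to-Italian translation through the `transformers` pipeline; it is untested against this repository, and generation settings are left at their defaults:
```python
from transformers import pipeline

translator = pipeline("translation", model="frtna/ted_mt-Spanish-to-Italian")
# Marian-based es->it checkpoint; plain Spanish text in, Italian text out.
print(translator("La vida es bella.")[0]["translation_text"])
```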
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| No log | 1.0 | 46 | 1.4873 | 29.6133 | 26.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Chikashi/t5-small-finetuned-cnndm1
|
Chikashi
| 2022-03-28T22:00:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T14:55:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.4246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6853
- Rouge1: 24.4246
- Rouge2: 11.6944
- Rougel: 20.1717
- Rougelsum: 23.0424
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
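A minimal, hedged usage sketch (not from the card): summarizing an article with this checkpoint via the summarization pipeline. Whether this fine-tune keeps T5's usual `"summarize: "` task prefix is an assumption; the pipeline applies the prefix from the model config when one is present.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-cnndm1")
# Placeholder text standing in for a real CNN/DailyMail-style article.
article = "(CNN) Placeholder article text standing in for a real news story ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```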
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.912 | 0.14 | 5000 | 1.7167 | 24.4232 | 11.7049 | 20.1758 | 23.0345 | 18.9997 |
| 1.8784 | 0.28 | 10000 | 1.7018 | 24.4009 | 11.6918 | 20.1561 | 23.0073 | 18.9997 |
| 1.8628 | 0.42 | 15000 | 1.6934 | 24.385 | 11.683 | 20.1285 | 22.9823 | 18.9997 |
| 1.8594 | 0.56 | 20000 | 1.6902 | 24.4407 | 11.6835 | 20.1734 | 23.0369 | 18.9996 |
| 1.8537 | 0.7 | 25000 | 1.6864 | 24.3635 | 11.658 | 20.1318 | 22.9782 | 18.9993 |
| 1.8505 | 0.84 | 30000 | 1.6856 | 24.4267 | 11.6991 | 20.1629 | 23.0361 | 18.9994 |
| 1.8505 | 0.98 | 35000 | 1.6853 | 24.4246 | 11.6944 | 20.1717 | 23.0424 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
jorge-henao/spanish-t5-small-disco-poetry
|
jorge-henao
| 2022-03-28T21:26:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T18:15:25Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: spanish-t5-small-disco-poetry
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-t5-small-disco-poetry
This model is a fine-tuned version of [flax-community/spanish-t5-small](https://huggingface.co/flax-community/spanish-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1417 | 1.0 | 1284 | 0.0577 |
| 0.0902 | 2.0 | 2568 | 0.0516 |
| 0.0803 | 3.0 | 3852 | 0.0494 |
| 0.0733 | 4.0 | 5136 | 0.0488 |
| 0.0683 | 5.0 | 6420 | 0.0480 |
| 0.067 | 6.0 | 7704 | 0.0477 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
DrishtiSharma/wav2vec2-base-finetuned-sentiment-mesd-v2
|
DrishtiSharma
| 2022-03-28T19:04:20Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-28T17:20:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-sentiment-mesd-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-sentiment-mesd-v2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7213
- Accuracy: 0.3923
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.25e-05
- train_batch_size: 64
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.86 | 3 | 1.7961 | 0.1462 |
| 1.9685 | 1.86 | 6 | 1.7932 | 0.1692 |
| 1.9685 | 2.86 | 9 | 1.7891 | 0.2 |
| 2.1386 | 3.86 | 12 | 1.7820 | 0.2923 |
| 1.9492 | 4.86 | 15 | 1.7750 | 0.2923 |
| 1.9492 | 5.86 | 18 | 1.7684 | 0.2846 |
| 2.1143 | 6.86 | 21 | 1.7624 | 0.3231 |
| 2.1143 | 7.86 | 24 | 1.7561 | 0.3308 |
| 2.0945 | 8.86 | 27 | 1.7500 | 0.3462 |
| 1.9121 | 9.86 | 30 | 1.7443 | 0.3385 |
| 1.9121 | 10.86 | 33 | 1.7386 | 0.3231 |
| 2.0682 | 11.86 | 36 | 1.7328 | 0.3231 |
| 2.0682 | 12.86 | 39 | 1.7272 | 0.3769 |
| 2.0527 | 13.86 | 42 | 1.7213 | 0.3923 |
| 1.8705 | 14.86 | 45 | 1.7154 | 0.3846 |
| 1.8705 | 15.86 | 48 | 1.7112 | 0.3846 |
| 2.0263 | 16.86 | 51 | 1.7082 | 0.3769 |
| 2.0263 | 17.86 | 54 | 1.7044 | 0.3846 |
| 2.0136 | 18.86 | 57 | 1.7021 | 0.3846 |
| 1.8429 | 19.86 | 60 | 1.7013 | 0.3846 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
aapot/wav2vec2-large-xlsr-53-finnish
|
aapot
| 2022-03-28T17:56:36Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fi
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Aapo Tanskanen
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 32.378771
---
# NOTE: this is an old model and should not be used anymore! There are much better, newer models available at our organization hub: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) and [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10 Finnish](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) and [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("aapot/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'\...\…\–\é]'
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 32.378771 %
## Training
The Common Voice `train`, `validation` and `other` splits were used for training, as well as the `CSS10 Finnish` and `Finnish parliament session 2` datasets.
The script used for training can be found in [this Google Colab notebook](https://colab.research.google.com/drive/1vnEGC9BnNRmVyIHj-0UsVulh_cUYSGWA?usp=sharing).
|
aapot/wav2vec2-xlsr-1b-finnish-v2
|
aapot
| 2022-03-28T17:49:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-v2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 9.73
- name: Test CER
type: cer
value: 1.65
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released on [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
**Note**: there is a version of this model with a KenLM language model used in the decoding phase that produces better transcriptions: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-v2/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
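For a quick start without opening the notebook, here is a minimal sketch (not taken from the notebook; the file name is a placeholder) using the `transformers` pipeline. `chunk_length_s` enables the chunked inference mentioned under "Limitations and bias" below:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aapot/wav2vec2-xlsr-1b-finnish-v2",
)
# 16 kHz mono input assumed; chunking keeps memory bounded on long recordings.
print(asr("finnish_sample.wav", chunk_length_s=20))
```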
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best on fairly short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so the model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
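The preprocessing code is not published in this card; the following is a sketch of the kind of length filter described above, using the `datasets` library (the authors' actual implementation may differ):
```python
from datasets import Audio, load_dataset

dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

def shorter_than_20s(example):
    # Duration in seconds = number of samples / sampling rate.
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"] <= 20.0

dataset = dataset.filter(shorter_than_20s)
```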
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.7778 | 0.17 | 500 | 0.2851 | 0.3572 |
| 0.5506 | 0.34 | 1000 | 0.1595 | 0.2130 |
| 0.6569 | 0.5 | 1500 | 0.1458 | 0.2046 |
| 0.5997 | 0.67 | 2000 | 0.1374 | 0.1975 |
| 0.542 | 0.84 | 2500 | 0.1390 | 0.1956 |
| 0.4815 | 1.01 | 3000 | 0.1266 | 0.1813 |
| 0.6982 | 1.17 | 3500 | 0.1441 | 0.1965 |
| 0.4522 | 1.34 | 4000 | 0.1232 | 0.1822 |
| 0.4655 | 1.51 | 4500 | 0.1209 | 0.1702 |
| 0.4069 | 1.68 | 5000 | 0.1149 | 0.1688 |
| 0.4226 | 1.84 | 5500 | 0.1121 | 0.1560 |
| 0.3993 | 2.01 | 6000 | 0.1091 | 0.1557 |
| 0.406 | 2.18 | 6500 | 0.1115 | 0.1553 |
| 0.4098 | 2.35 | 7000 | 0.1144 | 0.1560 |
| 0.3995 | 2.51 | 7500 | 0.1028 | 0.1476 |
| 0.4101 | 2.68 | 8000 | 0.1129 | 0.1511 |
| 0.3636 | 2.85 | 8500 | 0.1025 | 0.1517 |
| 0.3534 | 3.02 | 9000 | 0.1068 | 0.1480 |
| 0.3836 | 3.18 | 9500 | 0.1072 | 0.1459 |
| 0.3531 | 3.35 | 10000 | 0.0928 | 0.1367 |
| 0.3649 | 3.52 | 10500 | 0.1042 | 0.1426 |
| 0.3645 | 3.69 | 11000 | 0.0979 | 0.1433 |
| 0.3685 | 3.85 | 11500 | 0.0947 | 0.1346 |
| 0.3325 | 4.02 | 12000 | 0.0991 | 0.1352 |
| 0.3497 | 4.19 | 12500 | 0.0919 | 0.1358 |
| 0.3303 | 4.36 | 13000 | 0.0888 | 0.1272 |
| 0.3323 | 4.52 | 13500 | 0.0888 | 0.1277 |
| 0.3452 | 4.69 | 14000 | 0.0894 | 0.1279 |
| 0.337 | 4.86 | 14500 | 0.0917 | 0.1289 |
| 0.3114 | 5.03 | 15000 | 0.0942 | 0.1313 |
| 0.3099 | 5.19 | 15500 | 0.0902 | 0.1239 |
| 0.3079 | 5.36 | 16000 | 0.0871 | 0.1256 |
| 0.3293 | 5.53 | 16500 | 0.0861 | 0.1263 |
| 0.3123 | 5.7 | 17000 | 0.0876 | 0.1203 |
| 0.3093 | 5.86 | 17500 | 0.0848 | 0.1226 |
| 0.2903 | 6.03 | 18000 | 0.0914 | 0.1221 |
| 0.297 | 6.2 | 18500 | 0.0841 | 0.1185 |
| 0.2797 | 6.37 | 19000 | 0.0858 | 0.1165 |
| 0.2878 | 6.53 | 19500 | 0.0874 | 0.1161 |
| 0.2974 | 6.7 | 20000 | 0.0835 | 0.1173 |
| 0.3051 | 6.87 | 20500 | 0.0835 | 0.1178 |
| 0.2941 | 7.04 | 21000 | 0.0852 | 0.1155 |
| 0.258 | 7.21 | 21500 | 0.0832 | 0.1132 |
| 0.2778 | 7.37 | 22000 | 0.0829 | 0.1110 |
| 0.2751 | 7.54 | 22500 | 0.0822 | 0.1069 |
| 0.2887 | 7.71 | 23000 | 0.0819 | 0.1103 |
| 0.2509 | 7.88 | 23500 | 0.0787 | 0.1055 |
| 0.2501 | 8.04 | 24000 | 0.0807 | 0.1076 |
| 0.2399 | 8.21 | 24500 | 0.0784 | 0.1052 |
| 0.2539 | 8.38 | 25000 | 0.0772 | 0.1075 |
| 0.248 | 8.55 | 25500 | 0.0772 | 0.1055 |
| 0.2689 | 8.71 | 26000 | 0.0763 | 0.1027 |
| 0.2855 | 8.88 | 26500 | 0.0756 | 0.1035 |
| 0.2421 | 9.05 | 27000 | 0.0771 | 0.0998 |
| 0.2497 | 9.22 | 27500 | 0.0756 | 0.0971 |
| 0.2367 | 9.38 | 28000 | 0.0741 | 0.0974 |
| 0.2473 | 9.55 | 28500 | 0.0739 | 0.0982 |
| 0.2396 | 9.72 | 29000 | 0.0756 | 0.0991 |
| 0.2602 | 9.89 | 29500 | 0.0737 | 0.0975 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-v2 --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the first row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
aapot/wav2vec2-xlsr-1b-finnish-lm
|
aapot
| 2022-03-28T17:31:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-1b-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 5.65
- name: Test CER
type: cer
value: 1.2
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released on [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm) model; it has simply been copied/moved to the `Finnish-NLP` Hugging Face organization.
**Note**: there is a better V2 version of this model, which has been fine-tuned longer with 16 hours of additional data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (1 billion parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
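For a quick start without the notebook, a minimal sketch (not taken from the notebook) of LM-boosted decoding with `Wav2Vec2ProcessorWithLM` follows; it assumes the `pyctcdecode` and `kenlm` packages are installed, and the file name is a placeholder:
```python
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "aapot/wav2vec2-xlsr-1b-finnish-lm"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

# Load and resample to the 16 kHz rate the acoustic model expects.
speech, sr = torchaudio.load("finnish_sample.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# batch_decode beam-searches the logits against the bundled KenLM model.
print(processor.batch_decode(logits.numpy()).text)
```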
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best on fairly short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so the model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained on text data from the audio transcriptions. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects. It may be beneficial to train your own KenLM language model for your domain and use it in decoding.
## Training data
This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. The training data for the 5-gram KenLM was the text transcriptions of the audio training data.
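For reference, the core KenLM commands from that tutorial look roughly like this (file names are placeholders, not the actual paths used):
```bash
# Train a 5-gram language model on plain-text transcriptions, then binarize it.
lmplz -o 5 < train_transcriptions.txt > 5gram.arpa
build_binary 5gram.arpa 5gram.bin
```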
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.968 | 0.18 | 500 | 0.4870 | 0.4720 |
| 0.6557 | 0.36 | 1000 | 0.2450 | 0.2931 |
| 0.647 | 0.54 | 1500 | 0.1818 | 0.2255 |
| 0.5297 | 0.72 | 2000 | 0.1698 | 0.2354 |
| 0.5802 | 0.9 | 2500 | 0.1581 | 0.2355 |
| 0.6351 | 1.07 | 3000 | 0.1689 | 0.2336 |
| 0.4626 | 1.25 | 3500 | 0.1719 | 0.3099 |
| 0.4526 | 1.43 | 4000 | 0.1434 | 0.2069 |
| 0.4692 | 1.61 | 4500 | 0.1645 | 0.2192 |
| 0.4584 | 1.79 | 5000 | 0.1483 | 0.1987 |
| 0.4234 | 1.97 | 5500 | 0.1499 | 0.2178 |
| 0.4243 | 2.15 | 6000 | 0.1345 | 0.2070 |
| 0.4108 | 2.33 | 6500 | 0.1383 | 0.1850 |
| 0.4048 | 2.51 | 7000 | 0.1338 | 0.1811 |
| 0.4085 | 2.69 | 7500 | 0.1290 | 0.1780 |
| 0.4026 | 2.87 | 8000 | 0.1239 | 0.1650 |
| 0.4033 | 3.04 | 8500 | 0.1346 | 0.1657 |
| 0.3986 | 3.22 | 9000 | 0.1310 | 0.1850 |
| 0.3867 | 3.4 | 9500 | 0.1273 | 0.1741 |
| 0.3658 | 3.58 | 10000 | 0.1219 | 0.1672 |
| 0.382 | 3.76 | 10500 | 0.1306 | 0.1698 |
| 0.3847 | 3.94 | 11000 | 0.1230 | 0.1577 |
| 0.3691 | 4.12 | 11500 | 0.1310 | 0.1615 |
| 0.3593 | 4.3 | 12000 | 0.1296 | 0.1622 |
| 0.3619 | 4.48 | 12500 | 0.1285 | 0.1601 |
| 0.3361 | 4.66 | 13000 | 0.1261 | 0.1569 |
| 0.3603 | 4.84 | 13500 | 0.1235 | 0.1533 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the second row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
aapot/wav2vec2-xlsr-300m-finnish-lm
|
aapot
| 2022-03-28T17:22:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fi",
"finnish",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_7_0",
"arxiv:2111.09296",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
language: fi
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- fi
- finnish
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xlsr-300m-finnish-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: fi
metrics:
- name: Test WER
type: wer
value: 8.16
- name: Test CER
type: cer
value: 1.97
---
# Wav2Vec2 XLS-R for Finnish ASR
This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for Finnish ASR. The model has been fine-tuned with 275.6 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
[this paper](https://arxiv.org/abs/2111.09296) and first released on [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
**Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm) model; it has simply been copied/moved to the `Finnish-NLP` Hugging Face organization.
## Model description
Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages.
You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
This model is a fine-tuned version of the pretrained model (300 million parameter variant) for Finnish ASR.
## Intended uses & limitations
You can use this model for the Finnish ASR (speech-to-text) task.
### How to use
Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-300m-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model.
### Limitations and bias
This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best on fairly short audio clips of similar length. However, you can also try it on much longer audio and see how it performs. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
The vast majority of the fine-tuning data came from the Finnish Parliament dataset, so the model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in the datasets is dominated by adult male speakers, so the model may not work as well for the speech of children and women, for example.
The Finnish KenLM language model used in the decoding phase has been trained on text data from the audio transcriptions and from a subset of Finnish Wikipedia. Thus, the decoder's language model may not generalize to very different language varieties, for example everyday spoken language with dialects (especially since Wikipedia contains mostly formal Finnish). It may be beneficial to train your own KenLM language model for your domain and use it in decoding.
## Training data
This model was fine-tuned with 275.6 hours of Finnish transcribed speech data from the following datasets:
| Dataset | Hours | % of total hours |
|:------------------------------------------------------------------------------------------------------------------------------ |:--------:|:----------------:|
| [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.52 % |
| [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
| [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 21.97 h | 7.97 % |
| [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.74 % |
| [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 82.73 % |
| [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 1.95 % |
Datasets were filtered to include only audio samples with a maximum length of 20 seconds.
## Training procedure
This model was trained during [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. The training data for the 5-gram KenLM was the text transcriptions of the audio training data plus 100k random samples of the cleaned [Finnish Wikipedia](https://huggingface.co/datasets/wikipedia) (August 2021) dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: [8-bit Adam](https://github.com/facebookresearch/bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
The pretrained `facebook/wav2vec2-xls-r-300m` model was initialized with the following hyperparameters:
- attention_dropout: 0.094
- hidden_dropout: 0.047
- feat_proj_dropout: 0.04
- mask_time_prob: 0.082
- layerdrop: 0.041
- activation_dropout: 0.055
- ctc_loss_reduction: "mean"
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.973 | 0.17 | 500 | 0.5750 | 0.6844 |
| 0.713 | 0.34 | 1000 | 0.3356 | 0.4518 |
| 0.6563 | 0.5 | 1500 | 0.3007 | 0.4039 |
| 0.642 | 0.67 | 2000 | 0.2619 | 0.3674 |
| 0.6203 | 0.84 | 2500 | 0.2488 | 0.3558 |
| 0.6016 | 1.01 | 3000 | 0.2795 | 0.3835 |
| 0.5423 | 1.17 | 3500 | 0.2652 | 0.3310 |
| 0.5639 | 1.34 | 4000 | 0.2479 | 0.3462 |
| 0.586 | 1.51 | 4500 | 0.2409 | 0.3295 |
| 0.5169 | 1.68 | 5000 | 0.2728 | 0.3352 |
| 0.5176 | 1.84 | 5500 | 0.2254 | 0.3149 |
| 0.4983 | 2.01 | 6000 | 0.2169 | 0.3009 |
| 0.4982 | 2.18 | 6500 | 0.2215 | 0.3079 |
| 0.4898 | 2.35 | 7000 | 0.2174 | 0.3023 |
| 0.4922 | 2.51 | 7500 | 0.2217 | 0.3081 |
| 0.5025 | 2.68 | 8000 | 0.2002 | 0.2710 |
| 0.4745 | 2.85 | 8500 | 0.1935 | 0.2783 |
| 0.4377 | 3.02 | 9000 | 0.1859 | 0.2742 |
| 0.4511 | 3.18 | 9500 | 0.2038 | 0.2786 |
| 0.4411 | 3.35 | 10000 | 0.1863 | 0.2651 |
| 0.4501 | 3.52 | 10500 | 0.1948 | 0.2605 |
| 0.4557 | 3.69 | 11000 | 0.1872 | 0.2695 |
| 0.4493 | 3.85 | 11500 | 0.1888 | 0.2632 |
| 0.4047 | 4.02 | 12000 | 0.1818 | 0.2559 |
| 0.4319 | 4.19 | 12500 | 0.1896 | 0.2648 |
| 0.4162 | 4.36 | 13000 | 0.1953 | 0.2595 |
| 0.4046 | 4.52 | 13500 | 0.1864 | 0.2606 |
| 0.4195 | 4.69 | 14000 | 0.1843 | 0.2467 |
| 0.4146 | 4.86 | 14500 | 0.1686 | 0.2450 |
| 0.378 | 5.03 | 15000 | 0.1731 | 0.2401 |
| 0.3792 | 5.19 | 15500 | 0.1676 | 0.2325 |
| 0.3855 | 5.36 | 16000 | 0.1740 | 0.2326 |
| 0.4029 | 5.53 | 16500 | 0.1674 | 0.2345 |
| 0.386 | 5.7 | 17000 | 0.1735 | 0.2280 |
| 0.3811 | 5.86 | 17500 | 0.1692 | 0.2258 |
| 0.3607 | 6.03 | 18000 | 0.1797 | 0.2279 |
| 0.3604 | 6.2 | 18500 | 0.1651 | 0.2206 |
| 0.3362 | 6.37 | 19000 | 0.1627 | 0.2199 |
| 0.3611 | 6.53 | 19500 | 0.1652 | 0.2172 |
| 0.3671 | 6.7 | 20000 | 0.1564 | 0.2140 |
| 0.3769 | 6.87 | 20500 | 0.1525 | 0.2101 |
| 0.3539 | 7.04 | 21000 | 0.1639 | 0.2096 |
| 0.3225 | 7.21 | 21500 | 0.1611 | 0.2087 |
| 0.3323 | 7.37 | 22000 | 0.1633 | 0.2008 |
| 0.3327 | 7.54 | 22500 | 0.1692 | 0.1975 |
| 0.3456 | 7.71 | 23000 | 0.1555 | 0.1991 |
| 0.3058 | 7.88 | 23500 | 0.1590 | 0.1959 |
| 0.3034 | 8.04 | 24000 | 0.1531 | 0.1973 |
| 0.2925 | 8.21 | 24500 | 0.1583 | 0.1978 |
| 0.2967 | 8.38 | 25000 | 0.1546 | 0.1906 |
| 0.2974 | 8.55 | 25500 | 0.1540 | 0.1869 |
| 0.3131 | 8.71 | 26000 | 0.1534 | 0.1850 |
| 0.3306 | 8.88 | 26500 | 0.1482 | 0.1844 |
| 0.2842 | 9.05 | 27000 | 0.1490 | 0.1854 |
| 0.2879 | 9.22 | 27500 | 0.1463 | 0.1799 |
| 0.27 | 9.38 | 28000 | 0.1454 | 0.1798 |
| 0.2874 | 9.55 | 28500 | 0.1504 | 0.1787 |
| 0.2757 | 9.72 | 29000 | 0.1512 | 0.1784 |
| 0.3017 | 9.89 | 29500 | 0.1484 | 0.1800 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
## Evaluation results
Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
To evaluate this model, run the `eval.py` script in this repository:
```bash
python3 eval.py --model_id aapot/wav2vec2-xlsr-300m-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
```
This model (the third row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
| | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
|-----------------------------------------|---------------|------------------|---------------|------------------|
|aapot/wav2vec2-xlsr-1b-finnish-lm-v2 |**4.09** |**9.73** |**0.88** |**1.65** |
|aapot/wav2vec2-xlsr-1b-finnish-lm |5.65 |13.11 |1.20 |2.23 |
|aapot/wav2vec2-xlsr-300m-finnish-lm |8.16 |17.92 |1.97 |3.36 |
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
Chikashi/t5-small-finetuned-cnndm
|
Chikashi
| 2022-03-28T14:04:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T09:07:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnndm
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 24.417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnndm
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6854
- Rouge1: 24.417
- Rouge2: 11.6924
- Rougel: 20.1756
- Rougelsum: 23.0414
- Gen Len: 18.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:-------:|:---------:|:-------:|
| 1.8522 | 1.0 | 35890 | 1.6854 | 24.417 | 11.6924 | 20.1756 | 23.0414 | 18.9996 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dennisowusuk/wav2vec2-large-xls-r-300m-turkish-colab
|
dennisowusuk
| 2022-03-28T13:28:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-28T05:29:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3863
- Wer: 0.3095
## Model description
More information needed
## Intended uses & limitations
More information needed
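As this section is empty, a minimal usage sketch inferred from the model's tags follows; the file name is a placeholder and 16 kHz mono input is assumed:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dennisowusuk/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("turkish_sample.wav"))
```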
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.8284 | 3.67 | 400 | 0.6782 | 0.6739 |
| 0.4174 | 7.34 | 800 | 0.4524 | 0.4811 |
| 0.2015 | 11.01 | 1200 | 0.4736 | 0.4311 |
| 0.1371 | 14.68 | 1600 | 0.4254 | 0.3929 |
| 0.0997 | 18.35 | 2000 | 0.4254 | 0.3636 |
| 0.082 | 22.02 | 2400 | 0.3807 | 0.3474 |
| 0.0665 | 25.69 | 2800 | 0.3987 | 0.3236 |
| 0.0523 | 29.36 | 3200 | 0.3863 | 0.3095 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/abeshinzo
|
huggingtweets
| 2022-03-28T12:19:48Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-28T12:19:01Z |
---
language: en
thumbnail: http://www.huggingtweets.com/abeshinzo/1648469983562/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1765776666/s-abetwitter1_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">安倍晋三</div>
<div style="text-align: center; font-size: 14px;">@abeshinzo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 安倍晋三.
| Data | 安倍晋三 |
| --- | --- |
| Tweets downloaded | 2365 |
| Retweets | 77 |
| Short tweets | 1629 |
| Tweets kept | 659 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/37uwbwzs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abeshinzo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ib1nsfa1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ib1nsfa1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abeshinzo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
VincentC12/rh_classification_kara
|
VincentC12
| 2022-03-28T11:53:41Z | 9 | 0 |
pytorch
|
[
"pytorch",
"distilbert",
"sentiment-analysis",
"en",
"region:us"
] | null | 2022-03-23T16:19:02Z |
---
language:
- en
library_name: pytorch
metrics:
- satisfaction
- culture organisationnelle
- leadership
- conditions de travail
tags:
- sentiment-analysis
widget:
- text: "My work is recognized by my superiors and I would even say that I feel like I have more recognition since we are on telework."
example_title: "Exemple leadership"
- text: "For Working conditions and wages in particular."
example_title: "Exemple conditions de travail"
- text: "A climate of overperformance is in place in the company."
example_title: "Exemple culture organisationnelle"
- text: "With regard to telework, I look forward to setting up the hybrid week, so 2 3 days at home and at the office."
example_title: "Exemple satisfaction"
---
This model was developed for KARA.
This model is:
- A thematic classification tool for HR comments
- Trained to be used in ENGLISH (comments must be translated)
- Specialized for comments between 10 and 512 characters
This model is not:
- Usable for detecting hate speech or a suicide note
Labels:
- Label_0 = Satisfaction
- Label_1 = Organizational Culture
- Label_2 = Leadership
- Label_3 = Working Conditions
Version 0.0.1
Performance on the HRM dataset: 84.3% accuracy
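A minimal usage sketch, assuming the checkpoint can be loaded with 🤗 Transformers' DistilBERT sequence-classification classes; the card only declares a plain PyTorch export, so this is not guaranteed:
```python
# Sketch only: assumes the checkpoint is loadable with Transformers' sequence-classification classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("VincentC12/rh_classification_kara")
model = AutoModelForSequenceClassification.from_pretrained("VincentC12/rh_classification_kara")

labels = ["Satisfaction", "Organizational Culture", "Leadership", "Working Conditions"]  # Label_0..Label_3
text = "My work is recognized by my superiors."  # comments must already be in English
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])
```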
|
VincentC12/sentiment_analysis_kara
|
VincentC12
| 2022-03-28T11:52:03Z | 21 | 0 |
pytorch
|
[
"pytorch",
"distilbert",
"sentiment-analysis",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
library_name: pytorch
metrics:
- negative
- positive
tags:
- sentiment-analysis
widget:
- text: "Thank you for listening to the recommendations of the telephone team for teleworking. we have a strong expertise in this field and accurate listening to Our management!!!!"
example_title: "Exemple positif"
- text: "working conditions and wages are less than average more part of the time it is not a hierarchical system Our opinion counts"
example_title: "Exemple négatif"
---
This model was developed for KARA.
This model is:
- A sentiment analysis tool for HR survey comments
- Trained to be used in ENGLISH (comments must be translated)
- Specialized for comments between 10 and 512 characters
This model is not:
- Usable for detecting hate speech or a suicide note
Labels:
- Label_0 = Negative
- Label_1 = Positive
Version 1.1.0
Performance on the HRM dataset: 91.5% accuracy
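A minimal usage sketch, assuming the checkpoint is compatible with the Transformers `text-classification` pipeline (not guaranteed, since only a plain PyTorch export is declared):
```python
# Sketch only: assumes Transformers compatibility; Label_0 = Negative, Label_1 = Positive.
from transformers import pipeline

classifier = pipeline("text-classification", model="VincentC12/sentiment_analysis_kara")
print(classifier("working conditions and wages are less than average"))
```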
|
mrm8488/t5-base-iterater
|
mrm8488
| 2022-03-28T11:00:41Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"IteraTeR",
"en",
"dataset:wanyu/IteraTeR_full_sent",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-27T18:48:43Z |
---
license: apache-2.0
language:
- en
datasets:
- wanyu/IteraTeR_full_sent
tags:
- generated_from_trainer
- IteraTeR
widget:
- text: "<clarity> Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay for the packet has encountered."
model-index:
- name: t5-base-iterater
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5 (base) fine-tuned on IteraTeR
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an [IteraTeR](https://huggingface.co/datasets/wanyu/IteraTeR_full_sent) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3286 | 0.09 | 2000 | 0.3010 |
| 0.3194 | 0.18 | 4000 | 0.2872 |
| 0.3208 | 0.27 | 6000 | 0.2792 |
| 0.3091 | 0.36 | 8000 | 0.2731 |
| 0.3164 | 0.45 | 10000 | 0.2678 |
| 0.2941 | 0.54 | 12000 | 0.2682 |
| 0.2981 | 0.63 | 14000 | 0.2696 |
| 0.2975 | 0.72 | 16000 | 0.2643 |
| 0.3109 | 0.81 | 18000 | 0.2624 |
| 0.2965 | 0.9 | 20000 | 0.2648 |
| 0.3053 | 0.99 | 22000 | 0.2627 |
| 0.2779 | 1.08 | 24000 | 0.2632 |
| 0.2692 | 1.17 | 26000 | 0.2608 |
| 0.2755 | 1.26 | 28000 | 0.2600 |
| 0.2771 | 1.35 | 30000 | 0.2584 |
| 0.2774 | 1.44 | 32000 | 0.2609 |
| 0.2976 | 1.53 | 34000 | 0.2593 |
| 0.2646 | 1.62 | 36000 | 0.2616 |
| 0.2705 | 1.71 | 38000 | 0.2574 |
| 0.2714 | 1.8 | 40000 | 0.2577 |
| 0.2857 | 1.9 | 42000 | 0.2576 |
| 0.2832 | 1.99 | 44000 | 0.2580 |
### How to use
```py
from transformers import T5ForConditionalGeneration, T5TokenizerFast
MODEL_CKPT = 'mrm8488/t5-base-iterater'
tokenizer = T5TokenizerFast.from_pretrained(MODEL_CKPT)
model = T5ForConditionalGeneration.from_pretrained(MODEL_CKPT)
def predict(intent, text):
    input_text = f"<{intent}> {text}"
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'], max_length=128, num_beams=8)
    return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay for the packet has encountered."
intent = "clarity"
predict(intent, text)
# Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay the packet has encountered.
```
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mideind/tokenizer-mbart-25-enis
|
mideind
| 2022-03-28T10:11:08Z | 0 | 0 | null |
[
"translation",
"is",
"en",
"license:mit",
"region:us"
] |
translation
| 2022-03-28T10:01:18Z |
---
language:
- is
- en
tags:
- translation
license: mit
---
# mBART 25 SentencePiece tokenizer
This tokenizer is used for Mideind's mBART translation models.
It is based on Facebook's mBART-25 SentencePiece model.
A language token from the original model has been replaced with "is_IS".
Usage example (for debugging):
```python
import sys
from transformers.models import mbart
MODEL_DIR = sys.argv[1]
tokenizer: mbart.MBartTokenizerFast = mbart.MBartTokenizerFast.from_pretrained(
MODEL_DIR, src_lang="en_XX"
)
is_lang_idx = tokenizer.convert_tokens_to_ids("is_IS")
model = mbart.MBartForConditionalGeneration.from_pretrained(MODEL_DIR)
test_sentence = "This is a test."
input_ids = tokenizer(test_sentence, return_tensors="pt")
print(input_ids)
outputs = model.generate(
**input_ids, decoder_start_token_id=is_lang_idx
)
print(outputs)
print(tokenizer.batch_decode(outputs))
```
|
mart/ivan
|
mart
| 2022-03-28T08:40:12Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-28T08:40:12Z |
---
license: artistic-2.0
---
|
jkhan447/sentiment-model-sample-offline-goemotion
|
jkhan447
| 2022-03-28T06:50:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-28T06:33:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-offline-goemotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-offline-goemotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0183
- Accuracy: 0.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
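A minimal inference sketch, assuming the standard `text-classification` pipeline; the emotion label names are not documented here, so predictions surface as generic `LABEL_<id>` entries:
```python
# Sketch only: label names are not documented in the card, so outputs appear as LABEL_<id> scores.
from transformers import pipeline

classifier = pipeline("text-classification", model="jkhan447/sentiment-model-sample-offline-goemotion")
print(classifier("I am so happy with how this turned out!"))
```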
|
aps/flava_full_pretrained_encoders_torchmm
|
aps
| 2022-03-28T06:03:42Z | 0 | 0 | null |
[
"pytorch",
"license:bsd-3-clause",
"region:us"
] | null | 2022-03-28T05:35:04Z |
---
license: bsd-3-clause
---
|
huggingtweets/freudwarrior123
|
huggingtweets
| 2022-03-28T04:26:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-28T04:23:45Z |
---
language: en
thumbnail: http://www.huggingtweets.com/freudwarrior123/1648441457881/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1443547125770559488/QNDa_bi1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">freudwarrior123</div>
<div style="text-align: center; font-size: 14px;">@freudwarrior123</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from freudwarrior123.
| Data | freudwarrior123 |
| --- | --- |
| Tweets downloaded | 859 |
| Retweets | 274 |
| Short tweets | 34 |
| Tweets kept | 551 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3798mw2s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @freudwarrior123's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2n7ltssk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2n7ltssk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/freudwarrior123')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aihijo/transformers4ime-pinyingpt-concat
|
aihijo
| 2022-03-28T03:57:51Z | 53 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2203.00249",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-26T09:54:26Z |
---
license: cc-by-nc-sa-4.0
---

# Transformers4IME
Transformers4IME is a repo for exploring and adapting transformer-based models to IMEs (input method editors).
## PinyinGPT
PinyinGPT is a model from [Exploring and Adapting Chinese GPT to Pinyin Input Method](https://arxiv.org/abs/2203.00249),
which appeared at ACL 2022.
```bibtex
@article{tan2022exploring,
title={Exploring and Adapting Chinese GPT to Pinyin Input Method},
author={Tan, Minghuan and Dai, Yong and Tang, Duyu and Feng, Zhangyin and Huang, Guoping and Jiang, Jing and Li, Jiwei and Shi, Shuming},
journal={arXiv preprint arXiv:2203.00249},
year={2022}
}
```
The code can be found at
* [Gitee](https://gitee.com/visualjoyce/Transformers4IME)
* [Github](https://github.com/visualjoyce/Transformers4IME)
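A minimal loading sketch, assuming the checkpoint follows the standard GPT-2 causal-LM interface; the pinyin-conditioned input format is described in the repositories above and is not reproduced here:
```python
# Sketch only: plain causal-LM loading; see the Gitee/GitHub repos for the pinyin-conditioned input format.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("aihijo/transformers4ime-pinyingpt-concat")
model = AutoModelForCausalLM.from_pretrained("aihijo/transformers4ime-pinyingpt-concat")
```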
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119799
|
YXHugging
| 2022-03-28T01:30:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T00:52:19Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1583.7188188958198
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119799
- CO2 Emissions (in grams): 1583.7188188958198
## Validation Metrics
- Loss: 0.9590993523597717
- Accuracy: 0.5827541666666667
- Macro F1: 0.5806748283026683
- Micro F1: 0.5827541666666667
- Weighted F1: 0.5806748283026683
- Macro Precision: 0.5834325027348383
- Micro Precision: 0.5827541666666667
- Weighted Precision: 0.5834325027348383
- Macro Recall: 0.5827541666666667
- Micro Recall: 0.5827541666666667
- Weighted Recall: 0.5827541666666667
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119799
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119799", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
BigSalmon/InformalToFormalLincoln31
|
BigSalmon
| 2022-03-28T00:48:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T23:08:12Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln31")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln31")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
|
minimaxir/imgbeddings
|
minimaxir
| 2022-03-28T00:36:28Z | 0 | 3 |
transformers
|
[
"transformers",
"onnx",
"ai",
"images",
"image-processing",
"embeddings",
"clip",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-27T17:23:51Z |
---
language:
- en
tags:
- ai
- transformers
- onnx
- images
- image-processing
- embeddings
- clip
license: mit
---
# imgbeddings
The HF repo where the models for [imgbeddings](https://github.com/minimaxir/imgbeddings) are loaded.
The ONNX files were generated using [this export Notebook](https://github.com/minimaxir/imgbeddings/blob/main/examples/export.ipynb).
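For context, a usage sketch of the companion Python package; the class and method names below are assumed from the imgbeddings project and are not defined by this repository:
```python
# Sketch only: class and method names are assumed from the imgbeddings project, not from this repo.
from PIL import Image
from imgbeddings import imgbeddings

ibed = imgbeddings()                   # downloads the ONNX files hosted in this repository
image = Image.open("photo.jpg")        # placeholder path
embedding = ibed.to_embeddings(image)  # image embedding as a numpy array
print(embedding.shape)
```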
## License
MIT
|
huggingtweets/jacobe
|
huggingtweets
| 2022-03-27T23:02:12Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T23:01:35Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jacobe/1648422127637/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1025926108984664064/2ZHTSIof_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rowel Atienza</div>
<div style="text-align: center; font-size: 14px;">@jacobe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rowel Atienza.
| Data | Rowel Atienza |
| --- | --- |
| Tweets downloaded | 100 |
| Retweets | 29 |
| Short tweets | 4 |
| Tweets kept | 67 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1uzq4b7w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jacobe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ouo6sis) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ouo6sis/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jacobe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/baguioni
|
huggingtweets
| 2022-03-27T22:55:21Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T22:54:40Z |
---
language: en
thumbnail: http://www.huggingtweets.com/baguioni/1648421716784/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from baguio.
| Data | baguio |
| --- | --- |
| Tweets downloaded | 3012 |
| Retweets | 1090 |
| Short tweets | 527 |
| Tweets kept | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1z9nh9v8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2s53fr1o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/baguioni-elonmusk-jacobe
|
huggingtweets
| 2022-03-27T22:44:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T22:43:39Z |
---
language: en
thumbnail: http://www.huggingtweets.com/baguioni-elonmusk-jacobe/1648421056394/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1025926108984664064/2ZHTSIof_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1506662013707046914/hVtCPrPL_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Rowel Atienza & baguio</div>
<div style="text-align: center; font-size: 14px;">@baguioni-elonmusk-jacobe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Rowel Atienza & baguio.
| Data | Elon Musk | Rowel Atienza | baguio |
| --- | --- | --- | --- |
| Tweets downloaded | 1621 | 100 | 3012 |
| Retweets | 69 | 29 | 1090 |
| Short tweets | 520 | 4 | 527 |
| Tweets kept | 1032 | 67 | 1395 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xuj1tda/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @baguioni-elonmusk-jacobe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fpkbu3i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fpkbu3i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/baguioni-elonmusk-jacobe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Splend1dchan/t5small4-squad1024
|
Splend1dchan
| 2022-03-27T22:26:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-27T14:15:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5small4-squad1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5small4-squad1024
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
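These values map roughly onto 🤗 Transformers `TrainingArguments` as sketched below (an approximation only; the TPU launch configuration is handled outside `TrainingArguments` and the exact training script is not provided):
```python
# Sketch only: approximate mapping of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="t5small4-squad1024",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=16,
    num_train_epochs=4,
    lr_scheduler_type="linear",
)
```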
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.9.0+cu102
- Tokenizers 0.11.6
|
theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-v0.1
|
theResearchNinja
| 2022-03-27T21:51:10Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:few_nerd",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-27T20:34:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- few_nerd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Cybonto-distilbert-base-uncased-finetuned-ner-v0.1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: few_nerd
type: few_nerd
args: supervised
metrics:
- name: Precision
type: precision
value: 0.7377633209417596
- name: Recall
type: recall
value: 0.7817648386368765
- name: F1
type: f1
value: 0.7591269959856158
- name: Accuracy
type: accuracy
value: 0.9383331648547562
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cybonto-distilbert-base-uncased-finetuned-ner-v0.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the few_nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1930
- Precision: 0.7378
- Recall: 0.7818
- F1: 0.7591
- Accuracy: 0.9383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2001 | 1.0 | 3661 | 0.1954 | 0.7244 | 0.7750 | 0.7488 | 0.9360 |
| 0.1717 | 2.0 | 7322 | 0.1898 | 0.7392 | 0.7767 | 0.7575 | 0.9384 |
| 0.1485 | 3.0 | 10983 | 0.1930 | 0.7378 | 0.7818 | 0.7591 | 0.9383 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
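A minimal inference sketch, assuming the standard `token-classification` pipeline; the few-nerd entity label set comes from the model config and is not listed here:
```python
# Sketch only: group word pieces into entity spans; label names come from the model config.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="theResearchNinja/Cybonto-distilbert-base-uncased-finetuned-ner-v0.1",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited the Eiffel Tower in Paris."))
```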
|
leonadase/bert-base-chinese-finetuned-fdRE
|
leonadase
| 2022-03-27T20:52:06Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval2010_task8",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T19:04:51Z |
---
tags:
- generated_from_trainer
datasets:
- sem_eval2010_task8
metrics:
- accuracy
model-index:
- name: bert-base-chinese-finetuned-fdRE
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval2010_task8
type: sem_eval2010_task8
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9080962800875274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-fdRE
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the sem_eval2010_task8 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 46 | 0.5571 | 0.7812 |
| No log | 2.0 | 92 | 0.4030 | 0.8621 |
| No log | 3.0 | 138 | 0.3139 | 0.8928 |
| No log | 4.0 | 184 | 0.2716 | 0.9081 |
| No log | 5.0 | 230 | 0.2564 | 0.9081 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
willcai/wav2vec2_common_voice_accents_indian_only_rerun
|
willcai
| 2022-03-27T18:00:16Z | 2 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-27T06:51:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_indian_only_rerun
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_indian_only_rerun
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 588
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6205 | 25.0 | 400 | 1.4584 |
| 0.3427 | 50.0 | 800 | 1.8377 |
| 0.1213 | 75.0 | 1200 | 1.6086 |
| 0.0643 | 100.0 | 1600 | 1.5136 |
| 0.0433 | 125.0 | 2000 | 1.4882 |
| 0.0323 | 150.0 | 2400 | 1.2204 |
| 0.0265 | 175.0 | 2800 | 1.3034 |
| 0.0206 | 200.0 | 3200 | 1.2866 |
| 0.0191 | 225.0 | 3600 | 1.2337 |
| 0.0148 | 250.0 | 4000 | 1.1729 |
| 0.0121 | 275.0 | 4400 | 1.2059 |
| 0.0105 | 300.0 | 4800 | 1.1246 |
| 0.01 | 325.0 | 5200 | 1.1397 |
| 0.0098 | 350.0 | 5600 | 1.1684 |
| 0.0073 | 375.0 | 6000 | 1.1030 |
| 0.0061 | 400.0 | 6400 | 1.2077 |
| 0.0049 | 425.0 | 6800 | 1.2653 |
| 0.0044 | 450.0 | 7200 | 1.1587 |
| 0.0037 | 475.0 | 7600 | 1.2283 |
| 0.0033 | 500.0 | 8000 | 1.1897 |
| 0.0026 | 525.0 | 8400 | 1.2633 |
| 0.0023 | 550.0 | 8800 | 1.2571 |
| 0.002 | 575.0 | 9200 | 1.2807 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
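A minimal inference sketch, assuming the checkpoint works with the `automatic-speech-recognition` pipeline and 16 kHz input audio; the file path below is a placeholder:
```python
# Sketch only: "sample.wav" is a placeholder path to a 16 kHz recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="willcai/wav2vec2_common_voice_accents_indian_only_rerun",
)
print(asr("sample.wav"))
```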
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
|
scasutt
| 2022-03-27T17:07:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-25T17:45:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_augment_0.1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4658
- Wer: 0.5037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.447 | 1.05 | 250 | 3.3799 | 1.0 |
| 3.089 | 2.1 | 500 | 3.4868 | 1.0 |
| 3.063 | 3.15 | 750 | 3.3155 | 1.0 |
| 2.4008 | 4.2 | 1000 | 1.2934 | 0.8919 |
| 1.618 | 5.25 | 1250 | 0.7847 | 0.7338 |
| 1.3038 | 6.3 | 1500 | 0.6459 | 0.6712 |
| 1.2074 | 7.35 | 1750 | 0.5705 | 0.6269 |
| 1.1062 | 8.4 | 2000 | 0.5267 | 0.5843 |
| 1.026 | 9.45 | 2250 | 0.5108 | 0.5683 |
| 0.9505 | 10.5 | 2500 | 0.5066 | 0.5568 |
| 0.893 | 11.55 | 2750 | 0.5161 | 0.5532 |
| 0.8535 | 12.6 | 3000 | 0.4994 | 0.5341 |
| 0.8462 | 13.65 | 3250 | 0.4626 | 0.5262 |
| 0.8334 | 14.7 | 3500 | 0.4593 | 0.5197 |
| 0.842 | 15.75 | 3750 | 0.4651 | 0.5126 |
| 0.7678 | 16.81 | 4000 | 0.4687 | 0.5120 |
| 0.7873 | 17.86 | 4250 | 0.4716 | 0.5070 |
| 0.7486 | 18.91 | 4500 | 0.4657 | 0.5033 |
| 0.7073 | 19.96 | 4750 | 0.4658 | 0.5037 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119801
|
YXHugging
| 2022-03-27T16:53:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T01:21:43Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 999.5670927087938
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119801
- CO2 Emissions (in grams): 999.5670927087938
## Validation Metrics
- Loss: 0.9767692685127258
- Accuracy: 0.5738333333333333
- Macro F1: 0.5698748846905103
- Micro F1: 0.5738333333333333
- Weighted F1: 0.5698748846905102
- Macro Precision: 0.5734242161804903
- Micro Precision: 0.5738333333333333
- Weighted Precision: 0.5734242161804902
- Macro Recall: 0.5738333333333333
- Micro Recall: 0.5738333333333333
- Weighted Recall: 0.5738333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119801
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119801", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
EMBO/bio-lm
|
EMBO
| 2022-03-27T15:46:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"language model",
"dataset:EMBO/biolang",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- english
thumbnail:
tags:
- language model
license:
datasets:
- EMBO/biolang
metrics:
-
---
# bio-lm
## Model description
This model is a [RoBERTa base pre-trained model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang).
## Intended uses & limitations
#### How to use
The intended use of this model is to be fine-tuned for downstream tasks, token classification in particular.
To have a quick check of the model as-is in a fill-mask task:
```python
from transformers import pipeline, RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
text = "Let us try this model to see if it <mask>."
fill_mask = pipeline(
"fill-mask",
model='EMBO/bio-lm',
tokenizer=tokenizer
)
fill_mask(text)
```
#### Limitations and bias
This model should be fine-tuned on a specific task such as token classification.
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained with a masked language modeling task on the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang), which includes 12 million examples from abstracts and figure legends extracted from papers published in the life sciences.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m lm.train /data/json/oapmc_abstracts_figs/ MLM`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with: 12005390 examples
- Evaluating on: 36713 examples
- Epochs: 3.0
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- tensorboard run: lm-MLM-2021-01-27T15-17-43.113766
End of training:
```
trainset: 'loss': 0.8653350830078125
validation set: 'eval_loss': 0.8192330598831177, 'eval_recall': 0.8154601116513597
```
## Eval results
Eval on test set:
```
recall: 0.814471959728645
```
|
perevalov/query-validation-lcquad
|
perevalov
| 2022-03-27T14:04:19Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"kgqa",
"question answering",
"sparql",
"bert-base-cased",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-27T09:51:36Z |
---
language: en
tags:
- kgqa
- question answering
- sparql
- bert-base-cased
license: apache-2.0
---
# SPARQL Query Validation model
## Model description
## Intended uses & limitations
### How to use
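A minimal loading sketch, assuming the repository contains a Keras SavedModel that `huggingface_hub.from_pretrained_keras` can restore; the expected input encoding for question/query pairs is not documented here:
```python
# Sketch only: loads the tf-keras checkpoint; the expected input encoding is not documented.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("perevalov/query-validation-lcquad")
model.summary()
```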
|
EMBO/sd-smallmol-roles
|
EMBO
| 2022-03-27T13:28:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"token classification",
"dataset:EMBO/sd-nlp",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-19T11:14:58Z |
---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-smallmol-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `SMALL_MOL_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of small molecules with regard to the causal hypotheses tested in experiments reported in scientific papers.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-smallmol-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: SMALL_MOL_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.33
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On 7178 example of test set with `sklearn.metrics`:
```
precision recall f1-score support
CONTROLLED_VAR 0.76 0.90 0.83 2946
MEASURED_VAR 0.60 0.71 0.65 852
micro avg 0.73 0.86 0.79 3798
macro avg 0.68 0.80 0.74 3798
weighted avg 0.73 0.86 0.79 3798
{'test_loss': 0.011743436567485332, 'test_accuracy_score': 0.9951612532624371, 'test_precision': 0.7261345852895149, 'test_recall': 0.8551869404949973, 'test_f1': 0.7853947527505744, 'test_runtime': 58.0378, 'test_samples_per_second': 123.678, 'test_steps_per_second': 1.947}
```
|
EMBO/sd-geneprod-roles
|
EMBO
| 2022-03-27T13:23:03Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"token classification",
"dataset:EMBO/sd-nlp",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-19T10:38:53Z |
---
language:
- english
thumbnail:
tags:
- token classification
license: agpl-3.0
datasets:
- EMBO/sd-nlp
metrics:
-
---
# sd-geneprod-roles
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `GENEPROD_ROLES` configuration to perform pure context-dependent semantic role classification of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is to infer the semantic role of gene products (genes and proteins) with regard to the causal hypotheses tested in experiments reported in scientific papers.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s>The <mask> overexpression in cells caused an increase in <mask> expression.</s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-geneprod-roles')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-nlp dataset](https://huggingface.co/datasets/EMBO/sd-nlp) which includes manually annotated examples.
## Training procedure
The training was run on an NVIDIA DGX Station with 4x Tesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Model fine-tuned: EMBL/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: GENEPROD_ROLES
- Training with 48771 examples.
- Evaluating on 13801 examples.
- Training on 15 features: O, I-CONTROLLED_VAR, B-CONTROLLED_VAR, I-MEASURED_VAR, B-MEASURED_VAR
- Epochs: 0.9
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On 7178 example of test set with `sklearn.metrics`:
```
precision recall f1-score support
CONTROLLED_VAR 0.81 0.86 0.83 7835
MEASURED_VAR 0.82 0.85 0.84 9330
micro avg 0.82 0.85 0.83 17165
macro avg 0.82 0.85 0.83 17165
weighted avg 0.82 0.85 0.83 17165
{'test_loss': 0.03846803680062294, 'test_accuracy_score': 0.9854472664459946, 'test_precision': 0.8156312625250501, 'test_recall': 0.8535974366443344, 'test_f1': 0.8341825841897008, 'test_runtime': 58.7369, 'test_samples_per_second': 122.206, 'test_steps_per_second': 1.924}
```
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119798
|
YXHugging
| 2022-03-27T12:58:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T21:07:59Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1013.8825767332373
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119798
- CO2 Emissions (in grams): 1013.8825767332373
## Validation Metrics
- Loss: 0.9646632075309753
- Accuracy: 0.5789333333333333
- Macro F1: 0.5775792001871465
- Micro F1: 0.5789333333333333
- Weighted F1: 0.5775792001871465
- Macro Precision: 0.5829444191847423
- Micro Precision: 0.5789333333333333
- Weighted Precision: 0.5829444191847424
- Macro Recall: 0.5789333333333333
- Micro Recall: 0.5789333333333333
- Weighted Recall: 0.5789333333333333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119798
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119798", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
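The snippet above stops at the raw model outputs. As a hedged follow-up (reusing `model` and `outputs` from above, and assuming the exported config carries the usual `id2label` mapping that AutoTrain classifiers ship with), the logits can be turned into a predicted label:
```
import torch

# Convert raw logits into class probabilities and pick the top label.
# Assumes model.config.id2label is populated, as is standard for AutoTrain classifiers.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```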
|
YXHugging/autotrain-xlm-roberta-base-reviews-672119797
|
YXHugging
| 2022-03-27T12:55:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain",
"unk",
"dataset:YXHugging/autotrain-data-xlm-roberta-base-reviews",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T21:05:03Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- YXHugging/autotrain-data-xlm-roberta-base-reviews
co2_eq_emissions: 1019.0229633198007
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 672119797
- CO2 Emissions (in grams): 1019.0229633198007
## Validation Metrics
- Loss: 0.9898674488067627
- Accuracy: 0.5688083333333334
- Macro F1: 0.5640966271895913
- Micro F1: 0.5688083333333334
- Weighted F1: 0.5640966271895913
- Macro Precision: 0.5673737438011194
- Micro Precision: 0.5688083333333334
- Weighted Precision: 0.5673737438011194
- Macro Recall: 0.5688083333333334
- Micro Recall: 0.5688083333333334
- Weighted Recall: 0.5688083333333334
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/YXHugging/autotrain-xlm-roberta-base-reviews-672119797
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("YXHugging/autotrain-xlm-roberta-base-reviews-672119797", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Danik51002/NewModel
|
Danik51002
| 2022-03-27T12:52:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-13T16:51:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: NewModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NewModel
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 200
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data
|
scasutt
| 2022-03-27T11:32:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-27T08:49:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6357
- Wer: 0.5496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6073 | 2.1 | 250 | 3.5111 | 1.0 |
| 3.0828 | 4.2 | 500 | 3.5133 | 1.0 |
| 1.9969 | 6.3 | 750 | 1.3924 | 0.9577 |
| 0.9279 | 8.4 | 1000 | 0.8378 | 0.7243 |
| 0.6692 | 10.5 | 1250 | 0.7367 | 0.6394 |
| 0.5273 | 12.6 | 1500 | 0.6703 | 0.5907 |
| 0.4314 | 14.7 | 1750 | 0.6594 | 0.5718 |
| 0.3809 | 16.8 | 2000 | 0.6138 | 0.5559 |
| 0.3934 | 18.9 | 2250 | 0.6357 | 0.5496 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
yy642/bert-base-uncased-finetuned-mnli-512-10
|
yy642
| 2022-03-27T11:06:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-27T01:55:50Z |
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-512-10
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9355947399880454
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-512-10
This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-512-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-512-5) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4991
- Accuracy: 0.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0514 | 1.0 | 16363 | 0.4557 | 0.9265 |
| 0.0369 | 2.0 | 32726 | 0.4548 | 0.9323 |
| 0.0249 | 3.0 | 49089 | 0.4376 | 0.9320 |
| 0.0197 | 4.0 | 65452 | 0.4991 | 0.9356 |
| 0.0135 | 5.0 | 81815 | 0.5424 | 0.9341 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Danik51002/Example
|
Danik51002
| 2022-03-27T08:55:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T07:58:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: Example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Example
This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- gradient_accumulation_steps: 20
- total_train_batch_size: 840
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- num_epochs: 300
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
ItsMe1111/EDSR
|
ItsMe1111
| 2022-03-27T06:18:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-27T06:18:32Z |
---
license: apache-2.0
---
|
huggingtweets/psimon365
|
huggingtweets
| 2022-03-27T02:56:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T02:56:02Z |
---
language: en
thumbnail: http://www.huggingtweets.com/psimon365/1648349798068/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507859834107879426/d5Jqrb7Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Psimon 🌐</div>
<div style="text-align: center; font-size: 14px;">@psimon365</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Psimon 🌐.
| Data | Psimon 🌐 |
| --- | --- |
| Tweets downloaded | 181 |
| Retweets | 0 |
| Short tweets | 34 |
| Tweets kept | 147 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/q7gcbo7v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @psimon365's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kyaiz92o) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kyaiz92o/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/psimon365')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
scasutt/wav2vec2-base_toy_train_data_random_noise
|
scasutt
| 2022-03-27T02:27:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-27T00:14:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_random_noise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_random_noise
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0909
- Wer: 0.7351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.128 | 2.1 | 250 | 3.5052 | 1.0 |
| 3.0423 | 4.2 | 500 | 2.9312 | 1.0 |
| 1.4109 | 6.3 | 750 | 1.2618 | 0.8915 |
| 0.9132 | 8.4 | 1000 | 1.1074 | 0.8436 |
| 0.7146 | 10.5 | 1250 | 1.0397 | 0.7876 |
| 0.5418 | 12.6 | 1500 | 1.0359 | 0.7662 |
| 0.4649 | 14.7 | 1750 | 1.0469 | 0.7467 |
| 0.4127 | 16.8 | 2000 | 1.0655 | 0.7404 |
| 0.3881 | 18.9 | 2250 | 1.0909 | 0.7351 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-base_toy_train_data_random_noise_0.1
|
scasutt
| 2022-03-27T00:13:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-26T22:03:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_random_noise_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_random_noise_0.1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- Wer: 0.7213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1296 | 2.1 | 250 | 3.5088 | 1.0 |
| 3.0728 | 4.2 | 500 | 3.1694 | 1.0 |
| 1.8686 | 6.3 | 750 | 1.3414 | 0.9321 |
| 1.1241 | 8.4 | 1000 | 1.0196 | 0.8321 |
| 0.8704 | 10.5 | 1250 | 0.9387 | 0.7962 |
| 0.6734 | 12.6 | 1500 | 0.9309 | 0.7640 |
| 0.5832 | 14.7 | 1750 | 0.9329 | 0.7346 |
| 0.5207 | 16.8 | 2000 | 0.9060 | 0.7247 |
| 0.4857 | 18.9 | 2250 | 0.9263 | 0.7213 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/mkobach-naval-shaneaparrish
|
huggingtweets
| 2022-03-27T00:07:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-27T00:04:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mkobach-naval-shaneaparrish/1648339620049/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374075536595505154/1_1jV_AF_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1253758424292171778/48gD7Hne_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1256841238298292232/ycqwaMI2_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthew Kobach & Shane Parrish & Naval</div>
<div style="text-align: center; font-size: 14px;">@mkobach-naval-shaneaparrish</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matthew Kobach & Shane Parrish & Naval.
| Data | Matthew Kobach | Shane Parrish | Naval |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3197 | 3249 |
| Retweets | 135 | 102 | 181 |
| Short tweets | 444 | 147 | 617 |
| Tweets kept | 2669 | 2948 | 2451 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17cy2tt4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mkobach-naval-shaneaparrish's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zkb00dh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mkobach-naval-shaneaparrish')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Mnauel/wav2vec2-base-finetuned-ks
|
Mnauel
| 2022-03-26T20:53:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-12T10:51:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5766
- Accuracy: 0.8308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.7247 | 0.7462 |
| No log | 2.0 | 14 | 0.6844 | 0.7615 |
| 0.4279 | 3.0 | 21 | 0.7254 | 0.7462 |
| 0.4279 | 4.0 | 28 | 0.5891 | 0.8 |
| 0.4279 | 5.0 | 35 | 0.6991 | 0.7462 |
| 0.4478 | 6.0 | 42 | 0.6579 | 0.7615 |
| 0.4478 | 7.0 | 49 | 0.6164 | 0.8 |
| 0.4478 | 8.0 | 56 | 0.6191 | 0.8077 |
| 0.4194 | 9.0 | 63 | 0.5766 | 0.8308 |
| 0.4194 | 10.0 | 70 | 0.5704 | 0.8154 |
| 0.4194 | 11.0 | 77 | 0.6518 | 0.8 |
| 0.3833 | 12.0 | 84 | 0.6190 | 0.8077 |
| 0.3833 | 13.0 | 91 | 0.5693 | 0.8231 |
| 0.3833 | 14.0 | 98 | 0.5628 | 0.8231 |
| 0.3607 | 15.0 | 105 | 0.5741 | 0.8154 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.10.3
|
dannyvas23/electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
|
dannyvas23
| 2022-03-26T19:22:14Z | 25 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"sentiment",
"emotion",
"es",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T17:19:56Z |
---
license: afl-3.0
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "La vida no merece la pena"
example_title: "Ejemplo 1"
- text: "Para vivir así lo mejor es estar muerto"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
- text: "Quiero terminar con todo"
example_title: "Ejemplo 4"
- text: "Disfruto de la vista"
example_title: "Ejemplo 5"
metrics:
- accuracy
model-index:
- name: electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-discriminator-finetuned-clasificacion-texto-suicida
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0458
- Accuracy: 0.9916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.161100 | 1.0 | 0.133057 | 0.952718 |
| 0.134500 | 2.0 | 0.110966 | 0.960804 |
| 0.108500 | 3.0 | 0.086417 | 0.970835 |
| 0.099400 | 4.0 | 0.073618 | 0.974856 |
| 0.090500 | 5.0 | 0.065231 | 0.979629 |
| 0.080700 | 6.0 | 0.060849 | 0.982324 |
| 0.069200 | 7.0 | 0.054718 | 0.986125 |
| 0.060400 | 8.0 | 0.051153 | 0.985948 |
| 0.048200 | 9.0 | 0.045747 | 0.989748 |
| 0.045500 | 10.0 | 0.049992 | 0.988069 |
| 0.043400 | 11.0 | 0.046325 | 0.990234 |
| 0.034300 | 12.0 | 0.050746 | 0.989792 |
| 0.032900 | 13.0 | 0.043434 | 0.991737 |
| 0.028400 | 14.0 | 0.045003 | 0.991869 |
| 0.022300 | 15.0 | 0.045819 | 0.991648 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
world-wide/is-legit-kwd-march-27
|
world-wide
| 2022-03-26T18:44:40Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:bozelosp/autotrain-data-legit-keyword",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T18:44:03Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bozelosp/autotrain-data-legit-keyword
co2_eq_emissions: 0.5745216001459987
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 668419758
- CO2 Emissions (in grams): 0.5745216001459987
## Validation Metrics
- Loss: 0.5012844800949097
- Accuracy: 0.8057228915662651
- Precision: 0.7627627627627628
- Recall: 0.8355263157894737
- AUC: 0.868530701754386
- F1: 0.7974882260596545
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bozelosp/autotrain-legit-keyword-668419758
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bozelosp/autotrain-legit-keyword-668419758", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bozelosp/autotrain-legit-keyword-668419758", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
dannyvas23/clasificacion-texto-suicida-finetuned-amazon-review
|
dannyvas23
| 2022-03-26T17:12:23Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"sentiment",
"emotion",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-21T19:26:40Z |
---
language: "es"
tags:
- generated_from_trainer
- sentiment
- emotion
widget:
- text: "no me gusta esta vida."
example_title: "Ejemplo 1"
- text: "odio estar ahi"
example_title: "Ejemplo 2"
- text: "me siento triste por no poder viajar"
example_title: "Ejemplo 3"
metrics:
- accuracy
model-index:
- name: clasificacion-texto-suicida-finetuned-amazon-review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificacion-texto-suicida-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Accuracy: 0.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1643 | 1.0 | 12022 | 0.1546 | 0.9488 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/distilgpt2-500e
|
bigmorning
| 2022-03-26T16:37:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-26T16:31:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt2-500e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt2-500e
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
zuppif/versioning-test
|
zuppif
| 2022-03-26T13:35:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-26T13:34:47Z |
| | uid | hidden_size |
|---:|:------------------------------------------------------------------------------------------------------------------------|--------------:|
| 0 | [e87a4e028b11ec7bf770c6f3ab5c6349](https://huggingface.co/zuppif/versioning-test/tree/e87a4e028b11ec7bf770c6f3ab5c6349) | 8 |
| 1 | [48f2a327cfb7cb0f9b519d9abf73a9be](https://huggingface.co/zuppif/versioning-test/tree/48f2a327cfb7cb0f9b519d9abf73a9be) | 16 |
| 2 | [1c9d18df9ec06b5f7e2f49b2ef1cb826](https://huggingface.co/zuppif/versioning-test/tree/1c9d18df9ec06b5f7e2f49b2ef1cb826) | 32 |
|
Roshan777/finetuning-sentiment-model-300-samples
|
Roshan777
| 2022-03-26T12:54:48Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T13:02:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-300-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.6833333333333333
- name: F1
type: f1
value: 0.6153846153846154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-300-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Accuracy: 0.6833
- F1: 0.6154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Mr-Wick/Roberta
|
Mr-Wick
| 2022-03-26T12:39:55Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-23T16:08:46Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Roberta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16476, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-base_toy_train_data_fast_10pct
|
scasutt
| 2022-03-26T12:28:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-26T10:09:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_fast_10pct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_fast_10pct
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3087
- Wer: 0.7175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1309 | 1.05 | 250 | 3.4541 | 0.9982 |
| 3.0499 | 2.1 | 500 | 3.0231 | 0.9982 |
| 1.4839 | 3.15 | 750 | 1.4387 | 0.9257 |
| 1.1697 | 4.2 | 1000 | 1.3729 | 0.8792 |
| 0.9353 | 5.25 | 1250 | 1.2608 | 0.8445 |
| 0.7298 | 6.3 | 1500 | 1.1867 | 0.8052 |
| 0.6418 | 7.35 | 1750 | 1.2414 | 0.7997 |
| 0.5698 | 8.4 | 2000 | 1.2240 | 0.7766 |
| 0.5084 | 9.45 | 2250 | 1.1910 | 0.7687 |
| 0.4912 | 10.5 | 2500 | 1.2241 | 0.7617 |
| 0.4144 | 11.55 | 2750 | 1.2412 | 0.7477 |
| 0.4153 | 12.6 | 3000 | 1.2736 | 0.7511 |
| 0.405 | 13.65 | 3250 | 1.2827 | 0.7328 |
| 0.3852 | 14.7 | 3500 | 1.1981 | 0.7331 |
| 0.3829 | 15.75 | 3750 | 1.3035 | 0.7347 |
| 0.3538 | 16.81 | 4000 | 1.3003 | 0.7240 |
| 0.3385 | 17.86 | 4250 | 1.3354 | 0.7304 |
| 0.3108 | 18.91 | 4500 | 1.2983 | 0.7229 |
| 0.3037 | 19.96 | 4750 | 1.3087 | 0.7175 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
donyd/distilbert-finetuned-imdb
|
donyd
| 2022-03-26T10:29:06Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-26T00:32:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: donyd/distilbert-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# donyd/distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8432
- Validation Loss: 2.6247
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8432 | 2.6247 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.7.0
- Tokenizers 0.11.6
|
lighteternal/wav2vec2-large-xlsr-53-greek
|
lighteternal
| 2022-03-26T10:12:37Z | 2,071 | 8 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"speech",
"xlsr-fine-tuning-week",
"el",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: el
datasets:
- common_voice
tags:
- audio
- hf-asr-leaderboard
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Greek by Lighteternal
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: CommonVoice (EL), CSS10 (EL)
type: CSS10 + mozilla-foundation/common_voice_7_0
args: el
metrics:
- name: Test WER
type: wer
value: 10.497628
- name: Test CER
type: cer
value: 2.875260
---
# Greek (el) version of the XLSR-Wav2Vec2 automatic speech recognition (ASR) model
### By the Hellenic Army Academy and the Technical University of Crete
* language: el
* license: apache-2.0
* dataset: CommonVoice (EL), 364MB: https://commonvoice.mozilla.org/el/datasets + CSS10 (EL), 1.22GB: https://github.com/Kyubyong/css10
* model: XLSR-Wav2Vec2, trained for 50 epochs
* metrics: Word Error Rate (WER)
## Model description
UPDATE: We repeated the fine-tuning process using an additional 1.22GB dataset from CSS10.
Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after the superior performance of Wav2Vec2 was demonstrated on the English ASR dataset LibriSpeech, Facebook AI presented XLSR-Wav2Vec2. XLSR stands for cross-lingual speech representations and refers to XLSR-Wav2Vec2's ability to learn speech representations that are useful across multiple languages.
Similar to Wav2Vec2, XLSR-Wav2Vec2 learns powerful speech representations from hundreds of thousands of hours of unlabeled speech in more than 50 languages. Similar to BERT's masked language modeling, the model learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network.
This model was trained for 50 epochs on a single NVIDIA RTX 3080, for approx. 8 hours.
## How to use for inference:
For a live demo, make sure that speech files are sampled at 16 kHz.
Instructions to test on CommonVoice extracts are provided in the ASR_Inference.ipynb notebook. A snippet is also available below:
```python
#!/usr/bin/env python
# coding: utf-8

# Loading dependencies and defining preprocessing functions
from transformers import Wav2Vec2ForCTC
from transformers import Wav2Vec2Processor
from datasets import load_dataset, load_metric
import re
import torchaudio
import librosa
import numpy as np
import torch

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'

def remove_special_characters(batch):
    batch["text"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["text"]
    return batch

def resample(batch):
    batch["speech"] = librosa.resample(np.asarray(batch["speech"]), 48_000, 16_000)
    batch["sampling_rate"] = 16_000
    return batch

def prepare_dataset(batch):
    # check that all files have the correct sampling rate
    assert (
        len(set(batch["sampling_rate"])) == 1
    ), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
    batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0]).input_values
    with processor.as_target_processor():
        batch["labels"] = processor(batch["target_text"]).input_ids
    return batch

# Loading model and dataset processor
model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")

# Preparing speech dataset to be suitable for inference
common_voice_test = load_dataset("common_voice", "el", split="test")
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.map(remove_special_characters, remove_columns=["sentence"])
common_voice_test = common_voice_test.map(speech_file_to_array_fn, remove_columns=common_voice_test.column_names)
common_voice_test = common_voice_test.map(resample, num_proc=8)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names, batch_size=8, num_proc=8, batched=True)

# Loading test dataset
common_voice_test_transcription = load_dataset("common_voice", "el", split="test")

# Performing inference on a random sample. Change the "example" value to try inference on different CommonVoice extracts
example = 123
input_dict = processor(common_voice_test["input_values"][example], return_tensors="pt", sampling_rate=16_000, padding=True)
logits = model(input_dict.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
print("Prediction:")
print(processor.decode(pred_ids[0]))
# πού θέλεις να πάμε ρώτησε φοβισμένα ο βασιλιάς

print("\nReference:")
print(common_voice_test_transcription["sentence"][example].lower())
# πού θέλεις να πάμε; ρώτησε φοβισμένα ο βασιλιάς.
```
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the predicted ids back to text
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 10.497628 %
### How to use for training:
Instructions and code to replicate the process are provided in the Fine_Tune_XLSR_Wav2Vec2_on_Greek_ASR_with_🤗_Transformers.ipynb notebook.
## Metrics
| Metric | Value |
| ----------- | ----------- |
| Training Loss | 0.0545 |
| Validation Loss | 0.1661 |
| CER on CommonVoice Test (%) *| 2.8753 |
| WER on CommonVoice Test (%) *| 10.4976 |
* Reference transcripts were lower-cased and stripped of punctuation and special characters.
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).
Based on the tutorial of Patrick von Platen: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
Original colab notebook here: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=V7YOT2mnUiea
|
scasutt/wav2vec2-base_toy_train_data_augmented
|
scasutt
| 2022-03-26T10:09:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-26T07:36:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base_toy_train_data_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_augmented
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0238
- Wer: 0.6969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.12 | 1.05 | 250 | 3.3998 | 0.9982 |
| 3.0727 | 2.1 | 500 | 3.1261 | 0.9982 |
| 1.9729 | 3.15 | 750 | 1.4868 | 0.9464 |
| 1.3213 | 4.2 | 1000 | 1.2598 | 0.8833 |
| 1.0508 | 5.25 | 1250 | 1.0014 | 0.8102 |
| 0.8483 | 6.3 | 1500 | 0.9475 | 0.7944 |
| 0.7192 | 7.35 | 1750 | 0.9493 | 0.7686 |
| 0.6447 | 8.4 | 2000 | 0.9872 | 0.7573 |
| 0.6064 | 9.45 | 2250 | 0.9587 | 0.7447 |
| 0.5384 | 10.5 | 2500 | 0.9332 | 0.7320 |
| 0.4985 | 11.55 | 2750 | 0.9926 | 0.7315 |
| 0.4643 | 12.6 | 3000 | 1.0008 | 0.7292 |
| 0.4565 | 13.65 | 3250 | 0.9522 | 0.7171 |
| 0.449 | 14.7 | 3500 | 0.9685 | 0.7140 |
| 0.4307 | 15.75 | 3750 | 1.0080 | 0.7077 |
| 0.4239 | 16.81 | 4000 | 0.9950 | 0.7023 |
| 0.389 | 17.86 | 4250 | 1.0260 | 0.7007 |
| 0.3471 | 18.91 | 4500 | 1.0012 | 0.6966 |
| 0.3276 | 19.96 | 4750 | 1.0238 | 0.6969 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
calebcsjm/reversed_harrypotter_generation
|
calebcsjm
| 2022-03-26T05:02:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T20:58:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reversed_harrypotter_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reversed_harrypotter_generation
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
nikhedward/t5-small-finetuned-multi-news
|
nikhedward
| 2022-03-26T04:31:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-26T03:43:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: t5-small-finetuned-multi-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 14.5549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-multi-news
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7775
- Rouge1: 14.5549
- Rouge2: 4.5934
- Rougel: 11.1178
- Rougelsum: 12.8964
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.0211 | 1.0 | 1405 | 2.7775 | 14.5549 | 4.5934 | 11.1178 | 12.8964 | 19.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/atarifounders
|
huggingtweets
| 2022-03-26T03:45:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-10T18:31:26Z |
---
language: en
thumbnail: http://www.huggingtweets.com/atarifounders/1648266306699/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1507523916981583875/6n7ng67H_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">koala/claw/soppy</div>
<div style="text-align: center; font-size: 14px;">@atarifounders</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from koala/claw/soppy.
| Data | koala/claw/soppy |
| --- | --- |
| Tweets downloaded | 3239 |
| Retweets | 129 |
| Short tweets | 883 |
| Tweets kept | 2227 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2gsc0jwi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atarifounders's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tl1eu60e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atarifounders')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ahmeddbahaa/mt5-finetuned-en-ar
|
ahmeddbahaa
| 2022-03-26T02:24:12Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-25T19:26:01Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: mt5-finetuned-en-ar
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: arabic
metrics:
- name: Rouge1
type: rouge
value: 0.2824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-finetuned-en-ar
This model is a fine-tuned version of [ahmeddbahaa/mt5-small-finetuned-mt5-en](https://huggingface.co/ahmeddbahaa/mt5-small-finetuned-mt5-en) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2314
- Rouge1: 0.2824
- Rouge2: 0.0
- Rougel: 0.2902
- Rougelsum: 0.298
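For reference, a minimal usage sketch with the standard `transformers` summarization pipeline (the input article below is only a placeholder) might look like:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mt5-finetuned-en-ar")

article = "ضع نص المقال العربي هنا"  # placeholder: put the Arabic article text here
summary = summarizer(article, max_length=64, min_length=10, truncation=True)
print(summary[0]["summary_text"])
```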
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.1685 | 1.0 | 4130 | 2.4262 | 0.0941 | 0.0235 | 0.1098 | 0.1098 |
| 2.686 | 2.0 | 8260 | 2.2853 | 0.2824 | 0.0 | 0.298 | 0.298 |
| 2.481 | 3.0 | 12390 | 2.2314 | 0.2824 | 0.0 | 0.2902 | 0.298 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pinecone/msmarco-distilbert-base-tas-b-covid
|
pinecone
| 2022-03-25T18:30:52Z | 152 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-25T18:20:41Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pinecone/msmarco-distilbert-base-tas-b-covid
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/msmarco-distilbert-base-tas-b-covid')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/msmarco-distilbert-base-tas-b-covid')
model = AutoModel.from_pretrained('pinecone/msmarco-distilbert-base-tas-b-covid')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pinecone/msmarco-distilbert-base-tas-b-covid)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6250 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MarginMSELoss.MarginMSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6250,
"weight_decay": 0.01
}
```
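Taken together, these parameters correspond roughly to the following `sentence-transformers` training sketch. The starting checkpoint and the training triples are assumptions (the repository name suggests a `msmarco-distilbert-base-tas-b` base, and `MarginMSELoss` expects each example to carry a teacher margin as its label), so treat this only as an illustration:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed starting checkpoint (suggested by the repository name, not stated in this card).
model = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-tas-b")

# Each example is a (query, positive passage, negative passage) triple whose label is
# the score margin produced by a teacher cross-encoder.
train_examples = [
    InputExample(texts=["covid transmission", "relevant passage ...", "irrelevant passage ..."], label=2.5),
    # ... more triples
]

train_dataloader = DataLoader(train_examples, batch_size=32, shuffle=True)
train_loss = losses.MarginMSELoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=6250,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```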
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
manandey/wav2vec2-large-xlsr-tamil
|
manandey
| 2022-03-25T16:52:49Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ta",
"dataset:common_voice",
"doi:10.57967/hf/0191",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ta
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Tamil by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 56.44
---
# Wav2Vec2-Large-XLSR-53-Tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model = Wav2Vec2ForCTC.from_pretrained("manandey/wav2vec2-large-xlsr-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\–\(\)]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluation: run the model over the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 56.44%
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
Wende/bert-finetuned-ner
|
Wende
| 2022-03-25T16:19:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-25T15:21:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9321670242614293
- name: Recall
type: recall
value: 0.9505217098619994
- name: F1
type: f1
value: 0.9412548954253812
- name: Accuracy
type: accuracy
value: 0.9860334373344322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0575
- Precision: 0.9322
- Recall: 0.9505
- F1: 0.9413
- Accuracy: 0.9860
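For reference, a minimal usage sketch with the standard `transformers` token-classification pipeline (the example sentence is a placeholder) might look like:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Wende/bert-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)

print(ner("Hugging Face is based in New York City."))
# Expected: ORG and LOC entities with confidence scores, following the CoNLL-2003 label set
```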
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2219 | 1.0 | 878 | 0.0716 | 0.9076 | 0.9288 | 0.9181 | 0.9808 |
| 0.0453 | 2.0 | 1756 | 0.0597 | 0.9297 | 0.9477 | 0.9386 | 0.9852 |
| 0.0239 | 3.0 | 2634 | 0.0575 | 0.9322 | 0.9505 | 0.9413 | 0.9860 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.8.2+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huggingtweets/rivatez
|
huggingtweets
| 2022-03-25T14:57:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T14:51:51Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rivatez/1648220244511/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421403684085374979/SoqYa6o3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Riva</div>
<div style="text-align: center; font-size: 14px;">@rivatez</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Riva.
| Data | Riva |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 780 |
| Short tweets | 405 |
| Tweets kept | 1993 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qe0i10s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rivatez's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2rspxzzv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rivatez')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vumichien/tf-bert-base-cased-squad2
|
vumichien
| 2022-03-25T14:02:14Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-25T13:56:15Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: tf-bert-base-cased-squad2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-bert-base-cased-squad2
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
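For reference, a minimal extractive question-answering sketch (question and context are placeholders; TensorFlow is assumed to be installed, since this repository provides TF weights) might look like:

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("vumichien/tf-bert-base-cased-squad2")
model = TFAutoModelForQuestionAnswering.from_pretrained("vumichien/tf-bert-base-cased-squad2")
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

result = qa(
    question="Where is the Eiffel Tower?",                       # placeholder
    context="The Eiffel Tower is a landmark in Paris, France.",  # placeholder
)
print(result["answer"], result["score"])
```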
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Tokenizers 0.11.6
|
azizbarank/mbert-finnic-ner
|
azizbarank
| 2022-03-25T13:55:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-25T12:43:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mbert-finnic-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finnic-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Finnish and Estonian parts of the "WikiANN" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
- Precision: 0.9090
- Recall: 0.9156
- F1: 0.9123
- Accuracy: 0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1636 | 1.0 | 2188 | 0.1385 | 0.8906 | 0.9000 | 0.8953 | 0.9601 |
| 0.0991 | 2.0 | 4376 | 0.1346 | 0.9099 | 0.9095 | 0.9097 | 0.9660 |
| 0.0596 | 3.0 | 6564 | 0.1427 | 0.9090 | 0.9156 | 0.9123 | 0.9672 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
bigmorning/try-m-e-perplexity594
|
bigmorning
| 2022-03-25T13:33:19Z | 10 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-25T13:28:27Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: try-m-e-perplexity594
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# try-m-e-perplexity594
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ssardorf/pegasus-xsum-new-dataset
|
ssardorf
| 2022-03-25T13:12:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-25T13:07:00Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-xsum-new-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-new-dataset
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Rouge1: 48.7306
- Rouge2: 34.1291
- Rougel: 44.0778
- Rougelsum: 45.7139
- Gen Len: 30.8889
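The ROUGE values above are of the kind produced by the `rouge` metric from `datasets`; a minimal sketch of how such scores are computed (with placeholder texts) is:

```python
from datasets import load_metric

rouge = load_metric("rouge")

predictions = ["the model generated summary"]   # placeholder
references = ["the reference summary"]          # placeholder

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print({name: round(score.mid.fmeasure * 100, 4) for name, score in scores.items()})
```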
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cpu
- Datasets 1.18.3
- Tokenizers 0.11.6
|