modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-11 18:29:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-11 18:25:24) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---
madatnlp/kor-math-roberta-finetune | madatnlp | 2022-05-02T11:44:14Z | 4 | 0 | transformers | ["transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-04-30T11:16:10Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/kor-math-roberta-finetune
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/kor-math-roberta-finetune
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3205
- Validation Loss: 1.1407
- Epoch: 26
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_bfloat16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4242 | 2.0873 | 0 |
| 1.9159 | 1.6264 | 1 |
| 1.5933 | 1.4521 | 2 |
| 1.3806 | 1.3584 | 3 |
| 1.2487 | 1.2904 | 4 |
| 1.1464 | 1.2388 | 5 |
| 1.0552 | 1.2076 | 6 |
| 0.9889 | 1.1818 | 7 |
| 0.9118 | 1.1607 | 8 |
| 0.8459 | 1.1349 | 9 |
| 0.7838 | 1.1193 | 10 |
| 0.7389 | 1.1193 | 11 |
| 0.6864 | 1.1080 | 12 |
| 0.6495 | 1.1001 | 13 |
| 0.6103 | 1.1001 | 14 |
| 0.5795 | 1.0990 | 15 |
| 0.5436 | 1.0954 | 16 |
| 0.5136 | 1.0997 | 17 |
| 0.4906 | 1.0954 | 18 |
| 0.4565 | 1.1021 | 19 |
| 0.4347 | 1.1075 | 20 |
| 0.4131 | 1.1075 | 21 |
| 0.3924 | 1.1220 | 22 |
| 0.3741 | 1.1298 | 23 |
| 0.3549 | 1.1352 | 24 |
| 0.3395 | 1.1286 | 25 |
| 0.3205 | 1.1407 | 26 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
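The card does not include a usage example. As a minimal sketch (assuming the checkpoint loads through the standard `transformers` fill-mask pipeline; the arithmetic prompt is illustrative only, not taken from the card):
```python
from transformers import pipeline

# The repository ships TensorFlow weights (the "tf" tag), so request the TF backend explicitly.
unmasker = pipeline(
    "fill-mask",
    model="madatnlp/kor-math-roberta-finetune",
    framework="tf",
)

# Build the prompt with the tokenizer's own mask token instead of hard-coding it.
prompt = f"1 + 1 = {unmasker.tokenizer.mask_token}"
for prediction in unmasker(prompt):
    print(prediction["token_str"], round(prediction["score"], 3))
```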
|
tristantristantristan/rumor | tristantristantristan | 2022-05-02T09:33:47Z | 5 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "en", "dataset:tristantristantristan/autotrain-data-rumour_detection", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-05-02T09:27:38Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- tristantristantristan/autotrain-data-rumour_detection
co2_eq_emissions: 0.056186258092819436
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 813825547
- CO2 Emissions (in grams): 0.056186258092819436
## Validation Metrics
- Loss: 0.15057753026485443
- Accuracy: 0.9738805970149254
- Precision: 0.9469026548672567
- Recall: 0.9304347826086956
- AUC: 0.9891149437157905
- F1: 0.9385964912280702
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/tristantristantristan/autotrain-rumour_detection-813825547
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("tristantristantristan/autotrain-rumour_detection-813825547", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("tristantristantristan/autotrain-rumour_detection-813825547", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
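The snippet above stops at the raw model outputs. A hedged, self-contained variant that also maps the logits to a predicted label (the label names come from the AutoTrain project configuration, which the card does not list) might look like this:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tristantristantristan/autotrain-rumour_detection-813825547"
model = AutoModelForSequenceClassification.from_pretrained(model_id, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn the logits into probabilities and report the best class with its label name.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], float(probs[0, predicted_id]))
```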
|
DioLiu/distilbert-base-uncased-finetuned-sst2 | DioLiu | 2022-05-02T03:06:36Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-05-02T02:28:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8967889908256881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5963
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.247 | 1.0 | 1404 | 0.3629 | 0.8865 |
| 0.1532 | 2.0 | 2808 | 0.3945 | 0.8979 |
| 0.0981 | 3.0 | 4212 | 0.4206 | 0.9025 |
| 0.0468 | 4.0 | 5616 | 0.5358 | 0.9014 |
| 0.0313 | 5.0 | 7020 | 0.5963 | 0.8968 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
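The card stops at the training summary; a minimal inference sketch, assuming the checkpoint works with the standard `text-classification` pipeline (the example sentences are illustrative only):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="DioLiu/distilbert-base-uncased-finetuned-sst2",
)

# SST-2 is a binary sentiment task, so each input yields a single label with a score.
for result in classifier(["A touching and funny film.", "A dull, lifeless script."]):
    print(result)
```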
|
Ghost1/bert-finetuned-squad1 | Ghost1 | 2022-05-02T02:28:59Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-05-02T00:04:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
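No usage example is given; a minimal extractive question-answering sketch, assuming the standard `question-answering` pipeline applies (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Ghost1/bert-finetuned-squad1")

# SQuAD-style extractive QA: the answer is a span copied out of the context.
result = qa(
    question="Which base model was fine-tuned?",
    context="bert-finetuned-squad1 is a version of bert-base-cased fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```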
|
voodooMaestro/finetuned-stories | voodooMaestro | 2022-05-02T00:24:29Z | 4 | 0 | transformers | ["transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-05-01T23:31:33Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: voodooMaestro/finetuned-stories
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# voodooMaestro/finetuned-stories
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9188
- Validation Loss: 1.5604
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9188 | 1.5604 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
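The repository ships TensorFlow weights (the "tf" tag). As a hedged masked-language-modelling sketch (the story-style prompt is an assumption, not something the card documents):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model_id = "voodooMaestro/finetuned-stories"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

# Use the tokenizer's own mask token rather than hard-coding it.
text = f"Once upon a time there was a {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="tf")
logits = model(**inputs).logits

# Report the top prediction for the masked position.
mask_index = tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0]
top_id = int(tf.argmax(logits[0, mask_index]))
print(tokenizer.decode([top_id]))
```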
|
fahadtouseef/wav2vec2-base-timit-demo-colab_1 | fahadtouseef | 2022-05-01T23:57:32Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T12:46:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab_1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3233
- Wer: 0.2574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0949 | 3.52 | 500 | 1.1140 | 0.7136 |
| 0.7584 | 7.04 | 1000 | 0.5312 | 0.5154 |
| 0.4254 | 10.56 | 1500 | 0.4489 | 0.4401 |
| 0.2708 | 14.08 | 2000 | 0.4108 | 0.3770 |
| 0.1855 | 17.61 | 2500 | 0.3881 | 0.3257 |
| 0.139 | 21.13 | 3000 | 0.3666 | 0.2958 |
| 0.1057 | 24.65 | 3500 | 0.3351 | 0.2748 |
| 0.0855 | 28.17 | 4000 | 0.3233 | 0.2574 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
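The card gives no inference example; a minimal sketch using the `automatic-speech-recognition` pipeline (the audio path is a placeholder, and the input should be 16 kHz mono, as expected by wav2vec2-base):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="fahadtouseef/wav2vec2-base-timit-demo-colab_1",
)

# "sample.wav" is a placeholder for a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```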
|
sherry7144/wav2vec2-base-timit-demo-colab2 | sherry7144 | 2022-05-01T23:51:54Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T23:01:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7746
- Wer: 0.5855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1452 | 13.89 | 500 | 2.9679 | 1.0 |
| 1.075 | 27.78 | 1000 | 0.7746 | 0.5855 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
SebastianS/bert-finetuned-ner | SebastianS | 2022-05-01T21:38:30Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-05-01T21:12:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Accuracy
type: accuracy
value: 0.9910634321093416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0544 | 1.0 | 1756 | 0.0440 | 0.9892 |
| 0.0246 | 2.0 | 3512 | 0.0417 | 0.9906 |
| 0.0105 | 3.0 | 5268 | 0.0452 | 0.9911 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
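A minimal usage sketch for the CoNLL-2003 entity tagger, assuming the standard `token-classification` pipeline (the sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SebastianS/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```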
|
Yanael/dummy-model | Yanael | 2022-05-01T20:00:15Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-05-01T19:30:42Z |
# Dummy Model
A dummy model created while following the Hugging Face course.
|
cuzeverynameistaken/wav2vec2-base-timit-demo-colab1 | cuzeverynameistaken | 2022-05-01T19:55:38Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T14:53:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7170
- Wer: 0.4784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1915 | 13.89 | 500 | 3.1318 | 1.0 |
| 1.4993 | 27.78 | 1000 | 0.6736 | 0.5485 |
| 0.3416 | 41.67 | 1500 | 0.7111 | 0.5092 |
| 0.1937 | 55.56 | 2000 | 0.7170 | 0.4784 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cfilt/HiNER-collapsed-muril-base-cased | cfilt | 2022-05-01T19:48:15Z | 15 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:cfilt/HiNER-collapsed", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-04-29T17:19:39Z |
---
tags:
- generated_from_trainer
datasets:
- cfilt/HiNER-collapsed
metrics:
- precision
- recall
- f1
model-index:
- name: HiNER-collapsed-muril-base-cased
results:
- task:
name: Token Classification
type: token-classification
dataset:
type: cfilt/HiNER-collapsed
name: HiNER Collapsed
metrics:
- name: Precision
type: precision
value: 0.9049101352603298
- name: Recall
type: recall
value: 0.9209156735555891
- name: F1
type: f1
value: 0.9128427506027924
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER-collapsed-muril-base-cased
This model was trained from scratch on the cfilt/HiNER-collapsed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.14.0
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
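The card does not list the collapsed tag set. As a small sketch (assuming the labels are stored in the model config, as is usual for `transformers` token-classification checkpoints), they can be inspected directly:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "cfilt/HiNER-collapsed-muril-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# The collapsed HiNER label scheme used at fine-tuning time lives in the config.
print(model.config.id2label)
```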
|
voidism/diffcse-roberta-base-trans | voidism | 2022-05-01T19:30:38Z | 66 | 1 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "arxiv:2204.10298", "arxiv:2104.08821", "arxiv:2111.00899", "license:apache-2.0", "endpoints_compatible", "region:us"] | feature-extraction | 2022-04-14T15:20:39Z |
---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[](https://github.com/voidism/DiffCSE/)
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for the MLM generator to randomly replace tokens.
* `--generator_name`: the model name of the generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with, such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`roberta-base`, `roberta-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with an MLP layer but test it without one. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use an NVIDIA 2080 Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
## Citations
[](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
|
voidism/diffcse-bert-base-uncased-trans | voidism | 2022-05-01T19:24:20Z | 4 | 1 | transformers | ["transformers", "pytorch", "bert", "feature-extraction", "arxiv:2204.10298", "arxiv:2104.08821", "arxiv:2111.00899", "license:apache-2.0", "endpoints_compatible", "region:us"] | feature-extraction | 2022-04-14T15:19:25Z |
---
license: apache-2.0
---
# DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
[](https://github.com/voidism/DiffCSE/)
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
arXiv link: https://arxiv.org/abs/2204.10298
To be published in [**NAACL 2022**](https://2022.naacl.org/)
Authors:
[Yung-Sung Chuang](https://people.csail.mit.edu/yungsung/),
[Rumen Dangovski](http://super-ms.mit.edu/rumen.html),
[Hongyin Luo](http://people.csail.mit.edu/hyluo/),
[Yang Zhang](https://mitibmwatsonailab.mit.edu/people/yang-zhang/),
[Shiyu Chang](https://code-terminator.github.io/),
[Marin Soljačić](http://www.mit.edu/~soljacic/marin.html),
[Shang-Wen Li](https://swdanielli.github.io/),
[Scott Wen-tau Yih](https://scottyih.org/),
[Yoon Kim](https://people.csail.mit.edu/yoonkim/),
[James Glass](http://groups.csail.mit.edu/sls/people/glass.shtml)
Our code is mainly based on the code of [SimCSE](https://arxiv.org/abs/2104.08821). Please refer to their [repository](https://github.com/princeton-nlp/SimCSE) for more detailed information.
## Overview

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning [(Dangovski et al., 2021)](https://arxiv.org/abs/2111.00899), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
## Setups
[](https://www.python.org/downloads/release/python-395/)
### Requirements
* Python 3.9.5
### Install our customized Transformers package
```
cd transformers-4.2.1
pip install .
```
> If you have already installed `transformers==4.2.1` through pip, you need to put `modeling_bert.py` into `<your_python_env>/site-packages/transformers/models/bert/modeling_bert.py` and `modeling_roberta.py` into `<your_python_env>/site-packages/transformers/models/roberta/modeling_roberta.py`.
> We modify these two files in the package so that we can perform _conditional_ pretraining tasks using BERT/RoBERTa. If possible, please directly pip install our customized Transformers package.
### Install other packages
```
pip install -r requirements.txt
```
### Download the pretraining dataset
```
cd data
bash download_wiki.sh
```
### Download the downstream dataset
```
cd SentEval/data/downstream/
bash download_dataset.sh
```
## Training
(The same as `run_diffcse.sh`.)
```bash
python train.py \
--model_name_or_path bert-base-uncased \
--generator_name distilbert-base-uncased \
--train_file data/wiki1m_for_simcse.txt \
--output_dir <your_output_model_dir> \
--num_train_epochs 2 \
--per_device_train_batch_size 64 \
--learning_rate 7e-6 \
--max_seq_length 32 \
--evaluation_strategy steps \
--metric_for_best_model stsb_spearman \
--load_best_model_at_end \
--eval_steps 125 \
--pooler_type cls \
--mlp_only_train \
--overwrite_output_dir \
--logging_first_step \
--logging_dir <your_logging_dir> \
--temp 0.05 \
--do_train \
--do_eval \
--batchnorm \
--lambda_weight 0.005 \
--fp16 --masking_ratio 0.30
```
Our new arguments:
* `--lambda_weight`: the lambda coefficient mentioned in Section 3 of our paper.
* `--masking_ratio`: the masking ratio for the MLM generator to randomly replace tokens.
* `--generator_name`: the model name of the generator. For `bert-base-uncased`, we use `distilbert-base-uncased`. For `roberta-base`, we use `distilroberta-base`.
Arguments from [SimCSE](https://github.com/princeton-nlp/SimCSE):
* `--train_file`: Training file path (`data/wiki1m_for_simcse.txt`).
* `--model_name_or_path`: Pre-trained checkpoints to start with, such as BERT-based models (`bert-base-uncased`, `bert-large-uncased`, etc.) and RoBERTa-based models (`roberta-base`, `roberta-large`).
* `--temp`: Temperature for the contrastive loss. We always use `0.05`.
* `--pooler_type`: Pooling method.
* `--mlp_only_train`: For unsupervised SimCSE or DiffCSE, it works better to train the model with an MLP layer but test it without one. You should use this argument when training unsupervised SimCSE/DiffCSE models.
For the results in our paper, we use an NVIDIA 2080 Ti GPU with CUDA 11.2. Using different types of devices or different versions of CUDA/Python/PyTorch may lead to slightly different performance.
## Evaluation
[](https://colab.research.google.com/github/voidism/DiffCSE/blob/master/diffcse_evaluation.ipynb)
We provide a simple colab notebook to reproduce our results easily. We can also run the commands below for evaluation:
```bash
python evaluation.py \
--model_name_or_path <your_output_model_dir> \
--pooler cls_before_pooler \
--task_set <sts|transfer|full> \
--mode test
```
To evaluate our pretrained DiffCSE checkpoints, we can use the following scripts:
### BERT
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-bert-base-uncased-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
### RoBERTa
#### STS
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-sts \
--pooler cls_before_pooler \
--task_set sts \
--mode test
```
#### Transfer Tasks
```bash
python evaluation.py \
--model_name_or_path voidism/diffcse-roberta-base-trans \
--pooler cls_before_pooler \
--task_set transfer \
--mode test
```
For more detailed information, please check [SimCSE's GitHub repo](https://github.com/princeton-nlp/SimCSE).
## Pretrained models
[](https://huggingface.co/voidism)
* DiffCSE-BERT-base (STS): https://huggingface.co/voidism/diffcse-bert-base-uncased-sts
* DiffCSE-BERT-base (transfer tasks): https://huggingface.co/voidism/diffcse-bert-base-uncased-trans
* DiffCSE-RoBERTa-base (STS): https://huggingface.co/voidism/diffcse-roberta-base-sts
* DiffCSE-RoBERTa-base (transfer tasks): https://huggingface.co/voidism/diffcse-roberta-base-trans
We can load the models using the API provided by [SimCSE](https://github.com/princeton-nlp/SimCSE).
See [Getting Started](https://github.com/princeton-nlp/SimCSE#getting-started) for more information.
```python
from diffcse import DiffCSE
model_bert_sts = DiffCSE("voidism/diffcse-bert-base-uncased-sts")
model_bert_trans = DiffCSE("voidism/diffcse-bert-base-uncased-trans")
model_roberta_sts = DiffCSE("voidism/diffcse-roberta-base-sts")
model_roberta_trans = DiffCSE("voidism/diffcse-roberta-base-trans")
```
## Citations
[](https://doi.org/10.48550/arXiv.2204.10298)
Please cite our paper and the SimCSE paper if they are helpful to your work!
```bibtex
@inproceedings{chuang2022diffcse,
title={{DiffCSE}: Difference-based Contrastive Learning for Sentence Embeddings},
author={Chuang, Yung-Sung and Dangovski, Rumen and Luo, Hongyin and Zhang, Yang and Chang, Shiyu and Soljacic, Marin and Li, Shang-Wen and Yih, Wen-tau and Kim, Yoon and Glass, James},
booktitle={Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
year={2022}
}
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
|
agnihotri/cuad_contract_type | agnihotri | 2022-05-01T18:49:12Z | 4 | 1 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "autotrain", "en", "dataset:agnihotri/autotrain-data-contract_type", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-05-01T18:36:58Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- agnihotri/autotrain-data-contract_type
co2_eq_emissions: 0.07610944071640048
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 809725368
- CO2 Emissions (in grams): 0.07610944071640048
## Validation Metrics
- Loss: 0.05312908813357353
- Accuracy: 0.9911504424778761
- Macro F1: 0.9912087912087912
- Micro F1: 0.9911504424778761
- Weighted F1: 0.9908586988233007
- Macro Precision: 0.9942857142857143
- Micro Precision: 0.9911504424778761
- Weighted Precision: 0.9924146649810366
- Macro Recall: 0.99
- Micro Recall: 0.9911504424778761
- Weighted Recall: 0.9911504424778761
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/agnihotri/autotrain-contract_type-809725368
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("agnihotri/autotrain-contract_type-809725368", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("agnihotri/autotrain-contract_type-809725368", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
rjuez00/meddocan-beto-ner | rjuez00 | 2022-05-01T16:23:58Z | 8 | 0 | transformers | ["transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-05-01T16:21:07Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto_full_train_3_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto_full_train_3_epochs
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0445
- Precision: 0.9541
- Recall: 0.9481
- F1: 0.9511
- Accuracy: 0.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
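No usage example or label set is documented. A hedged sketch with the `token-classification` pipeline (the Spanish sentence is illustrative; the entity labels depend on the fine-tuning data, which the card does not describe):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="rjuez00/meddocan-beto-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("El paciente Juan Pérez ingresó en el Hospital La Paz de Madrid."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```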
|
Siyam/SKYLy | Siyam | 2022-05-01T16:02:55Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T08:47:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: SKYLy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SKYLy
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7645
- Wer: 0.4083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4215 | 4.26 | 400 | 1.6323 | 0.9857 |
| 0.5716 | 8.51 | 800 | 0.6679 | 0.5107 |
| 0.1721 | 12.77 | 1200 | 0.6935 | 0.4632 |
| 0.1063 | 17.02 | 1600 | 0.7533 | 0.4432 |
| 0.0785 | 21.28 | 2000 | 0.7208 | 0.4255 |
| 0.0608 | 25.53 | 2400 | 0.7481 | 0.4117 |
| 0.0493 | 29.79 | 2800 | 0.7645 | 0.4083 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
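As a hedged inference sketch without the pipeline wrapper (the audio path is a placeholder and the recording is assumed to be 16 kHz mono; the card does not state which Common Voice language was used):
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Siyam/SKYLy"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder for a 16 kHz mono recording.
speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding back to text.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```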
|
hassnain/wav2vec2-base-timit-demo-colab9 | hassnain | 2022-05-01T15:58:30Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T09:32:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab9
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1922
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 5.0683 | 1.42 | 500 | 3.2471 | 1.0 |
| 3.1349 | 2.85 | 1000 | 3.2219 | 1.0 |
| 3.1317 | 4.27 | 1500 | 3.2090 | 1.0 |
| 3.1262 | 5.7 | 2000 | 3.2152 | 1.0 |
| 3.1307 | 7.12 | 2500 | 3.2147 | 1.0 |
| 3.1264 | 8.55 | 3000 | 3.2072 | 1.0 |
| 3.1279 | 9.97 | 3500 | 3.2158 | 1.0 |
| 3.1287 | 11.4 | 4000 | 3.2190 | 1.0 |
| 3.1256 | 12.82 | 4500 | 3.2069 | 1.0 |
| 3.1254 | 14.25 | 5000 | 3.2134 | 1.0 |
| 3.1259 | 15.67 | 5500 | 3.2231 | 1.0 |
| 3.1269 | 17.09 | 6000 | 3.2005 | 1.0 |
| 3.1279 | 18.52 | 6500 | 3.1988 | 1.0 |
| 3.1246 | 19.94 | 7000 | 3.1929 | 1.0 |
| 3.128 | 21.37 | 7500 | 3.1864 | 1.0 |
| 3.1245 | 22.79 | 8000 | 3.1868 | 1.0 |
| 3.1266 | 24.22 | 8500 | 3.1852 | 1.0 |
| 3.1239 | 25.64 | 9000 | 3.1855 | 1.0 |
| 3.125 | 27.07 | 9500 | 3.1917 | 1.0 |
| 3.1233 | 28.49 | 10000 | 3.1929 | 1.0 |
| 3.1229 | 29.91 | 10500 | 3.1922 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jcai1/distilbert-base-uncased-finetuned-imdb | jcai1 | 2022-05-01T15:16:59Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-05-01T15:10:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
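A minimal masked-language-modelling sketch for this domain-adapted checkpoint, assuming the standard `fill-mask` pipeline (the review-style sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="jcai1/distilbert-base-uncased-finetuned-imdb")

# distilbert-base-uncased uses the [MASK] token.
for prediction in unmasker("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```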
|
sameearif88/wav2vec2-base-timit-demo-colab12 | sameearif88 | 2022-05-01T14:25:58Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T12:17:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab12
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4831
- Wer: 0.3546
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 420
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1683 | 3.52 | 500 | 1.3684 | 0.7364 |
| 0.7614 | 7.04 | 1000 | 0.6008 | 0.5218 |
| 0.4721 | 10.56 | 1500 | 0.5319 | 0.4614 |
| 0.3376 | 14.08 | 2000 | 0.5234 | 0.4308 |
| 0.2508 | 17.61 | 2500 | 0.5109 | 0.3998 |
| 0.1978 | 21.13 | 3000 | 0.5037 | 0.3721 |
| 0.1645 | 24.65 | 3500 | 0.4918 | 0.3622 |
| 0.1449 | 28.17 | 4000 | 0.4831 | 0.3546 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab50 | hassnain | 2022-05-01T13:32:25Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T10:57:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab50
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2257
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.4568 | 7.04 | 500 | 3.3002 | 1.0 |
| 3.1795 | 14.08 | 1000 | 3.2170 | 1.0 |
| 3.1607 | 21.13 | 1500 | 3.2119 | 1.0 |
| 3.1537 | 28.17 | 2000 | 3.2257 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab52 | hassnain | 2022-05-01T12:59:06Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T12:14:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab52
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab52
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7941
- Wer: 0.7501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3424 | 7.04 | 500 | 3.3225 | 1.0 |
| 2.518 | 14.08 | 1000 | 1.5884 | 0.8300 |
| 1.0217 | 21.13 | 1500 | 1.6643 | 0.7719 |
| 0.6074 | 28.17 | 2000 | 1.7941 | 0.7501 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab40 | hassnain | 2022-05-01T12:54:20Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T10:36:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab40
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7341
- Wer: 0.5578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0438 | 13.89 | 500 | 3.0671 | 1.0 |
| 1.0734 | 27.78 | 1000 | 0.7341 | 0.5578 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab60 | hassnain | 2022-05-01T12:26:16Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T11:04:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab60
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab60
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1975
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.5799 | 7.04 | 500 | 3.2484 | 1.0 |
| 3.1859 | 14.08 | 1000 | 3.1951 | 1.0 |
| 3.1694 | 21.13 | 1500 | 3.1754 | 1.0 |
| 3.1637 | 28.17 | 2000 | 3.1818 | 1.0 |
| 3.1633 | 35.21 | 2500 | 3.1739 | 1.0 |
| 3.16 | 42.25 | 3000 | 3.2030 | 1.0 |
| 3.1602 | 49.3 | 3500 | 3.1974 | 1.0 |
| 3.1544 | 56.34 | 4000 | 3.1975 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab51 | hassnain | 2022-05-01T11:59:55Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T11:15:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab51
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8395
- Wer: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.481 | 7.04 | 500 | 3.2834 | 1.0 |
| 2.2521 | 14.08 | 1000 | 1.6333 | 0.8093 |
| 0.9467 | 21.13 | 1500 | 1.7458 | 0.7560 |
| 0.5888 | 28.17 | 2000 | 1.8395 | 0.7480 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab11 | sameearif88 | 2022-05-01T11:54:05Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-05-01T11:05:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4922
- Wer: 0.4348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2269 | 3.52 | 500 | 1.1191 | 0.7121 |
| 0.8297 | 7.04 | 1000 | 0.6064 | 0.5228 |
| 0.4988 | 10.56 | 1500 | 0.5057 | 0.4627 |
| 0.3635 | 14.08 | 2000 | 0.4922 | 0.4348 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
obokkkk/mt5-base_2_3
|
obokkkk
| 2022-05-01T11:36:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-30T06:21:11Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base_2_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base_2_3
This model is a fine-tuned version of [obokkkk/mt5-base_2](https://huggingface.co/obokkkk/mt5-base_2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1465
- Bleu: 9.5474
- Gen Len: 17.854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
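The effective batch size of 512 above is the product of the per-device batch size (8) and the gradient-accumulation steps (64). Below is a minimal sketch assuming a standard `Seq2SeqTrainingArguments` setup rather than the author's exact script; `predict_with_generate` and the `output_dir` name are assumptions (the former is included because BLEU and generation length are reported).
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mt5-base_2_3",       # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,   # 8 x 64 accumulation steps = 512 effective batch
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=64,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=42,
    predict_with_generate=True,      # assumption: needed to report BLEU / Gen Len
)
```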
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 175 | 1.1739 | 9.0271 | 17.8543 |
| No log | 2.0 | 350 | 1.1660 | 9.1398 | 17.8468 |
| 1.3653 | 3.0 | 525 | 1.1585 | 9.251 | 17.8656 |
| 1.3653 | 4.0 | 700 | 1.1538 | 9.3176 | 17.8476 |
| 1.3653 | 5.0 | 875 | 1.1518 | 9.3529 | 17.8608 |
| 1.2985 | 6.0 | 1050 | 1.1505 | 9.4818 | 17.8552 |
| 1.2985 | 7.0 | 1225 | 1.1475 | 9.499 | 17.8575 |
| 1.2985 | 8.0 | 1400 | 1.1471 | 9.5511 | 17.871 |
| 1.2632 | 9.0 | 1575 | 1.1459 | 9.5315 | 17.8547 |
| 1.2632 | 10.0 | 1750 | 1.1465 | 9.5474 | 17.854 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sameearif88/wav2vec2-base-timit-demo-colab7
|
sameearif88
| 2022-05-01T11:12:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T10:15:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab7
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6917
- Wer: 0.5426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1854 | 13.89 | 500 | 3.1687 | 1.0 |
| 1.7033 | 27.78 | 1000 | 0.7289 | 0.5659 |
| 0.4208 | 41.67 | 1500 | 0.6917 | 0.5426 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/a_ergt-sausifaktai-suuiluap
|
huggingtweets
| 2022-05-01T11:05:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-01T11:05:49Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1512730099614953472/dyaBioOx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/703268070962372608/sWc1Y_Ch_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/783999503711997952/BHnn3C1Z_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Æ𝚐𝚛𝚝 & Sausi Faktai & Pαulius</div>
<div style="text-align: center; font-size: 14px;">@a_ergt-sausifaktai-suuiluap</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Æ𝚐𝚛𝚝 & Sausi Faktai & Pαulius.
| Data | Æ𝚐𝚛𝚝 | Sausi Faktai | Pαulius |
| --- | --- | --- | --- |
| Tweets downloaded | 3241 | 3194 | 3192 |
| Retweets | 299 | 19 | 811 |
| Short tweets | 977 | 16 | 484 |
| Tweets kept | 1965 | 3159 | 1897 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bn9w1ob/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @a_ergt-sausifaktai-suuiluap's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3txmfh51) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3txmfh51/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/a_ergt-sausifaktai-suuiluap')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sameearif88/wav2vec2-base-timit-demo-colab10
|
sameearif88
| 2022-05-01T11:00:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T09:25:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab10
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4460
- Wer: 0.3425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9891 | 3.52 | 500 | 3.1554 | 1.0 |
| 1.71 | 7.04 | 1000 | 0.7122 | 0.5811 |
| 0.6164 | 10.56 | 1500 | 0.5149 | 0.4880 |
| 0.4188 | 14.08 | 2000 | 0.4726 | 0.4344 |
| 0.3038 | 17.61 | 2500 | 0.4765 | 0.4092 |
| 0.2312 | 21.13 | 3000 | 0.4387 | 0.3765 |
| 0.1867 | 24.65 | 3500 | 0.4411 | 0.3583 |
| 0.1582 | 28.17 | 4000 | 0.4460 | 0.3425 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab11
|
hassnain
| 2022-05-01T10:54:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T09:49:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab11
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6269
- Wer: 0.7418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6439 | 7.04 | 500 | 3.3083 | 1.0 |
| 2.3763 | 14.08 | 1000 | 1.5059 | 0.8146 |
| 1.0161 | 21.13 | 1500 | 1.5101 | 0.7488 |
| 0.6195 | 28.17 | 2000 | 1.6269 | 0.7418 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/umakomptonrose
|
huggingtweets
| 2022-05-01T10:41:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-01T10:40:44Z |
---
language: en
thumbnail: http://www.huggingtweets.com/umakomptonrose/1651401701205/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509685524361105414/-iZ0C4dW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Uma Kompton</div>
<div style="text-align: center; font-size: 14px;">@umakomptonrose</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Uma Kompton.
| Data | Uma Kompton |
| --- | --- |
| Tweets downloaded | 184 |
| Retweets | 9 |
| Short tweets | 22 |
| Tweets kept | 153 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3q3vjpe4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @umakomptonrose's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37a8dws9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37a8dws9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/umakomptonrose')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hassnain/wav2vec2-base-timit-demo-colab7
|
hassnain
| 2022-05-01T09:02:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T07:40:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab7
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1687
- Wer: 0.6478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8409 | 7.04 | 500 | 3.1487 | 1.0 |
| 2.6259 | 14.08 | 1000 | 1.5598 | 0.8730 |
| 1.083 | 21.13 | 1500 | 1.0600 | 0.7347 |
| 0.6061 | 28.17 | 2000 | 1.0697 | 0.7006 |
| 0.4022 | 35.21 | 2500 | 1.0617 | 0.6913 |
| 0.2884 | 42.25 | 3000 | 1.1962 | 0.6768 |
| 0.225 | 49.3 | 3500 | 1.1753 | 0.6567 |
| 0.1852 | 56.34 | 4000 | 1.1687 | 0.6478 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cuzeverynameistaken/wav2vec2-base-timit-demo-colab0
|
cuzeverynameistaken
| 2022-05-01T08:59:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T21:06:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6960
- Wer: 0.5694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3196 | 13.89 | 500 | 3.1225 | 1.0 |
| 1.2756 | 27.78 | 1000 | 0.6960 | 0.5694 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab4
|
sameearif88
| 2022-05-01T08:37:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T07:59:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9149
- Wer: 0.5907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9363 | 13.89 | 500 | 2.7532 | 1.0 |
| 0.9875 | 27.78 | 1000 | 0.9149 | 0.5907 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sherry7144/wav2vec2-base-timit-demo-colab1
|
sherry7144
| 2022-05-01T08:08:05Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T07:01:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0358
- Wer: 0.5729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3217 | 13.89 | 500 | 0.8951 | 0.5834 |
| 0.2263 | 27.78 | 1000 | 1.0358 | 0.5729 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab3
|
sameearif88
| 2022-05-01T07:50:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T07:10:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8480
- Wer: 0.5608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7977 | 13.89 | 500 | 1.6491 | 0.8257 |
| 0.7393 | 27.78 | 1000 | 0.8480 | 0.5608 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
shumail/wav2vec2-base-timit-demo-colab
|
shumail
| 2022-05-01T07:13:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T12:34:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8686
- Wer: 0.6263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0505 | 13.89 | 500 | 3.0760 | 1.0 |
| 1.2748 | 27.78 | 1000 | 0.8686 | 0.6263 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab3
|
hassnain
| 2022-05-01T07:06:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-05-01T00:50:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1016
- Wer: 0.6704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0006 | 13.89 | 500 | 3.0706 | 1.0 |
| 1.8796 | 27.78 | 1000 | 1.1154 | 0.7414 |
| 0.548 | 41.67 | 1500 | 1.0826 | 0.7034 |
| 0.2747 | 55.56 | 2000 | 1.1016 | 0.6704 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab2
|
hassnain
| 2022-05-01T06:45:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T23:44:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2355
- Wer: 0.7320
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.851 | 13.89 | 500 | 3.1260 | 1.0 |
| 1.9721 | 27.78 | 1000 | 1.2435 | 0.7992 |
| 0.5749 | 41.67 | 1500 | 1.1662 | 0.7374 |
| 0.291 | 55.56 | 2000 | 1.2355 | 0.7320 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sameearif88/wav2vec2-base-timit-demo-colab1
|
sameearif88
| 2022-05-01T06:15:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-29T15:31:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7411
- Wer: 0.5600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0773 | 13.89 | 500 | 3.1073 | 1.0 |
| 1.2444 | 27.78 | 1000 | 0.7411 | 0.5600 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
huggingtweets/chubbiverse
|
huggingtweets
| 2022-05-01T05:19:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-01T05:08:43Z |
---
language: en
thumbnail: http://www.huggingtweets.com/chubbiverse/1651382374986/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479680767261229056/JH8LZA4w_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chubbiverse</div>
<div style="text-align: center; font-size: 14px;">@chubbiverse</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chubbiverse.
| Data | Chubbiverse |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 881 |
| Short tweets | 559 |
| Tweets kept | 1780 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ywslmnc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chubbiverse's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34yoo9j7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34yoo9j7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/chubbiverse')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
charlieoneill/distilbert-base-uncased-finetuned-tweet_eval-offensive
|
charlieoneill
| 2022-05-01T03:36:21Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-01T03:22:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-tweet_eval-offensive
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: offensive
metrics:
- name: Accuracy
type: accuracy
value: 0.8089123867069486
- name: F1
type: f1
value: 0.8060281168230459
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-tweet_eval-offensive
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4185
- Accuracy: 0.8089
- F1: 0.8060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
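The Accuracy and F1 columns in the results below are typically produced by a `compute_metrics` callback passed to the Trainer. The sketch below is an assumed reconstruction, not the author's code; the metric backend and the F1 averaging mode are not stated in the card.
```python
import numpy as np
from datasets import load_metric  # Datasets 2.1.0, as listed under framework versions

accuracy_metric = load_metric("accuracy")
f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        # "macro" averaging is an assumption; the card does not state which average was used.
        "f1": f1_metric.compute(predictions=preds, references=labels, average="macro")["f1"],
    }
```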
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 187 | 0.4259 | 0.8059 | 0.7975 |
| 0.46 | 2.0 | 374 | 0.4185 | 0.8089 | 0.8060 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
princeton-nlp/CoFi-MNLI-s95
|
princeton-nlp
| 2022-05-01T01:20:45Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T21:57:29Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
princeton-nlp/CoFi-MNLI-s60
|
princeton-nlp
| 2022-05-01T01:20:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T21:58:04Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset MNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
princeton-nlp/CoFi-QNLI-s60
|
princeton-nlp
| 2022-05-01T01:19:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T21:58:20Z |
This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 60% sparsity on dataset QNLI. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
ChrisZeng/bart-base-detox
|
ChrisZeng
| 2022-05-01T00:01:11Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-30T22:01:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-detox
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-detox
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5633 | 1.0 | 135 | 0.2524 |
| 0.2589 | 2.0 | 270 | 0.2193 |
| 0.2307 | 3.0 | 405 | 0.1993 |
| 0.2171 | 4.0 | 540 | 0.2002 |
| 0.2027 | 5.0 | 675 | 0.1937 |
| 0.1946 | 6.0 | 810 | 0.1972 |
| 0.1874 | 7.0 | 945 | 0.1917 |
| 0.1853 | 8.0 | 1080 | 0.1868 |
| 0.1811 | 9.0 | 1215 | 0.1890 |
| 0.1776 | 10.0 | 1350 | 0.1871 |
| 0.1798 | 11.0 | 1485 | 0.1858 |
| 0.1745 | 12.0 | 1620 | 0.1820 |
| 0.1689 | 13.0 | 1755 | 0.1827 |
| 0.1707 | 14.0 | 1890 | 0.1843 |
| 0.1658 | 15.0 | 2025 | 0.1834 |
| 0.1647 | 16.0 | 2160 | 0.1820 |
| 0.1645 | 17.0 | 2295 | 0.1837 |
| 0.1633 | 18.0 | 2430 | 0.1814 |
| 0.1612 | 19.0 | 2565 | 0.1815 |
| 0.1603 | 20.0 | 2700 | 0.1819 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220429
- Datasets 2.1.0
- Tokenizers 0.10.3
|
Worldman/pegasus-samsum
|
Worldman
| 2022-04-30T23:42:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-30T22:35:12Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7073 | 0.54 | 500 | 1.4841 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tahazakir/wav2vec2-base-timit-demo-colab2
|
tahazakir
| 2022-04-30T22:54:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T20:32:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1899
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 8.0486 | 13.89 | 500 | 3.6570 | 1.0 |
| 3.2905 | 27.78 | 1000 | 3.1899 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
tahazakir/wav2vec2-base-timit-demo-colab1
|
tahazakir
| 2022-04-30T22:47:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T19:13:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1918
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7104 | 13.89 | 500 | 3.2161 | 1.0 |
| 3.1868 | 27.78 | 1000 | 3.1918 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
LiYuan/amazon-review-sentiment-analysis
|
LiYuan
| 2022-04-30T22:03:23Z | 4,927 | 41 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-30T20:37:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment?text=I+like+you.+I+love+you) on an [Amazon US Customer Reviews Dataset](https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset). The code for the fine-tuning process can be found
[here](https://github.com/vanderbilt-data-science/bigdata/blob/main/06-fine-tune-BERT-on-our-dataset.ipynb). This model is uncased: it does
not make a difference between english and English.
It achieves the following results on the evaluation set:
- Loss: 0.5202942490577698
- Accuracy: 0.8
## Model description
This is a bert-base-multilingual-uncased model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish, and Italian. It predicts the sentiment of a review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment-analysis model for product reviews in any of the six languages above, or for further fine-tuning on related sentiment-analysis tasks.
We replaced its classification head and fine-tuned the model on our customer reviews, using 17,280 training rows and a 4,320-row dev set for validation. Finally, we evaluated performance on a held-out test set of 2,400 rows.
## Intended uses & limitations
BERT-base is primarily intended to be fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification, or question answering. This fine-tuned version is used to predict a review's star rating from its text.
The main limitation is that the model was trained only on Amazon product reviews; if you apply it to other domains, it may perform poorly.
## How to use
You can use this model directly by downloading the trained weights and configurations like the below code snippet:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-review-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-review-sentiment-analysis")
```
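Continuing from the snippet above, here is a hypothetical scoring example. It assumes the base model's 1-5 star label mapping is preserved in `config.id2label`; the exact label strings are an assumption.
```python
import torch

review = "Fast shipping, but the item stopped working after two days."
inputs = tokenizer(review, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])  # e.g. "2 stars" (label strings are an assumption)
```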
## Training and evaluation data
Download all the raw [dataset](https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset) from the Kaggle website.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.555400 | 1.0 | 1080 | 0.520294 | 0.800000 |
| 0.424300 | 2.0 | 1080 | 0.549649 | 0.798380 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ChrisZeng/t5-base-detox
|
ChrisZeng
| 2022-04-30T21:53:04Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-30T17:43:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-detox
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-detox
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.337 | 1.0 | 135 | 0.4810 |
| 0.5238 | 2.0 | 270 | 0.3886 |
| 0.4301 | 3.0 | 405 | 0.3378 |
| 0.3755 | 4.0 | 540 | 0.3122 |
| 0.3359 | 5.0 | 675 | 0.2910 |
| 0.3068 | 6.0 | 810 | 0.2737 |
| 0.2861 | 7.0 | 945 | 0.2710 |
| 0.2744 | 8.0 | 1080 | 0.2617 |
| 0.2649 | 9.0 | 1215 | 0.2630 |
| 0.2585 | 10.0 | 1350 | 0.2615 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220429
- Datasets 2.1.0
- Tokenizers 0.10.3
|
hassnain/wav2vec2-base-timit-demo-colab0
|
hassnain
| 2022-04-30T21:39:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T20:59:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab0
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1808
- Wer: 0.7734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8077 | 7.04 | 500 | 3.1554 | 1.0 |
| 2.8549 | 14.08 | 1000 | 2.0683 | 1.0846 |
| 1.3297 | 21.13 | 1500 | 1.2084 | 0.7984 |
| 0.6725 | 28.17 | 2000 | 1.1808 | 0.7734 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
YKXBCi/vit-base-patch16-224-in21k-euroSat
|
YKXBCi
| 2022-04-30T20:19:42Z | 34 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-15T14:35:58Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: YKXBCi/vit-base-patch16-224-in21k-euroSat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# YKXBCi/vit-base-patch16-224-in21k-euroSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0495
- Train Accuracy: 0.9948
- Train Top-3-accuracy: 0.9999
- Validation Loss: 0.0782
- Validation Accuracy: 0.9839
- Validation Top-3-accuracy: 1.0
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3585, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
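An optimizer configuration like the one above (AdamWeightDecay with a 3e-05 → 0 polynomial decay of power 1.0 over 3585 steps, wrapped for dynamic loss scaling) is what `transformers.create_optimizer` produces. The sketch below is an assumed reconstruction, not the author's notebook; the absence of warmup is inferred from the recorded schedule.
```python
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # matches training_precision above

optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=3585,
    num_warmup_steps=0,        # assumption: no warmup appears in the recorded schedule
    weight_decay_rate=0.01,
)
# When the mixed_float16 policy is active, Keras wraps the optimizer in a dynamic
# LossScaleOptimizer at compile time, which accounts for the loss-scale fields above.
```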
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.4593 | 0.9478 | 0.9912 | 0.1558 | 0.9809 | 0.9995 | 0 |
| 0.1008 | 0.9876 | 0.9997 | 0.0855 | 0.9856 | 1.0 | 1 |
| 0.0495 | 0.9948 | 0.9999 | 0.0782 | 0.9839 | 1.0 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jg/distilbert-base-uncased-finetuned-emotion
|
jg
| 2022-04-30T18:34:27Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-30T12:10:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9235933186731068
- name: Accuracy
type: accuracy
value: 0.9235
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- F1: 0.9236
- Accuracy: 0.9235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.8072 | 1.0 | 250 | 0.3153 | 0.9023 | 0.905 |
| 0.2442 | 2.0 | 500 | 0.2199 | 0.9236 | 0.9235 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ParanoidAndroid/bert-finetuned-squad
|
ParanoidAndroid
| 2022-04-30T18:29:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-30T18:16:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
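As a minimal usage sketch (untested), the checkpoint can be queried with the standard question-answering pipeline; the question and context below are made-up examples:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ParanoidAndroid/bert-finetuned-squad")

# Extractive QA: the answer is a span copied out of the context.
result = qa(
    question="What architecture is the model based on?",
    context="This checkpoint fine-tunes bert-base-cased for extractive question answering.",
)
print(result["answer"], result["score"])
```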
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ali221000262/wav2vec2-base-timit-ali-hasan-colab
|
ali221000262
| 2022-04-30T17:36:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T17:04:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-ali-hasan-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-ali-hasan-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2471
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.5485 | 13.89 | 500 | 3.2471 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ningkko/drug-stance-bert
|
ningkko
| 2022-04-30T17:29:17Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-17T21:05:00Z |
---
tags:
- generated_from_trainer
model-index:
- name: drug-stance-bert
results: [1, 0, 2]
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# drug-stance-bert
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on [COVID-CQ](https://github.com/eceveco/COVID-CQ), a dataset that contains 3-label annotated opinions (negative, neutral, and positive) of the tweet initiators regarding the use of Chloroquine or Hydroxychloroquine for the treatment or prevention of the coronavirus.
## Intended uses & limitations
Predict opinions (negative, neutral, and positive) of tweet initiators regarding the use of a drug for the treatment or prevention of the coronavirus. Note that having multiple drug names with different stances in a single tweet can confuse the model.
## Inference & understanding
We followed COVID-CQ to use the following label representation:
- 0 -> None/Neutral;
- 1 -> Against;
- 2 -> Favor
Try these examples:
- The gov's killing people by banning Ivm
- Great news cheers everybody:) ivermectin proven to not work by rct lol
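One minimal way to try these examples is the `transformers` text-classification pipeline (untested sketch; mapping the raw `LABEL_*` ids to stances according to the convention above is an assumption about the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ningkko/drug-stance-bert")

# Assumed mapping, following the COVID-CQ label ids listed above.
id2stance = {"LABEL_0": "none/neutral", "LABEL_1": "against", "LABEL_2": "favor"}

tweets = [
    "The gov's killing people by banning Ivm",
    "Great news cheers everybody:) ivermectin proven to not work by rct lol",
]
for tweet in tweets:
    pred = classifier(tweet)[0]
    print(f"{tweet!r} -> {id2stance.get(pred['label'], pred['label'])} ({pred['score']:.2f})")
```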
## Tutorial
See our Github repo for [inference scripts](https://github.com/ningkko/COVID-drug/blob/main/stance_detection/inference.ipynb)
## Model description
"We developed two COVID-drug-stance RoBERTa-base models by fine-tuning a pre-trained Twitter-specific stance detection model on a stance data set called COVID-CQ. The data were divided into training-dev-test validation datasets with a 70:10:20 ratio. Model I (COVID-drug-stance-BERT) was trained on the original tweet data, and Model II (COVID-drug-stance-BERT-masked) was trained on tweets with drug names masked as “[mask]” for model generalizability on different drugs. The two models had similar performance on the COVID-19 validation set: COVID-drug-stance-BERT had an accuracy of 86.88%, and the masked model had an accuracy of 86.67%. The two models were then evaluated by predicting tweet initiators’ attitudes towards the drug mentioned in each tweet using randomly selected test sets (100 tweets) of each drug (Hydroxychloquine, Ivermectin, Molnupiravir, Remdesivir). As suggested by the evaluation in Table 2, Model I had better performance and was therefore used in this study".
| **Drug** | **Model I: Original Tweet** | | | **Model II: Drug Names Masked** | | |
|------------------------|:---------------------------:|:-----------:|:------------:|:-------------------------------:|:-----------:|:------------:|
| | **Precision** | **Recall** | **F1-Score** | **Precision** | **Recall** | **F1-Score** |
| **Hydroxychloroquine** | 0.93 | 0.92 | **0.92** | 0.84 | 0.83 | 0.83 |
| **Ivermectin** | 0.92 | 0.91 | **0.91** | 0.72 | 0.68 | 0.68 |
| **Molnupiravir** | 0.89 | 0.89 | **0.89** | 0.78 | 0.77 | 0.77 |
| **Remdesivir** | 0.82 | 0.79 | **0.79** | 0.70 | 0.66 | 0.66 |
The model uploaded here is Model I.
## Training and evaluation data
COVID-CQ
## Training procedure
See Github
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.11.0
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Slavka/bert-base-cased-finetuned-log-parser-winlogbeat_nowhitespace_large
|
Slavka
| 2022-04-30T16:29:23Z | 7 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-30T16:23:54Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-cased-finetuned-log-parser-winlogbeat_nowhitespace_large
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-log-parser-winlogbeat_nowhitespace_large
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 15321, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 15321, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-06, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
moaiz237/wav2vec2-base-timit-moaiz_exp2
|
moaiz237
| 2022-04-30T16:23:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T15:41:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_exp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1884
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.15 | 13.89 | 500 | 3.2020 | 1.0 |
| 3.1522 | 27.78 | 1000 | 3.1884 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
moaiz237/wav2vec2-base-timit-moaiz_exp1
|
moaiz237
| 2022-04-30T15:13:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-30T12:17:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-moaiz_exp1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-moaiz_exp1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6910
- Wer: 0.5549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7261 | 13.89 | 500 | 2.4864 | 0.9942 |
| 1.0036 | 27.78 | 1000 | 0.6910 | 0.5549 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Davincilee/door_inner
|
Davincilee
| 2022-04-30T15:07:38Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-04-30T14:47:04Z |
language:
- "List of ISO 639-1 code for your language"
|
huggan/NeonGAN
|
huggan
| 2022-04-30T14:26:58Z | 0 | 5 | null |
[
"gan",
"unconditional image generation",
"huggan",
"style-transfer",
"cyclegan",
"Pytorch",
"unconditional-image-generation",
"arxiv:1703.10593",
"license:mit",
"region:us"
] |
unconditional-image-generation
| 2022-04-24T19:46:41Z |
---
license: mit
tags:
- gan
- unconditional image generation
- huggan
- style-transfer
- cyclegan
- Pytorch
- unconditional-image-generation
---
This model is based on the [CycleGAN](https://arxiv.org/abs/1703.10593) architecture. It takes an input image and generates a futuristic neon version of it. Hope this model neonifies your images.

# Dataset
The model is trained on 256x256 high-contrast neon images as style images, and normal images (including people, scenery, etc.) as base images.
#### Dataset - https://www.kaggle.com/datasets/aanisha07/futuristic-images
# Model
All details on how to use and fine-tune the model are provided on GitHub.
#### Github - https://github.com/Aanisha/NeonGAN
# Spaces Demo
Check out the Spaces demo and try the model yourself.
#### Demo - https://huggingface.co/spaces/huggan/NeonGAN_Demo
Hope you all enjoy it!
|
Muennighoff/t5-small-finetuned-xsum
|
Muennighoff
| 2022-04-30T14:26:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-30T14:15:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4784
- Rouge1: 28.2881
- Rouge2: 7.6834
- Rougel: 22.2163
- Rougelsum: 22.219
- Gen Len: 18.8292
## Model description
More information needed
## Intended uses & limitations
More information needed
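As a rough usage sketch (untested), the checkpoint works with the standard summarization pipeline; the article text below is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Muennighoff/t5-small-finetuned-xsum")

article = (
    "Replace this placeholder with an English news-style paragraph; "
    "XSum-trained models produce a single, highly abstractive sentence."
)
# Short length limits suit the one-sentence XSum style.
print(summarizer(article, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```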
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7184 | 1.0 | 12753 | 2.4784 | 28.2881 | 7.6834 | 22.2163 | 22.219 | 18.8292 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sameearif88/wav2vec2-base-timit-demo-colab
|
sameearif88
| 2022-04-30T13:08:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-26T10:31:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
adielsa/distilbert-base-uncased-finetuned-cola
|
adielsa
| 2022-04-30T12:37:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-30T12:16:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5387376669923544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Matthews Correlation: 0.5387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5257 | 1.0 | 535 | 0.5286 | 0.4093 |
| 0.3447 | 2.0 | 1070 | 0.5061 | 0.4972 |
| 0.2303 | 3.0 | 1605 | 0.5878 | 0.5245 |
| 0.1761 | 4.0 | 2140 | 0.7969 | 0.5153 |
| 0.1346 | 5.0 | 2675 | 0.8256 | 0.5387 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ai4bharat/MultiIndicSentenceSummarizationSS
|
ai4bharat
| 2022-04-30T10:35:01Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"sentence-summarization",
"multilingual",
"nlp",
"indicnlp",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicSentenceSummarization",
"arxiv:2203.05437",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-23T17:54:14Z |
---
tags:
- sentence-summarization
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicSentenceSummarization
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
widget:
- जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। <s> <2hi>
---
# MultiIndicSentenceSummarizationSS
This repository contains the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint finetuned on the 11 languages of the [IndicSentenceSummarization](https://huggingface.co/datasets/ai4bharat/IndicSentenceSummarization) dataset. For finetuning details,
see the [paper](https://arxiv.org/abs/2203.05437).
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for decoding. </li>
<li> Trained on large Indic language corpora (5.53 million sentences). </li>
<li> Unlike <a href="https://huggingface.co/ai4bharat/MultiIndicSentenceSummarization">MultiIndicSentenceSummarization</a>, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari. </li>
</ul>
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicSentenceSummarizationSS")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3, num_beams=5, length_penalty=0.8, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # अनंतनाग में सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादी ढेर
```
## Benchmarks
Scores on the `IndicSentenceSummarization` test sets are as follows:
Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
as | 63.56 / 49.90 / 62.57
bn | 52.52 / 36.15 / 50.60
gu | 47.69 / 29.77 / 45.61
hi | 50.43 / 28.13 / 45.15
kn | 77.06 / 69.36 / 76.33
ml | 65.00 / 51.99 / 63.76
mr | 47.05 / 25.97 / 45.52
or | 50.96 / 30.32 / 49.23
pa | 54.95 / 36.26 / 51.26
ta | 58.52 / 38.36 / 56.49
te | 53.75 / 35.17 / 52.66
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
ai4bharat/MultiIndicSentenceSummarization
|
ai4bharat
| 2022-04-30T10:26:02Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"sentence-summarization",
"multilingual",
"nlp",
"indicnlp",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicSentenceSummarization",
"arxiv:2203.05437",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-23T17:53:36Z |
---
tags:
- sentence-summarization
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicSentenceSummarization
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
widget:
- जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। </s> <2hi>
---
# MultiIndicSentenceSummarization
This repository contains the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint finetuned on the 11 languages of the [IndicSentenceSummarization](https://huggingface.co/datasets/ai4bharat/IndicSentenceSummarization) dataset. For finetuning details,
see the [paper](https://arxiv.org/abs/2203.05437).
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for decoding. </li>
<li> Trained on large Indic language corpora (431K sentences). </li>
<li> All languages have been represented in the Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarization", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicSentenceSummarization", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicSentenceSummarization")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicSentenceSummarization")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("जम्मू एवं कश्मीर के अनंतनाग जिले में शनिवार को सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादियों को मार गिराया गया। </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3, num_beams=5, length_penalty=0.8, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # जम्मू एवं कश्मीरः अनंतनाग में सुरक्षाबलों के साथ मुठभेड़ में दो आतंकवादी ढेर
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
```
# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
## Benchmarks
Scores on the `IndicSentenceSummarization` test sets are as follows:
Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
as | 60.46 / 46.77 / 59.29
bn | 51.12 / 34.91 / 49.29
gu | 47.89 / 29.97 / 45.92
hi | 50.7 / 28.11 / 45.34
kn | 77.93 / 70.03 / 77.32
ml | 67.7 / 54.42 / 66.42
mr | 48.06 / 26.98 / 46.5
or | 45.2 / 23.66 / 43.65
pa | 55.96 / 37.2 / 52.22
ta | 58.85 / 38.97 / 56.83
te | 54.81 / 35.28 / 53.44
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
DrishtiSharma/TEST123
|
DrishtiSharma
| 2022-04-30T10:24:56Z | 0 | 0 | null |
[
"tflite",
"mixtec",
"region:us"
] | null | 2022-04-30T10:11:52Z |
---
tags:
- mixtec
# See a list of available tags here:
# https://coqui.ai/mixtec/jemeyer/v1.0.0#model-details
# task: Speech-to-Text for the Yoloxóchitl Mixtec Language on 16kHz, mono-channel audio
---
# Model card for Yoloxóchitl Mixtec STT
Jump to section:
- [Model details](#model-details)
- [Intended use](#intended-use)
- [Performance Factors](#performance-factors)
- [Metrics](#metrics)
- [Training data](#training-data)
- [Evaluation data](#evaluation-data)
- [Ethical considerations](#ethical-considerations)
- [Caveats and recommendations](#caveats-and-recommendations)
## Model details
- Person or organization developing model: Originally trained by [Joe Meyer](https://www.linkedin.com/in/joe-meyer-25753951/).
- Model language: Yoloxóchitl Mixtec / `xty`
- Model date: April 17, 2022
- Model type: `Speech-to-Text`
- Model version: `v0.1.0`
- Compatible with 🐸 STT version: `v1.0.0`
- License: CC BY-NC-SA 3.0
- Citation details: `@techreport{xty-stt, author = {Meyer,Joe}, title = {Yoloxóchitl Mixtec STT 0.1}, institution = {Coqui}, address = {\url{https://github.com/coqui-ai/STT-models}}, year = {2022}, month = {April}, number = {STT-SLR89-XTY-0.1} }`
- Where to send questions or comments about the model: You can leave an issue on [`STT-model` issues](https://github.com/coqui-ai/STT-models/issues), open a new discussion on [`STT-model` discussions](https://github.com/coqui-ai/STT-models/discussions), or chat with us on [Gitter](https://gitter.im/coqui-ai/).
## Intended use
Speech-to-Text for the [Yoloxóchitl Mixtec Language](https://en.wikipedia.org/wiki/Yolox%C3%B3chitl_Mixtec) on 16kHz, mono-channel audio.
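The sketch below shows one way this might be run with the 🐸 STT Python bindings (`pip install stt`); it is an untested outline, the file names are placeholders, and the exact export format (`.tflite` vs `.pbmm`) depends on which checkpoint you download.
```python
import wave

import numpy as np
from stt import Model  # Coqui STT Python package

# Placeholder paths -- substitute the downloaded checkpoint and your own recording.
model = Model("model.tflite")

with wave.open("recording_16khz_mono.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

# The model expects 16kHz, mono, 16-bit PCM audio.
print(model.stt(audio))
```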
## Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).
## Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
#### Transcription Accuracy
The following Word Error Rates and Character Error Rates are reported for a modified data set from OpenSLR [SLR89](https://www.openslr.org/89/). Rows that failed processing were removed from the official `validated.tsv`, and the data was re-processed with [Common Voice Utils](https://github.com/ftyers/commonvoice-utils) to convert it to 16kHz mono-channel PCM .wav files.
|Test Corpus|WER|CER|
|-----------|---|---|
|OpenSLR|48.85\%|18.04\%|
#### Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.
Recorded average RTF on laptop CPU: ``
#### Model Size
`model.pbmm`: M
`model.tflite`: M
### Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
## Training data
This model was trained on a modified data set from OpenSLR [SLR89](https://www.openslr.org/89/). Rows that failed processing were removed from the official `validated.tsv`, and the data was re-processed with [Common Voice Utils](https://github.com/ftyers/commonvoice-utils) to convert it to 16kHz mono-channel PCM .wav files.
## Evaluation data
This model was evaluated on a modified data set from OpenSLR [SLR89](https://www.openslr.org/89/). Rows that failed processing were removed from the official `validated.tsv`, and the data was re-processed with [Common Voice Utils](https://github.com/ftyers/commonvoice-utils) to convert it to 16kHz mono-channel PCM .wav files.
## Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
### Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
### Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
## Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data [here](https://stt.readthedocs.io/en/latest/DEPLOYMENT.html#how-will-a-model-perform-on-my-data).
In most applications, it is recommended that you [train your own language model](https://stt.readthedocs.io/en/latest/LANGUAGE_MODEL.html) to improve transcription accuracy on your speech data.
|
huggingtweets/itstomrobinson
|
huggingtweets
| 2022-04-30T07:06:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-30T06:45:28Z |
---
language: en
thumbnail: http://www.huggingtweets.com/itstomrobinson/1651302371165/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1388470365723168770/irz46Ykl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tom Robinson</div>
<div style="text-align: center; font-size: 14px;">@itstomrobinson</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tom Robinson.
| Data | Tom Robinson |
| --- | --- |
| Tweets downloaded | 733 |
| Retweets | 40 |
| Short tweets | 52 |
| Tweets kept | 641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bluc7sk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @itstomrobinson's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ryc26oz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/itstomrobinson')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vegetable/test
|
vegetable
| 2022-04-30T02:48:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-28T10:12:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7696078431372549
- name: Recall
type: recall
value: 0.839572192513369
- name: F1
type: f1
value: 0.8030690537084398
- name: Accuracy
type: accuracy
value: 0.8847040737893928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7372
- Precision: 0.7696
- Recall: 0.8396
- F1: 0.8031
- Accuracy: 0.8847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 2 | 1.9496 | 0.0 | 0.0 | 0.0 | 0.4889 |
| No log | 2.0 | 4 | 1.6137 | 0.0 | 0.0 | 0.0 | 0.4919 |
| No log | 3.0 | 6 | 1.3906 | 0.0 | 0.0 | 0.0 | 0.5650 |
| No log | 4.0 | 8 | 1.2273 | 0.0652 | 0.0481 | 0.0554 | 0.6856 |
| No log | 5.0 | 10 | 1.0565 | 0.2051 | 0.1711 | 0.1866 | 0.7125 |
| No log | 6.0 | 12 | 0.9150 | 0.5094 | 0.4332 | 0.4682 | 0.7540 |
| No log | 7.0 | 14 | 0.8051 | 0.5988 | 0.5187 | 0.5559 | 0.7679 |
| No log | 8.0 | 16 | 0.7151 | 0.6707 | 0.5989 | 0.6328 | 0.7763 |
| No log | 9.0 | 18 | 0.6334 | 0.6685 | 0.6364 | 0.6521 | 0.8086 |
| No log | 10.0 | 20 | 0.5693 | 0.6957 | 0.6845 | 0.6900 | 0.8201 |
| No log | 11.0 | 22 | 0.5192 | 0.7166 | 0.7166 | 0.7166 | 0.8363 |
| No log | 12.0 | 24 | 0.4736 | 0.7135 | 0.7326 | 0.7230 | 0.8524 |
| No log | 13.0 | 26 | 0.4448 | 0.6938 | 0.7754 | 0.7323 | 0.8555 |
| No log | 14.0 | 28 | 0.4280 | 0.7177 | 0.8021 | 0.7576 | 0.8586 |
| No log | 15.0 | 30 | 0.4179 | 0.7588 | 0.8075 | 0.7824 | 0.8663 |
| No log | 16.0 | 32 | 0.4214 | 0.7356 | 0.8182 | 0.7747 | 0.8593 |
| No log | 17.0 | 34 | 0.4070 | 0.7391 | 0.8182 | 0.7766 | 0.8616 |
| No log | 18.0 | 36 | 0.4112 | 0.7586 | 0.8235 | 0.7897 | 0.8724 |
| No log | 19.0 | 38 | 0.4530 | 0.7330 | 0.8075 | 0.7684 | 0.8693 |
| No log | 20.0 | 40 | 0.4719 | 0.7766 | 0.8182 | 0.7969 | 0.8732 |
| No log | 21.0 | 42 | 0.4886 | 0.7260 | 0.8075 | 0.7646 | 0.8632 |
| No log | 22.0 | 44 | 0.5007 | 0.7217 | 0.8182 | 0.7669 | 0.8701 |
| No log | 23.0 | 46 | 0.5169 | 0.7321 | 0.8182 | 0.7727 | 0.8762 |
| No log | 24.0 | 48 | 0.5531 | 0.7238 | 0.8128 | 0.7657 | 0.8724 |
| No log | 25.0 | 50 | 0.5895 | 0.7311 | 0.8289 | 0.7769 | 0.8655 |
| No log | 26.0 | 52 | 0.5482 | 0.7330 | 0.8075 | 0.7684 | 0.8778 |
| No log | 27.0 | 54 | 0.5361 | 0.7488 | 0.8128 | 0.7795 | 0.8832 |
| No log | 28.0 | 56 | 0.5378 | 0.7427 | 0.8182 | 0.7786 | 0.8847 |
| No log | 29.0 | 58 | 0.5543 | 0.7371 | 0.8396 | 0.7850 | 0.8824 |
| No log | 30.0 | 60 | 0.5564 | 0.7585 | 0.8396 | 0.7970 | 0.8839 |
| No log | 31.0 | 62 | 0.5829 | 0.7235 | 0.8396 | 0.7772 | 0.8724 |
| No log | 32.0 | 64 | 0.5974 | 0.7269 | 0.8396 | 0.7792 | 0.8716 |
| No log | 33.0 | 66 | 0.5750 | 0.7610 | 0.8342 | 0.7959 | 0.8839 |
| No log | 34.0 | 68 | 0.5887 | 0.7723 | 0.8342 | 0.8021 | 0.8878 |
| No log | 35.0 | 70 | 0.6219 | 0.7441 | 0.8396 | 0.7889 | 0.8747 |
| No log | 36.0 | 72 | 0.6676 | 0.7269 | 0.8396 | 0.7792 | 0.8632 |
| No log | 37.0 | 74 | 0.6517 | 0.7452 | 0.8289 | 0.7848 | 0.8693 |
| No log | 38.0 | 76 | 0.6346 | 0.7828 | 0.8289 | 0.8052 | 0.8862 |
| No log | 39.0 | 78 | 0.6239 | 0.7839 | 0.8342 | 0.8083 | 0.8855 |
| No log | 40.0 | 80 | 0.6360 | 0.7277 | 0.8289 | 0.775 | 0.8762 |
| No log | 41.0 | 82 | 0.6645 | 0.7336 | 0.8396 | 0.7830 | 0.8701 |
| No log | 42.0 | 84 | 0.6611 | 0.7406 | 0.8396 | 0.7870 | 0.8747 |
| No log | 43.0 | 86 | 0.6707 | 0.7488 | 0.8289 | 0.7868 | 0.8762 |
| No log | 44.0 | 88 | 0.6901 | 0.7277 | 0.8289 | 0.775 | 0.8709 |
| No log | 45.0 | 90 | 0.6911 | 0.7393 | 0.8342 | 0.7839 | 0.8709 |
| No log | 46.0 | 92 | 0.6540 | 0.7761 | 0.8342 | 0.8041 | 0.8878 |
| No log | 47.0 | 94 | 0.6381 | 0.7761 | 0.8342 | 0.8041 | 0.8916 |
| No log | 48.0 | 96 | 0.6285 | 0.7745 | 0.8449 | 0.8082 | 0.8885 |
| No log | 49.0 | 98 | 0.6449 | 0.7692 | 0.8556 | 0.8101 | 0.8862 |
| No log | 50.0 | 100 | 0.6809 | 0.7442 | 0.8556 | 0.7960 | 0.8732 |
| No log | 51.0 | 102 | 0.6898 | 0.7395 | 0.8503 | 0.7910 | 0.8716 |
| No log | 52.0 | 104 | 0.6897 | 0.75 | 0.8503 | 0.7970 | 0.8762 |
| No log | 53.0 | 106 | 0.6714 | 0.7656 | 0.8556 | 0.8081 | 0.8855 |
| No log | 54.0 | 108 | 0.6612 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 55.0 | 110 | 0.6583 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 56.0 | 112 | 0.6648 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 57.0 | 114 | 0.6757 | 0.7656 | 0.8556 | 0.8081 | 0.8832 |
| No log | 58.0 | 116 | 0.6803 | 0.7656 | 0.8556 | 0.8081 | 0.8839 |
| No log | 59.0 | 118 | 0.6834 | 0.7692 | 0.8556 | 0.8101 | 0.8862 |
| No log | 60.0 | 120 | 0.6889 | 0.7833 | 0.8503 | 0.8154 | 0.8878 |
| No log | 61.0 | 122 | 0.6963 | 0.7772 | 0.8396 | 0.8072 | 0.8862 |
| No log | 62.0 | 124 | 0.7057 | 0.7772 | 0.8396 | 0.8072 | 0.8862 |
| No log | 63.0 | 126 | 0.7212 | 0.7910 | 0.8503 | 0.8196 | 0.8862 |
| No log | 64.0 | 128 | 0.7334 | 0.7833 | 0.8503 | 0.8154 | 0.8824 |
| No log | 65.0 | 130 | 0.7398 | 0.7833 | 0.8503 | 0.8154 | 0.8801 |
| No log | 66.0 | 132 | 0.7400 | 0.7833 | 0.8503 | 0.8154 | 0.8809 |
| No log | 67.0 | 134 | 0.7345 | 0.7783 | 0.8449 | 0.8103 | 0.8855 |
| No log | 68.0 | 136 | 0.7270 | 0.79 | 0.8449 | 0.8165 | 0.8870 |
| No log | 69.0 | 138 | 0.7245 | 0.7839 | 0.8342 | 0.8083 | 0.8862 |
| No log | 70.0 | 140 | 0.7260 | 0.7868 | 0.8289 | 0.8073 | 0.8847 |
| No log | 71.0 | 142 | 0.7275 | 0.7817 | 0.8235 | 0.8021 | 0.8839 |
| No log | 72.0 | 144 | 0.7283 | 0.7778 | 0.8235 | 0.8000 | 0.8832 |
| No log | 73.0 | 146 | 0.7296 | 0.78 | 0.8342 | 0.8062 | 0.8847 |
| No log | 74.0 | 148 | 0.7344 | 0.7734 | 0.8396 | 0.8051 | 0.8832 |
| No log | 75.0 | 150 | 0.7314 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 76.0 | 152 | 0.7299 | 0.7794 | 0.8503 | 0.8133 | 0.8832 |
| No log | 77.0 | 154 | 0.7282 | 0.7794 | 0.8503 | 0.8133 | 0.8839 |
| No log | 78.0 | 156 | 0.7252 | 0.7783 | 0.8449 | 0.8103 | 0.8839 |
| No log | 79.0 | 158 | 0.7216 | 0.7756 | 0.8503 | 0.8112 | 0.8855 |
| No log | 80.0 | 160 | 0.7194 | 0.7756 | 0.8503 | 0.8112 | 0.8870 |
| No log | 81.0 | 162 | 0.7191 | 0.7756 | 0.8503 | 0.8112 | 0.8878 |
| No log | 82.0 | 164 | 0.7201 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 83.0 | 166 | 0.7211 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 84.0 | 168 | 0.7222 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 85.0 | 170 | 0.7220 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 86.0 | 172 | 0.7239 | 0.7734 | 0.8396 | 0.8051 | 0.8870 |
| No log | 87.0 | 174 | 0.7291 | 0.7772 | 0.8396 | 0.8072 | 0.8847 |
| No log | 88.0 | 176 | 0.7344 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 89.0 | 178 | 0.7373 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 90.0 | 180 | 0.7391 | 0.7707 | 0.8449 | 0.8061 | 0.8832 |
| No log | 91.0 | 182 | 0.7403 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 92.0 | 184 | 0.7412 | 0.7745 | 0.8449 | 0.8082 | 0.8832 |
| No log | 93.0 | 186 | 0.7417 | 0.7707 | 0.8449 | 0.8061 | 0.8832 |
| No log | 94.0 | 188 | 0.7402 | 0.7745 | 0.8449 | 0.8082 | 0.8839 |
| No log | 95.0 | 190 | 0.7389 | 0.7745 | 0.8449 | 0.8082 | 0.8847 |
| No log | 96.0 | 192 | 0.7381 | 0.7696 | 0.8396 | 0.8031 | 0.8839 |
| No log | 97.0 | 194 | 0.7377 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 98.0 | 196 | 0.7374 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 99.0 | 198 | 0.7372 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 100.0 | 200 | 0.7372 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/CoverLetter
|
BigSalmon
| 2022-04-30T01:42:48Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-30T01:36:51Z |
How to do the initial prompt:
captivated by [Enter Company Name]'s
Also trained on: https://huggingface.co/BigSalmon/InformalToFormalLincoln40 (so you can use those prompt outlines, too).
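A minimal generation sketch (untested; the company name in the prompt and the sampling settings are placeholders, not recommendations from the author):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="BigSalmon/CoverLetter")

# Fill in the company name as the prompt outline above suggests.
prompt = "captivated by Hugging Face's"
print(generator(prompt, max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```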
|
tonydiana1/distilroberta-base-finetuned-wikitext2
|
tonydiana1
| 2022-04-30T01:23:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-30T01:01:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8347
## Model description
More information needed
## Intended uses & limitations
More information needed
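As a minimal usage sketch (untested), the checkpoint can be used with the fill-mask pipeline; note that RoBERTa-style models use the `<mask>` token:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="tonydiana1/distilroberta-base-finetuned-wikitext2")

# Prints the top candidate tokens for the masked position with their scores.
for candidate in fill("The capital of France is <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```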
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0853 | 1.0 | 2406 | 1.9214 |
| 1.986 | 2.0 | 4812 | 1.8799 |
| 1.9568 | 3.0 | 7218 | 1.8202 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tonydiana1/distilgpt2-finetuned-wikitext2
|
tonydiana1
| 2022-04-30T01:00:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-30T00:08:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.76 | 1.0 | 2334 | 3.6658 |
| 3.6526 | 2.0 | 4668 | 3.6468 |
| 3.6004 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
zasheza/wav2vec2-base-timit-demo-colab
|
zasheza
| 2022-04-30T00:09:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-27T19:34:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Ahmed9275/ALL-3
|
Ahmed9275
| 2022-04-29T23:42:36Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-29T23:42:24Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL-3
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9291744828224182
---
# ALL-3
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
|
Percival/finetuning-sentiment-model-3000-samples
|
Percival
| 2022-04-29T22:52:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-29T22:34:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
doc2query/msmarco-vietnamese-mt5-base-v1
|
doc2query
| 2022-04-29T22:06:03Z | 18 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"vi",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T22:05:47Z |
---
language: vi
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
license: apache-2.0
---
# doc2query/msmarco-vietnamese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene (a minimal sketch follows this list). The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) shows how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
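As a rough illustration of the document-expansion workflow, the generated queries can simply be appended to each passage before BM25 indexing. This is a minimal sketch; the `generate_queries` helper and the indexing step are assumptions, not part of this repository (see the `create_queries` function under Usage for how the queries can be generated):
```python
# Illustrative sketch of document expansion for BM25 indexing.
# `generate_queries(passage)` is assumed to return a list of query strings,
# e.g. a variant of the create_queries function shown in the Usage section below.

def expand_passage(passage: str, generate_queries) -> str:
    queries = generate_queries(passage)        # typically 20-40 generated queries per passage
    return passage + " " + " ".join(queries)   # index this expanded text with Elasticsearch / OpenSearch / Lucene

# expanded_docs = [expand_passage(p, generate_queries) for p in passages]
```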
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-vietnamese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (phát âm tiếng Anh: /ˈpaɪθɑːn/) là một ngôn ngữ lập trình bậc cao cho các mục đích lập trình đa năng, do Guido van Rossum tạo ra và lần đầu ra mắt vào năm 1991. Python được thiết kế với ưu điểm mạnh là dễ đọc, dễ học và dễ nhớ. Python là ngôn ngữ có hình thức rất sáng sủa, cấu trúc rõ ràng, thuận tiện cho người mới học lập trình và là ngôn ngữ lập trình dễ học; được dùng rộng rãi trong phát triển trí tuệ nhân tạo. Cấu trúc của Python còn cho phép người sử dụng viết mã lệnh với số lần gõ phím tối thiểu. Vào tháng 7 năm 2018, van Rossum đã từ chức lãnh đạo trong cộng đồng ngôn ngữ Python sau 30 năm làm việc."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top-k/top-p sampling, so it produces different queries each time you run it.
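If you need reproducible outputs, you can fix the random seed before sampling. A minimal sketch on top of the Usage example above (seeding is an illustrative addition, not part of this repository):
```python
import torch

torch.manual_seed(42)  # makes top-k/top-p sampling reproducible across runs

input_ids = tokenizer.encode(text, return_tensors='pt')  # tokenizer/text as defined in the Usage example
sampling_outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    top_k=10,
    num_return_sequences=5,
)
```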
## Training
This model was fine-tuned from [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces; output text was generated up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
astrojihye/opus-mt-ko-en-finetuned-ko-to-en4
|
astrojihye
| 2022-04-29T22:02:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T14:09:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en4
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9824
- Bleu: 0.5767
- Gen Len: 13.1529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 512
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 52 | 2.9824 | 0.5767 | 13.1529 |
| No log | 1.99 | 104 | 2.9824 | 0.5767 | 13.1529 |
| No log | 2.99 | 156 | 2.9824 | 0.5767 | 13.1529 |
| No log | 3.99 | 208 | 2.9824 | 0.5767 | 13.1529 |
| No log | 4.99 | 260 | 2.9824 | 0.5767 | 13.1529 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
espnet/arabic_commonvoice_blstm
|
espnet
| 2022-04-29T21:30:20Z | 2 | 1 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"ar",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-04-29T21:28:42Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ar
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/arabic_commonvoice_blstm`
This model was trained by dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/arabic_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Apr 16 17:11:01 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `5e6e95d087af8a7a4c33c4248b75114237eae64b`
- Commit date: `Mon Apr 4 21:04:45 2022 -0400`
## asr_train_asr_rnn_raw_ar_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ar|10388|54204|52.6|44.2|3.2|2.2|49.6|81.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ar|10388|302630|87.9|5.7|6.5|8.1|20.3|81.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ar|10388|231713|82.4|10.1|7.5|9.4|27.0|81.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_raw_ar_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ar_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_ar_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ar_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_ar_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_ar_sp/wav.scp
- speech
- sound
- - dump/raw/train_ar_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_ar/wav.scp
- speech
- sound
- - dump/raw/dev_ar/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- َ
- ا
- ِ
- ْ
- م
- ي
- ل
- ن
- ُ
- ر
- ه
- ▁ال
- ت
- ب
- ع
- ك
- د
- و
- ▁و
- .
- س
- ▁أ
- ق
- ة
- ▁م
- َّ
- ح
- ▁ل
- ف
- ▁ي
- ▁ب
- ▁ف
- ج
- ▁ت
- أ
- ذ
- ▁ع
- ال
- ّ
- ً
- ص
- ▁ك
- ى
- ط
- ض
- خ
- ون
- ش
- ▁ق
- ين
- ز
- ▁أن
- ▁س
- ▁من
- ▁إ
- ث
- ▁ر
- ▁ن
- وا
- ٌ
- ٍ
- ▁ا
- غ
- ▁ح
- اء
- ▁في
- إ
- ان
- ▁ج
- ▁
- ِّ
- ظ
- ▁؟
- ▁ه
- اب
- ▁ش
- ُّ
- ول
- ▁خ
- ار
- ئ
- ▁ص
- ▁سامي
- ▁إن
- ▁لا
- ▁الل
- ▁كان
- يد
- اد
- ائ
- ات
- ؟
- ▁الأ
- ▁د
- ▁إلى
- ير
- ▁غ
- ▁هل
- آ
- ؤ
- ء
- '!'
- ـ
- '"'
- ،
- ','
- ':'
- ی
- ٰ
- '-'
- ک
- ؛
- “
- ”
- T
- '?'
- I
- ;
- E
- O
- G
- »
- A
- L
- U
- F
- ۛ
- —
- S
- M
- D
- «
- N
- ۗ
- _
- ۚ
- H
- ''''
- W
- Y
- چ
- ڨ
- ھ
- ۘ
- ☭
- C
- ۖ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/ar_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ar_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/german_commonvoice_blstm
|
espnet
| 2022-04-29T21:11:06Z | 2 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"de",
"dataset:commonvoice",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-04-05T01:07:06Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: de
datasets:
- commonvoice
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/german_commonvoice_blstm`
This model was trained by dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/german_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Apr 4 16:41:54 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `fa1b865352475b744c37f70440de1cc6b257ba70`
- Commit date: `Wed Feb 16 16:42:36 2022 -0500`
## asr_de_blstm_specaug_num_time_mask_2_lr_0.1
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_de|15341|137512|80.0|18.0|2.0|2.5|22.5|69.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_de|15341|959619|94.6|3.0|2.3|1.5|6.8|69.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.best/test_de|15341|974965|94.7|3.0|2.3|1.5|6.7|69.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_de_blstm_specaug_num_time_mask_2_lr_0.1
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_de_bpe204_sp/train/speech_shape
- exp/asr_stats_raw_de_bpe204_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_de_bpe204_sp/valid/speech_shape
- exp/asr_stats_raw_de_bpe204_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_de_sp/wav.scp
- speech
- sound
- - dump/raw/train_de_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_de/wav.scp
- speech
- sound
- - dump/raw/dev_de/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ▁
- T
- S
- E
- I
- R
- M
- A
- N
- L
- U
- D
- .
- O
- H
- B
- G
- F
- Z
- K
- P
- ü
- W
- ','
- ä
- V
- ö
- J
- '?'
- ß
- '-'
- Y
- C
- '!'
- '"'
- X
- Q
- “
- Ä
- Ö
- ''''
- ':'
- ’
- –
- é
- ;
- í
- á
- ó
- ō
- ã
- š
- »
- «
- ú
- ‘
- ł
- ş
- ă
- ř
- ʻ
- '&'
- à
- ø
- č
- ı
- É
- ý
- â
- ô
- ū
- ñ
- ā
- ë
- ž
- '@'
- /
- ʿ
- ě
- ī
- ”
- ə
- å
- ń
- ′
- æ
- ň
- ś
- ð
- ą
- ė
- Œ
- Ç
- (
- )
- ò
- đ
- î
- '='
- −
- ů
- Ú
- и
- ġ
- а
- ę
- ›
- ṣ
- '`'
- ì
- õ
- ď
- ť
- ả
- —
- ‹
- œ
- ő
- û
- ế
- ф
- р
- о
- м
- е
- в
- С
- Ḫ
- ź
- Î
- Æ
- Ż
- Ś
- ï
- Ó
- Ř
- ğ
- Ł
- İ
- Đ
- Ž
- Ş
- ț
- ê
- Á
- Ō
- ́
- Š
- Č
- ć
- ‚
- ș
- „
- +
- Ø
- μ
- ‐
- $
- '['
- ']'
- ¡
- Â
- Í
- Ô
- ù
- ē
- Ħ
- Ī
- ņ
- ŏ
- ż
- ǐ
- О
- Ш
- к
- ч
- ш
- ་
- ན
- ṟ
- ṭ
- ạ
- ắ
- ễ
- ộ
- ‟
- ≡
- ⟨
- ⟩
- カ
- 临
- 孙
- 尣
- 支
- 無
- 臣
- →
- À
- 道
- Ü
- Þ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/de_token_list/bpe_unigram204/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_de_bpe204_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
zoha/wav2vec2-base-common-voice-fa-demo-colab
|
zoha
| 2022-04-29T21:09:20Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-18T18:58:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-common-voice-fa-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-common-voice-fa-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0558
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.1626 | 0.3 | 100 | 4.0692 | 1.0 |
| 5.1776 | 0.6 | 200 | 3.6640 | 1.0 |
| 3.6628 | 0.9 | 300 | 3.3832 | 1.0 |
| 3.2022 | 1.2 | 400 | 3.3492 | 1.0 |
| 3.1714 | 1.5 | 500 | 3.3215 | 1.0 |
| 3.0689 | 1.8 | 600 | 3.0806 | 1.0 |
| 3.1478 | 2.1 | 700 | 3.0624 | 1.0 |
| 3.1818 | 2.4 | 800 | 3.0777 | 1.0 |
| 3.159 | 2.7 | 900 | 3.0558 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
umarkhalid96/t5-small-trainings
|
umarkhalid96
| 2022-04-29T18:36:13Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-04-29T18:27:40Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-trainings
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-trainings
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2580
- Rouge1: 41.5251
- Rouge2: 19.8842
- Rougel: 36.4895
- Rougelsum: 37.2565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.1338 | 1.0 | 51 | 2.5825 | 35.4169 | 15.379 | 30.8859 | 31.524 |
| 2.5905 | 2.0 | 102 | 2.3975 | 38.4266 | 17.2571 | 33.5912 | 34.312 |
| 2.3881 | 3.0 | 153 | 2.3329 | 39.8082 | 19.1925 | 34.8269 | 35.5295 |
| 2.3167 | 4.0 | 204 | 2.2938 | 41.3488 | 20.1513 | 35.6879 | 36.5864 |
| 2.2357 | 5.0 | 255 | 2.2727 | 41.2457 | 19.5358 | 36.0033 | 36.8405 |
| 2.232 | 6.0 | 306 | 2.2645 | 41.2746 | 20.0345 | 35.9226 | 36.7001 |
| 2.1986 | 7.0 | 357 | 2.2595 | 41.7542 | 19.9428 | 36.6819 | 37.4718 |
| 2.1457 | 8.0 | 408 | 2.2580 | 41.5251 | 19.8842 | 36.4895 | 37.2565 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Nadhiya/distilbert-base-uncased-finetuned-squad
|
Nadhiya
| 2022-04-29T18:20:29Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-24T20:58:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6023
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 54 | 5.8535 |
| No log | 2.0 | 108 | 6.4469 |
| No log | 3.0 | 162 | 6.6023 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aneuraz/awesome-align-with-co
|
aneuraz
| 2022-04-29T16:16:12Z | 1,527 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"sentence alignment",
"de",
"fr",
"en",
"ro",
"zh",
"arxiv:2101.08231",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-29T14:55:54Z |
---
language:
- de
- fr
- en
- ro
- zh
thumbnail:
tags:
- sentence alignment
license: bsd-3-clause
---
# AWESOME: Aligning Word Embedding Spaces of Multilingual Encoders
This model comes from the following GitHub repository: [https://github.com/neulab/awesome-align](https://github.com/neulab/awesome-align)
It corresponds to this paper: [https://arxiv.org/abs/2101.08231](https://arxiv.org/abs/2101.08231)
Please cite the original paper if you decide to use the model:
```
@inproceedings{dou2021word,
title={Word Alignment by Fine-tuning Embeddings on Parallel Corpora},
author={Dou, Zi-Yi and Neubig, Graham},
booktitle={Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
year={2021}
}
```
`awesome-align` is a tool that can extract word alignments from multilingual BERT (mBERT) ([demo](https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing)) and allows you to fine-tune mBERT on parallel corpora for better alignment quality (see the paper for more details).
## Usage (copied from this [DEMO](https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing) )
```python
from transformers import AutoModel, AutoTokenizer
import itertools
import torch
# load model
model = AutoModel.from_pretrained("aneuraz/awesome-align-with-co")
tokenizer = AutoTokenizer.from_pretrained("aneuraz/awesome-align-with-co")
# model parameters
align_layer = 8
threshold = 1e-3
# define inputs
src = 'awesome-align is awesome !'
tgt = '牛对齐 是 牛 !'
# pre-processing
sent_src, sent_tgt = src.strip().split(), tgt.strip().split()
token_src, token_tgt = [tokenizer.tokenize(word) for word in sent_src], [tokenizer.tokenize(word) for word in sent_tgt]
wid_src, wid_tgt = [tokenizer.convert_tokens_to_ids(x) for x in token_src], [tokenizer.convert_tokens_to_ids(x) for x in token_tgt]
ids_src, ids_tgt = tokenizer.prepare_for_model(list(itertools.chain(*wid_src)), return_tensors='pt', model_max_length=tokenizer.model_max_length, truncation=True)['input_ids'], tokenizer.prepare_for_model(list(itertools.chain(*wid_tgt)), return_tensors='pt', truncation=True, model_max_length=tokenizer.model_max_length)['input_ids']
sub2word_map_src = []
for i, word_list in enumerate(token_src):
    sub2word_map_src += [i for x in word_list]

sub2word_map_tgt = []
for i, word_list in enumerate(token_tgt):
    sub2word_map_tgt += [i for x in word_list]

# alignment
align_layer = 8
threshold = 1e-3

model.eval()
with torch.no_grad():
    out_src = model(ids_src.unsqueeze(0), output_hidden_states=True)[2][align_layer][0, 1:-1]
    out_tgt = model(ids_tgt.unsqueeze(0), output_hidden_states=True)[2][align_layer][0, 1:-1]

    dot_prod = torch.matmul(out_src, out_tgt.transpose(-1, -2))

    softmax_srctgt = torch.nn.Softmax(dim=-1)(dot_prod)
    softmax_tgtsrc = torch.nn.Softmax(dim=-2)(dot_prod)

    softmax_inter = (softmax_srctgt > threshold)*(softmax_tgtsrc > threshold)

    align_subwords = torch.nonzero(softmax_inter, as_tuple=False)

align_words = set()
for i, j in align_subwords:
    align_words.add( (sub2word_map_src[i], sub2word_map_tgt[j]) )
print(align_words)
```
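The printed set contains (source word index, target word index) pairs. To view the alignments as actual word pairs, the indices can be mapped back through the original token lists — a small illustrative addition on top of the demo code above:
```python
# Map the aligned index pairs back to the surface words (illustrative, not part of the original demo).
aligned_word_pairs = [(sent_src[i], sent_tgt[j]) for i, j in sorted(align_words)]
print(aligned_word_pairs)  # e.g. [('awesome-align', '牛对齐'), ('is', '是'), ...]
```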
|
huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa
|
huggingtweets
| 2022-04-29T15:09:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-29T07:44:08Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1514648481281056772/ACunKh0I_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484924573032148993/qdB7hbSU_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1341030286386192386/TzEiVCaJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth)</div>
<div style="text-align: center; font-size: 14px;">@cokedupoptions-greg16676935420-parikpatelcfa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth).
| Data | greg | John W. Rich (Fake Tech Exec) | Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3247 | 3250 |
| Retweets | 27 | 202 | 22 |
| Short tweets | 664 | 331 | 719 |
| Tweets kept | 2556 | 2714 | 2509 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/snhk0760/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cokedupoptions-greg16676935420-parikpatelcfa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/iresidwo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/iresidwo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Goud/DziriBERT-summarization-goud
|
Goud
| 2022-04-29T15:06:30Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"summarization",
"dataset:Goud/Goud-sum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-04-20T22:16:15Z |
---
datasets:
- Goud/Goud-sum
language:
- "Moroccan Arabic (MA)"
- "Modern Standard Arabic (MSA)"
metrics:
- rouge
tags:
- summarization
widget:
-
text: "توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت. وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير. ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها. ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة. وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”. وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي. وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا. "
---
This model was introduced in [this paper](https://openreview.net/forum?id=BMVq5MELb9). It is an encoder-decoder model that was initialized from the [DziriBERT](https://huggingface.co/alger-ia/dziribert) checkpoint and fine-tuned for text summarization on the [Goud dataset](https://huggingface.co/datasets/Goud/Goud-sum).
## How to use
Here is how you can use the model:
```python
from transformers import EncoderDecoderModel, BertTokenizer
article = """توصل الاتحاد الأوروبي، في وقت مبكر من اليوم السبت، إلى اتفاق تاريخي يستهدف خطاب الكراهية والمعلومات المضللة والمحتويات الضارة الأخرى الموجودة على شبكة الإنترنيت.
وحسب تقارير صحفية، سيجبر القانون شركات التكنولوجيا الكبرى على مراقبة نفسها بشكل أكثر صرامة، ويسهل على المستخدمين الإبلاغ عن المشاكل، ويمكن الاتفاق المنظمين من معاقبة الشركات غير الممتثلة بغرامات تقدر بالملايير.
ويركز الاتفاق على قواعد جديدة تتطلب من شركات التكنولوجيا العملاقة بذل المزيد من الجهد لمراقبة المحتوى على منصاتها ودفع رسوم للجهات المنظمة التي تراقب مدى امتثالها.
ويعد قانون الخدمات الرقمية الشق الثاني من إستراتيجية المفوضة الأوروبية لشؤون المنافسة، مارغريت فيستاغر، للحد من هيمنة وحدة غوغل التابعة لألفابت، وميتا (فيسبوك سابقا) وغيرهما من شركات التكنولوجيا الأمريكية العملاقة.
وقالت فيستاغر في تغريدة “توصلنا إلى اتفاق بشأن قانون الخدمات الرقمية، موضحة أن القانون سيضمن أن ما يعتبر غير قانوني في حالة عدم الاتصال بالشبكة ينظر إليه أيضا ويتم التعامل معه على أنه غير قانوني عبر الشبكة (الإنترنت) – ليس كشعار (ولكن) كواقع”.
وتواجه الشركات بموجب قانون الخدمات الرقمية غرامات تصل إلى 6 في المائة من إجمالي عملياتها على مستوى العالم لانتهاك القواعد بينما قد تؤدي الانتهاكات المتكررة إلى حظرها من ممارسة أعمالها في الاتحاد الأوروبي.
وأيدت دول الاتحاد والمشرعون الشهر الماضي القواعد التي طرحتها فيستاغر والمسماة قانون الأسواق الرقمية التي قد تجبر غوغل وأمازون وأبل وميتا وميكروسوفت على تغيير ممارساتها الأساسية في أوروبا.
"""
tokenizer = BertTokenizer.from_pretrained("Goud/DziriBERT-summarization-goud")
model = EncoderDecoderModel.from_pretrained("Goud/DziriBERT-summarization-goud")
input_ids = tokenizer(article, return_tensors="pt", truncation=True, padding=True).input_ids
generated = model.generate(input_ids)[0]
output = tokenizer.decode(generated, skip_special_tokens=True)
```
## Citation Information
```
@inproceedings{issam2022goudma,
title={Goud.ma: a News Article Dataset for Summarization in Moroccan Darija},
author={Abderrahmane Issam and Khalil Mrini},
booktitle={3rd Workshop on African Natural Language Processing},
year={2022},
url={https://openreview.net/forum?id=BMVq5MELb9}
}
```
|
gsarti/it5-efficient-small-el32-question-answering
|
gsarti
| 2022-04-29T14:28:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"Italian",
"efficient",
"sequence-to-sequence",
"squad_it",
"text2text-question-answering",
"it",
"dataset:squad_it",
"arxiv:2203.03759",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-28T14:11:55Z |
---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- Italian
- efficient
- sequence-to-sequence
- squad_it
- text2text-question-answering
- text2text-generation
widget:
- text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"
- text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"
- text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"
- text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"
metrics:
- f1
- exact-match
model-index:
- name: it5-efficient-small-el32-question-answering
results:
- task:
type: question-answering
name: "Question Answering"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: f1
value: 0.747
name: "Test F1"
- type: exact-match
value: 0.645
name: "Test Exact Match"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for Question Answering ⁉️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch, and JAX. They can be used directly with the `pipeline` API as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-answering")
```
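For finer control over generation (for example, beam size or output length), the loaded tokenizer and model can also be called directly. A minimal sketch; the placeholder input is illustrative and should be a passage followed by `Domanda: …` as in the widget examples:
```python
# Manual inference with the loaded tokenizer/model (illustrative placeholder input).
question_input = "<passaggio di contesto> Domanda: <la tua domanda>"
inputs = tokenizer(question_input, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```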
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
faisalahmad2/autotrain-nlp-text-summarization-by-faisal-793224456
|
faisalahmad2
| 2022-04-29T14:05:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"en",
"dataset:faisalahmad2/autotrain-data-nlp-text-summarization-by-faisal",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-27T15:03:43Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad2/autotrain-data-nlp-text-summarization-by-faisal
co2_eq_emissions: 27.26671996544415
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 793224456
- CO2 Emissions (in grams): 27.26671996544415
## Validation Metrics
- Loss: 1.5189369916915894
- Rouge1: 38.7852
- Rouge2: 17.0785
- RougeL: 32.1082
- RougeLsum: 32.1103
- Gen Len: 18.7332
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/faisalahmad2/autotrain-nlp-text-summarization-by-faisal-793224456
```
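Alternatively, the model can be loaded directly in Python — a minimal sketch, assuming the standard `transformers` summarization pipeline works for this T5-based checkpoint; the input text is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned summarization checkpoint from the Hub.
summarizer = pipeline(
    "summarization",
    model="faisalahmad2/autotrain-nlp-text-summarization-by-faisal-793224456",
)

print(summarizer("Your long article text goes here ...", max_length=60, min_length=10))
```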
|
huggingtweets/corpsecrusader
|
huggingtweets
| 2022-04-29T13:57:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/corpsecrusader/1651240626010/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1515787050334801925/tyxpMmj1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Corpse Crusader 🫀🇫🇮 gamedev hours🧱🍐💨💪</div>
<div style="text-align: center; font-size: 14px;">@corpsecrusader</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Corpse Crusader 🫀🇫🇮 gamedev hours🧱🍐💨💪.
| Data | Corpse Crusader 🫀🇫🇮 gamedev hours🧱🍐💨💪 |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 405 |
| Short tweets | 658 |
| Tweets kept | 2181 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ogdqtie2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @corpsecrusader's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ecpg08j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ecpg08j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/corpsecrusader')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
umarkhalid96/t5-small-train
|
umarkhalid96
| 2022-04-29T12:36:08Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-04-24T19:52:13Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-train
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2669
- Rouge1: 43.2372
- Rouge2: 21.6755
- Rougel: 38.1637
- Rougelsum: 38.5444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.2032 | 1.0 | 45 | 2.6305 | 34.393 | 15.4821 | 30.3601 | 30.5865 |
| 2.6291 | 2.0 | 90 | 2.4169 | 38.2327 | 18.4622 | 34.2887 | 34.3385 |
| 2.4294 | 3.0 | 135 | 2.3395 | 40.4405 | 19.927 | 36.559 | 36.8095 |
| 2.3191 | 4.0 | 180 | 2.3059 | 41.4214 | 20.4534 | 36.6399 | 36.9088 |
| 2.2949 | 5.0 | 225 | 2.2857 | 42.6906 | 21.1492 | 37.5557 | 37.8722 |
| 2.2591 | 6.0 | 270 | 2.2762 | 43.1598 | 21.6179 | 38.1235 | 38.5053 |
| 2.1722 | 7.0 | 315 | 2.2680 | 43.4447 | 21.8048 | 38.4077 | 38.7384 |
| 2.1993 | 8.0 | 360 | 2.2669 | 43.2372 | 21.6755 | 38.1637 | 38.5444 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BlackSamorez/ebanko-base
|
BlackSamorez
| 2022-04-29T12:29:02Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"PyTorch",
"Transformers",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-28T18:43:43Z |
---
language:
- ru
tags:
- PyTorch
- Transformers
license: apache-2.0
---
# ebanko-base
The model was fine-tuned by [black_samorez](https://github.com/BlackSamorez).
It is based on [sberbank-ai/ruT5-base](https://huggingface.co/sberbank-ai/ruT5-base) and was fine-tuned on the [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022) train set to toxify text.
I recommend using it with **temperature = 1.5** (see the usage sketch after the list below).
* Task: `text2text generation`
* Type: `encoder-decoder`
* Tokenizer: `bpe`
* Dict size: `32 101`
* Num Parameters: `222 M`
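A minimal usage sketch with the recommended temperature (assuming the standard `transformers` seq2seq generation API; the input sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("BlackSamorez/ebanko-base")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackSamorez/ebanko-base")

text = "Мне не понравился этот фильм."  # illustrative input
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Sample with the recommended temperature = 1.5
outputs = model.generate(input_ids, do_sample=True, temperature=1.5, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```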
|
doc2query/msmarco-spanish-mt5-base-v1
|
doc2query
| 2022-04-29T12:11:59Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"es",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T12:11:43Z |
---
language: es
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros.2 Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."
license: apache-2.0
---
# doc2query/msmarco-spanish-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) shows how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-spanish-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros.2 Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic when top_k/top_p sampling is used, so it produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
doc2query/msmarco-russian-mt5-base-v1
|
doc2query
| 2022-04-29T12:10:29Z | 21 | 8 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ru",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T12:10:14Z |
---
language: ru
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
license: apache-2.0
---
# doc2query/msmarco-russian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use it to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. A small pair-generation sketch follows this list.
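As a rough illustration of the training-data-generation use case, the sketch below writes sampled (query, passage) pairs to a JSONL file that a downstream embedding-model trainer could consume. The file name, the toy passages, and the number of queries per passage are illustrative assumptions, not part of the original recipe.
```python
import json
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = 'doc2query/msmarco-russian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

passages = [
    "Python является высокоуровневым языком программирования общего назначения.",
    "Москва является столицей России.",
]

with open("generated_pairs.jsonl", "w", encoding="utf-8") as f_out:
    for passage in passages:
        input_ids = tokenizer.encode(passage, return_tensors='pt')
        with torch.no_grad():
            outputs = model.generate(
                input_ids=input_ids,
                max_length=64,
                do_sample=True,
                top_p=0.95,
                num_return_sequences=3,
            )
        for output in outputs:
            query = tokenizer.decode(output, skip_special_tokens=True)
            # One (query, passage) pair per line, ready for dense-retriever training
            f_out.write(json.dumps({"query": query, "passage": passage}, ensure_ascii=False) + "\n")
```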
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-russian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )
        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )
    print("Paragraph:")
    print(para)
    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
**Note:** `model.generate()` is non-deterministic when top_k/top_p sampling is used, so it produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
doc2query/msmarco-italian-mt5-base-v1
|
doc2query
| 2022-04-29T12:06:16Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"it",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T12:00:49Z |
---
language: it
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python è un linguaggio di programmazione di alto livello, orientato a oggetti, adatto, tra gli altri usi, a sviluppare applicazioni distribuite, scripting, computazione numerica e system testing."
license: apache-2.0
---
# doc2query/msmarco-italian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini. A batched-generation sketch follows this list.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use it to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
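When expanding many passages, it can be more efficient to batch them through the tokenizer instead of encoding them one at a time. The sketch below is an illustrative batched-generation example; the toy passages and the beam settings are assumptions, not part of the original card.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = 'doc2query/msmarco-italian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

passages = [
    "Python è un linguaggio di programmazione di alto livello.",
    "Roma è la capitale d'Italia.",
]

# Tokenize the whole batch at once, padding to the longest passage (max 320 word pieces)
inputs = tokenizer(passages, padding=True, truncation=True, max_length=320, return_tensors='pt')
with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs['input_ids'],
        attention_mask=inputs['attention_mask'],
        max_length=64,
        num_beams=5,
        num_return_sequences=3,
    )

# generate() returns num_return_sequences outputs per input, grouped in input order
queries = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for i, passage in enumerate(passages):
    print(passage)
    for query in queries[i * 3:(i + 1) * 3]:
        print("  ->", query)
```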
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-italian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python è un linguaggio di programmazione di alto livello, orientato a oggetti, adatto, tra gli altri usi, a sviluppare applicazioni distribuite, scripting, computazione numerica e system testing."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )
        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )
    print("Paragraph:")
    print(para)
    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
**Note:** `model.generate()` is non-deterministic when top_k/top_p sampling is used, so it produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
doc2query/msmarco-japanese-mt5-base-v1
|
doc2query
| 2022-04-29T12:05:37Z | 28 | 5 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ja",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T12:05:21Z |
---
language: ja
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python(パイソン)はインタープリタ型の高水準汎用プログラミング言語である。グイド・ヴァン・ロッサムにより創り出され、1991年に最初にリリースされたPythonの設計哲学は、有意なホワイトスペース(オフサイドルール)の顕著な使用によってコードの可読性を重視している。その言語構成とオブジェクト指向のアプローチは、プログラマが小規模なプロジェクトから大規模なプロジェクトまで、明確で論理的なコードを書くのを支援することを目的としている。"
license: apache-2.0
---
# doc2query/msmarco-japanese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini. A small query de-duplication sketch follows this list.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use it to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
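Because sampling can return near-duplicate queries, it can help to normalise and de-duplicate them before appending them to a document. The helper below is a rough sketch; the normalisation rule (lower-casing and whitespace collapsing) and the example strings are illustrative assumptions, not something prescribed by the model.
```python
def dedupe_queries(queries):
    """Drop queries that differ only in case or whitespace (illustrative rule)."""
    seen = set()
    unique = []
    for query in queries:
        key = " ".join(query.lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(query)
    return unique

sampled = ["Pythonとは何ですか", "pythonとは何ですか ", "Pythonは誰が作ったのか"]
print(dedupe_queries(sampled))  # ['Pythonとは何ですか', 'Pythonは誰が作ったのか']
```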
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-japanese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python(パイソン)はインタープリタ型の高水準汎用プログラミング言語である。グイド・ヴァン・ロッサムにより創り出され、1991年に最初にリリースされたPythonの設計哲学は、有意なホワイトスペース(オフサイドルール)の顕著な使用によってコードの可読性を重視している。その言語構成とオブジェクト指向のアプローチは、プログラマが小規模なプロジェクトから大規模なプロジェクトまで、明確で論理的なコードを書くのを支援することを目的としている。"
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )
        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )
    print("Paragraph:")
    print(para)
    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
**Note:** `model.generate()` is non-deterministic when top_k/top_p sampling is used, so it produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
huggan/stylegan_car512
|
huggan
| 2022-04-29T12:01:09Z | 0 | 0 | null |
[
"pytorch",
"gan",
"stylegan",
"huggan",
"unconditional-image-generation",
"license:apache-2.0",
"region:us"
] |
unconditional-image-generation
| 2022-04-18T21:43:45Z |
---
tags:
- gan
- stylegan
- huggan
- unconditional-image-generation
license: apache-2.0
---
The model provided is a StyleGAN generator trained on the Cars dataset at a resolution of 512px. It is uploaded as part of porting https://github.com/genforce/sefa to Hugging Face Spaces.
|
doc2query/msmarco-indonesian-mt5-base-v1
|
doc2query
| 2022-04-29T11:58:59Z | 23 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"id",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T11:58:44Z |
---
language: id
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
license: apache-2.0
---
# doc2query/msmarco-indonesian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use it to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-indonesian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )
        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )
    print("Paragraph:")
    print(para)
    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
**Note:** `model.generate()` is non-deterministic when top_k/top_p sampling is used, so it produces different queries each time you run it.
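If repeatable sampled queries are needed, for example when regenerating an index, the random state can be fixed before sampling. The minimal sketch below assumes the `set_seed` helper from `transformers` and re-uses `create_queries()` and `text` from the Usage snippet above; results are only repeatable on the same software and hardware setup.
```python
from transformers import set_seed

set_seed(42)          # fixes the Python, NumPy and PyTorch RNGs
create_queries(text)  # re-uses create_queries() and text from the Usage snippet above

set_seed(42)
create_queries(text)  # with the same seed (and setup), the sampled queries repeat
```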
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
huggan/pggan-celebahq-1024
|
huggan
| 2022-04-29T11:58:41Z | 0 | 0 | null |
[
"pytorch",
"gan",
"pggan",
"huggan",
"unconditional-image-generation",
"license:apache-2.0",
"region:us"
] |
unconditional-image-generation
| 2022-04-17T19:15:25Z |
---
license: apache-2.0
tags:
- gan
- pggan
- huggan
- unconditional-image-generation
---
The model provided is a PGGAN generator trained on the CelebA-HQ dataset at a resolution of 1024px. It is uploaded as part of porting https://github.com/genforce/sefa to Hugging Face Spaces.
|
doc2query/msmarco-french-mt5-base-v1
|
doc2query
| 2022-04-29T11:53:01Z | 13 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"fr",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T11:52:40Z |
---
language: fr
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (prononcé /pi.tɔ̃/) est un langage de programmation interprété, multi-paradigme et multiplateformes. Il favorise la programmation impérative structurée, fonctionnelle et orientée objet. Il est doté d'un typage dynamique fort, d'une gestion automatique de la mémoire par ramasse-miettes et d'un système de gestion d'exceptions ; il est ainsi similaire à Perl, Ruby, Scheme, Smalltalk et Tcl."
license: apache-2.0
---
# doc2query/msmarco-french-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. The [BEIR repository](https://github.com/beir-cellar/beir) contains an example of how to use docT5query with Pyserini.
- **Domain-Specific Training Data Generation**: The model can be used to generate training data for learning an embedding model. Our [GPL paper](https://arxiv.org/abs/2112.07577) / [GPL example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) show how to use it to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. A short training sketch follows this list.
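As an illustration of the training-data use case, the sketch below fine-tunes a small dense retriever on (query, passage) pairs with `sentence-transformers` and in-batch negatives, in the spirit of, but far simpler than, the full GPL recipe linked above. The base checkpoint, the toy pairs, and the training settings are illustrative assumptions; in practice the pairs would be generated with this doc2query model.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Toy (query, passage) pairs; in practice these would come from this doc2query model
train_pairs = [
    ("qu'est-ce que python", "Python est un langage de programmation interprété."),
    ("capitale de la france", "Paris est la capitale de la France."),
]

train_examples = [InputExample(texts=[query, passage]) for query, passage in train_pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Illustrative base checkpoint; any sentence-transformers model could be used here
model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')
train_loss = losses.MultipleNegativesRankingLoss(model)

# In-batch negatives: each query is trained to rank its own passage above the others
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```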
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-french-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (prononcé /pi.tɔ̃/) est un langage de programmation interprété, multi-paradigme et multiplateformes. Il favorise la programmation impérative structurée, fonctionnelle et orientée objet. Il est doté d'un typage dynamique fort, d'une gestion automatique de la mémoire par ramasse-miettes et d'un système de gestion d'exceptions ; il est ainsi similaire à Perl, Ruby, Scheme, Smalltalk et Tcl."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )
        # Here we use beam search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )
    print("Paragraph:")
    print(para)
    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')
    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

create_queries(text)
```
**Note:** `model.generate()` is non-deterministic when top_k/top_p sampling is used, so it produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces; the output text was generated with up to 64 word pieces.
The model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
norefly/opus-mt-ko-en-finetuned-ko-to-en3
|
norefly
| 2022-04-29T11:48:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-29T04:28:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1864
- Bleu: 0.7037
- Gen Len: 11.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
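The hyperparameters above correspond roughly to the following `Seq2SeqTrainingArguments`. This is a sketch for orientation only; flags not listed above, such as `output_dir` and `predict_with_generate`, are assumptions rather than values taken from the original run.
```python
from transformers import Seq2SeqTrainingArguments

# Rough reconstruction of the listed hyperparameters; output_dir, fp16 and
# predict_with_generate are assumptions, not taken from the original run.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ko-en-finetuned-ko-to-en3",
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=256,  # 8 * 256 = effective train batch size of 2048
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                        # "Native AMP" mixed precision
    predict_with_generate=True,       # needed for BLEU / generation-length metrics
)
```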
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 119 | 4.4541 | 0.0 | 5.0 |
| No log | 1.99 | 238 | 2.4214 | 0.3414 | 16.0 |
| No log | 2.99 | 357 | 2.2158 | 0.3212 | 15.0 |
| No log | 3.99 | 476 | 2.1737 | 0.3283 | 12.0 |
| 3.2958 | 4.99 | 595 | 2.1864 | 0.7037 | 11.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|