| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-11 12:33:28) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-11 12:33:10) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| groar/gpt-neo-1.3B-finetuned-escape | groar | 2022-02-06T18:14:00Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
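For reference, the hyperparameters listed above can be expressed as `transformers.TrainingArguments`; the sketch below is only an illustration (the output directory is a hypothetical placeholder, and the original training script is not part of this card):
```python
from transformers import TrainingArguments

# Illustrative only: the card's hyperparameters expressed as TrainingArguments.
# "output_dir" is a hypothetical placeholder; the card's train_batch_size of 8 is
# assumed to correspond to a per-device batch size of 8 on a single device.
training_args = TrainingArguments(
    output_dir="gpt-neo-1.3B-finetuned-escape",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```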
| keras-io/char-lstm-seq2seq | keras-io | 2022-02-06T18:12:35Z | 9 | 1 | tf-keras | ["tf-keras", "seq2seq", "translation", "en", "fr", "license:cc0-1.0", "region:us"] | translation | 2022-03-02T23:29:05Z |
---
language:
- en
- fr
tags:
- seq2seq
- translation
license:
- cc0-1.0
---
## Keras Implementation of Character-level recurrent sequence-to-sequence model
This repo contains the model and the notebook [to this Keras example on Character-level recurrent sequence-to-sequence model](https://keras.io/examples/nlp/lstm_seq2seq/).
Full credits to: [fchollet](https://twitter.com/fchollet)
## Background Information
This example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.
## Limitations
It works on text of length <= 15 characters
## Parameters needed for using the model
```python
latent_dim = 256
num_encoder_tokens = 71
max_encoder_seq_length = 15
num_decoder_tokens = 92
max_decoder_seq_length = 59
```
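A hedged loading sketch (assuming the repository hosts a saved Keras model compatible with `huggingface_hub.from_pretrained_keras`; building the inference encoder/decoder and the character token indices still follows the linked Keras example):
```python
from huggingface_hub import from_pretrained_keras

# Assumption: the repo contains a saved Keras model that from_pretrained_keras can load.
model = from_pretrained_keras("keras-io/char-lstm-seq2seq")
model.summary()

# Character-level decoding of English into French additionally requires the inference
# setup (separate encoder/decoder models plus the character token indices) described
# in the Keras example linked above.
```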
| asalics/distilbert-base-uncased-finetuned-emotion | asalics | 2022-02-06T14:29:54Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9244145121183605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.924
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914 | 1.0 | 250 | 0.3032 | 0.905 | 0.9030 |
| 0.2379 | 2.0 | 500 | 0.2207 | 0.924 | 0.9244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
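A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` text-classification pipeline (the example sentence is invented):
```python
from transformers import pipeline

# Classify a sentence into one of the six "emotion" dataset labels
# (sadness, joy, love, anger, fear, surprise).
classifier = pipeline(
    "text-classification",
    model="asalics/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how happy this makes me!"))
```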
| Mahalakshmi/wav2vec2-xls-r-300m-demo-colab | Mahalakshmi | 2022-02-06T13:51:42Z | 7 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9475
- eval_wer: 1.0377
- eval_runtime: 70.5646
- eval_samples_per_second: 25.239
- eval_steps_per_second: 3.16
- epoch: 21.05
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
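A hedged inference sketch, assuming the checkpoint works with the standard `transformers` speech-recognition pipeline and 16 kHz mono audio (the file path is a placeholder); note that the eval_wer above is greater than 1.0, so transcriptions from this intermediate checkpoint are likely to be of limited quality:
```python
from transformers import pipeline

# "sample.wav" is a placeholder; Common Voice clips should be resampled to 16 kHz.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mahalakshmi/wav2vec2-xls-r-300m-demo-colab",
)
print(asr("sample.wav")["text"])
```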
| joykirat/bert-base-uncased-finetuned-swag | joykirat | 2022-02-06T11:11:04Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | multiple-choice | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
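A hedged sketch of SWAG-style multiple-choice inference, assuming the checkpoint loads with `AutoModelForMultipleChoice` (the context and candidate endings are invented examples):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "joykirat/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# One context paired with several candidate endings, SWAG-style.
context = "She opened the umbrella because"
endings = ["it started to rain.", "the sun was shining.", "she was hungry."]

# Encode each (context, ending) pair, then add a batch dimension:
# shape becomes (batch=1, num_choices, seq_len).
inputs = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print("Predicted ending:", endings[logits.argmax(dim=-1).item()])
```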
| DrishtiSharma/wav2vec2-xls-r-pa-IN-a1 | DrishtiSharma | 2022-02-05T21:58:25Z | 7 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1508
- Wer: 0.4908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5841 | 9.26 | 500 | 3.2514 | 0.9941 |
| 0.3992 | 18.52 | 1000 | 0.8790 | 0.6107 |
| 0.2409 | 27.78 | 1500 | 1.0012 | 0.6366 |
| 0.1447 | 37.04 | 2000 | 1.0167 | 0.6276 |
| 0.1109 | 46.3 | 2500 | 1.0638 | 0.5653 |
| 0.0797 | 55.56 | 3000 | 1.1447 | 0.5715 |
| 0.0636 | 64.81 | 3500 | 1.1503 | 0.5316 |
| 0.0466 | 74.07 | 4000 | 1.2227 | 0.5386 |
| 0.0372 | 83.33 | 4500 | 1.1214 | 0.5225 |
| 0.0239 | 92.59 | 5000 | 1.1375 | 0.4998 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
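A hedged inference sketch with explicit greedy CTC decoding, assuming 16 kHz mono input and that the repository ships a `Wav2Vec2Processor` (the audio array below is a silent placeholder):
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "DrishtiSharma/wav2vec2-xls-r-pa-IN-a1"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Replace with real 16 kHz mono audio (e.g. loaded via torchaudio or librosa).
speech = [0.0] * 16_000  # placeholder: one second of silence

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits         # (batch, time, vocab)
pred_ids = torch.argmax(logits, dim=-1)     # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])  # Punjabi transcription
```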
| pritamdeka/PubMedBert-fulltext-cord19 | pritamdeka | 2022-02-05T20:56:37Z | 6 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "dataset:pritamdeka/cord-19-fulltext", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- pritamdeka/cord-19-fulltext
metrics:
- accuracy
model-index:
- name: pubmedbert-fulltext-cord19
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: pritamdeka/cord-19-fulltext
type: pritamdeka/cord-19-fulltext
args: fulltext
metrics:
- name: Accuracy
type: accuracy
value: 0.7175316733550737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-fulltext-cord19
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the pritamdeka/cord-19-fulltext dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2667
- Accuracy: 0.7175
## Model description
The model was trained with a maximum train sample size of 300K and an evaluation sample size of 25K due to GPU limitations.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7985 | 0.27 | 5000 | 1.2710 | 0.7176 |
| 1.7542 | 0.53 | 10000 | 1.3359 | 0.7070 |
| 1.7462 | 0.8 | 15000 | 1.3489 | 0.7034 |
| 1.8371 | 1.07 | 20000 | 1.4361 | 0.6891 |
| 1.7102 | 1.33 | 25000 | 1.3502 | 0.7039 |
| 1.6596 | 1.6 | 30000 | 1.3341 | 0.7065 |
| 1.6265 | 1.87 | 35000 | 1.3228 | 0.7087 |
| 1.605 | 2.13 | 40000 | 1.3079 | 0.7099 |
| 1.5731 | 2.4 | 45000 | 1.2986 | 0.7121 |
| 1.5602 | 2.67 | 50000 | 1.2929 | 0.7136 |
| 1.5447 | 2.93 | 55000 | 1.2875 | 0.7143 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
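A minimal fill-mask sketch, assuming the standard `transformers` fill-mask pipeline and the `[MASK]` token of the underlying PubMedBERT tokenizer (the example sentence is invented):
```python
from transformers import pipeline

# Query the masked-language-modeling head trained on CORD-19 full text.
fill = pipeline("fill-mask", model="pritamdeka/PubMedBert-fulltext-cord19")
for pred in fill("The patient was treated with [MASK] for the viral infection."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```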
| huggingtweets/bouncemanautumn | huggingtweets | 2022-02-05T20:35:09Z | 6 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bouncemanautumn/1644093304436/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466500150759763979/_SP07dAh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">autumn wants to hold ty’s hand</div>
<div style="text-align: center; font-size: 14px;">@bouncemanautumn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from autumn wants to hold ty’s hand.
| Data | autumn wants to hold ty’s hand |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 195 |
| Short tweets | 434 |
| Tweets kept | 2616 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16mq5may/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bouncemanautumn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bouncemanautumn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| sunitha/Trial_3_Results | sunitha | 2022-02-05T19:27:23Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Trial_3_Results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial_3_Results
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
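A minimal extractive-QA sketch, assuming the checkpoint works with the standard `transformers` question-answering pipeline (question and context are invented):
```python
from transformers import pipeline

# Extract an answer span from a context paragraph, SQuAD-style.
qa = pipeline("question-answering", model="sunitha/Trial_3_Results")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```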
| infinitejoy/wav2vec2-large-xls-r-300m-odia-cv8 | infinitejoy | 2022-02-05T18:24:20Z | 4 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "or", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-odia-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-odia-cv8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8176
- Wer: 0.5818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.3957 | 20.83 | 500 | 1.0925 | 0.8111 |
| 1.0351 | 41.67 | 1000 | 0.7837 | 0.6574 |
| 0.7396 | 62.5 | 1500 | 0.7674 | 0.6083 |
| 0.5385 | 83.33 | 2000 | 0.8015 | 0.5812 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| transformersbook/xlm-roberta-base-finetuned-panx-de-fr | transformersbook | 2022-02-05T17:08:13Z | 5 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1616
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2855 | 1.0 | 715 | 0.1944 | 0.8178 |
| 0.1485 | 2.0 | 1430 | 0.1679 | 0.8469 |
| 0.0966 | 3.0 | 2145 | 0.1616 | 0.8590 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
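A hedged NER sketch, assuming a reasonably recent `transformers` release that supports `aggregation_strategy` in the token-classification pipeline (the example sentence is invented):
```python
from transformers import pipeline

# German/French named-entity recognition with the PAN-X fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```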
| transformersbook/xlm-roberta-base-finetuned-panx-fr | transformersbook | 2022-02-05T17:07:57Z | 8 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8454790823211876
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2772
- F1: 0.8455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.562 | 1.0 | 191 | 0.3183 | 0.7828 |
| 0.2697 | 2.0 | 382 | 0.2706 | 0.8324 |
| 0.1735 | 3.0 | 573 | 0.2772 | 0.8455 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| transformersbook/xlm-roberta-base-finetuned-panx-de | transformersbook | 2022-02-05T17:07:41Z | 9 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8645910410381922
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1388
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2652 | 1.0 | 525 | 0.1602 | 0.8230 |
| 0.1314 | 2.0 | 1050 | 0.1372 | 0.8527 |
| 0.0806 | 3.0 | 1575 | 0.1388 | 0.8646 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| transformersbook/xlm-roberta-base-finetuned-panx-en | transformersbook | 2022-02-05T17:07:09Z | 17 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.69816564758199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.3676
- F1: 0.6982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.026 | 1.0 | 50 | 0.5734 | 0.4901 |
| 0.4913 | 2.0 | 100 | 0.3870 | 0.6696 |
| 0.3734 | 3.0 | 150 | 0.3676 | 0.6982 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| transformersbook/distilbert-base-uncased-finetuned-clinc | transformersbook | 2022-02-05T16:46:21Z | 100 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2923 | 1.0 | 318 | 3.2893 | 0.7423 |
| 2.6307 | 2.0 | 636 | 1.8837 | 0.8281 |
| 1.5483 | 3.0 | 954 | 1.1583 | 0.8968 |
| 1.0153 | 4.0 | 1272 | 0.8618 | 0.9094 |
| 0.7958 | 5.0 | 1590 | 0.7773 | 0.9174 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.0
- Tokenizers 0.10.3
| transformersbook/bert-base-uncased-finetuned-clinc | transformersbook | 2022-02-05T16:38:54Z | 922 | 3 | transformers | ["transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1909.02027", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
# Intent Detection with BERT
This model was trained on the [CLINC150](https://arxiv.org/abs/1909.02027) dataset for customer intent detection. The dataset can be found on the [Hub](https://huggingface.co/datasets/clinc_oos). The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
| transformersbook/codeparrot-small | transformersbook | 2022-02-05T16:28:36Z | 4 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
# CodeParrot
CodeParrot (small) is a 110M parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
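A hedged code-completion sketch, assuming the standard `transformers` text-generation pipeline (the prompt is an invented example):
```python
from transformers import pipeline

# Complete a Python function with the 110M-parameter CodeParrot model.
generate = pipeline("text-generation", model="transformersbook/codeparrot-small")
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
completion = generate(prompt, max_new_tokens=48, num_return_sequences=1)
print(completion[0]["generated_text"])
```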
| transformersbook/codeparrot | transformersbook | 2022-02-05T16:27:42Z | 18 | 5 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
# CodeParrot
CodeParrot (large) is a 1.5B parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
| groar/distilgpt2-finetuned-escape | groar | 2022-02-05T14:44:47Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-escape
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-escape
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| Ayham/distilbert_distilgpt2_summarization_cnn_dailymail | Ayham | 2022-02-05T11:39:58Z | 7 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: distilbert_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| ajitrajasekharan/biomedical | ajitrajasekharan | 2022-02-05T08:44:05Z | 6 | 1 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
widget:
- text: "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]"
example_title: "Test for entity type: Disease"
- text: "Overexpression of [MASK] occurs across a wide range of cancers"
example_title: "Test for entity type: Gene"
- text: "Patients treated with [MASK] are vulnerable to infectious diseases"
example_title: "Test for entity type: Drug"
- text: "A eGFR level below [MASK] indicates chronic kidney disease"
example_title: "Test for entity type: Measure "
- text: "In the [MASK], increased daily imatinib dose induced MMR"
example_title: "Test for entity type: STUDY/TRIAL"
- text: "Paul Erdos died at [MASK]"
example_title: "Test for entity type: TIME"
inference:
parameters:
top_k: 10
tags:
- fill-mask
- exbert
---
This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
[App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach.
[Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
The ensemble detects 69 entity subtypes (17 broad entity groups)
<img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
### Ensemble model performance
<img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
### Additional notes
- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.
- Some of the top model predictions, such as "a", "the", and punctuation, are valid predictions but bear no entity information. These are filtered out when harvesting descriptors for NER. The examples on the right are unfiltered results.
- [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions
### License
MIT license
<a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| HenryHXR/t5-base-finetuned-scitldr | HenryHXR | 2022-02-05T05:48:10Z | 9 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-scitldr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-scitldr
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0232
- Rouge1: 35.2134
- Rouge2: 16.8919
- Rougel: 30.8442
- Rougelsum: 30.9316
- Gen Len: 18.7981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0533 | 1.0 | 996 | 2.0285 | 34.9774 | 16.6163 | 30.6177 | 30.7038 | 18.7981 |
| 2.0994 | 2.0 | 1992 | 2.0232 | 35.2134 | 16.8919 | 30.8442 | 30.9316 | 18.7981 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
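A hedged summarization sketch, assuming the model follows the usual T5 `summarize:` prefix convention (the card does not state the exact input format, and the abstract below is a placeholder):
```python
from transformers import pipeline

# Generate a short TLDR of a paper abstract with the fine-tuned T5 checkpoint.
tldr = pipeline("text2text-generation", model="HenryHXR/t5-base-finetuned-scitldr")
abstract = "..."  # placeholder: paste a paper abstract here
print(tldr("summarize: " + abstract, max_length=40)[0]["generated_text"])
```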
| huggingtweets/tetranode | huggingtweets | 2022-02-05T00:48:05Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446914192825454592/cGOslAWZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TΞtranodΞ (💎, 💎)</div>
<div style="text-align: center; font-size: 14px;">@tetranode</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TΞtranodΞ (💎, 💎).
| Data | TΞtranodΞ (💎, 💎) |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 929 |
| Short tweets | 629 |
| Tweets kept | 1676 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3remlcqq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tetranode's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3sa798tb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3sa798tb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tetranode')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| huggingartists/andre-3000 | huggingartists | 2022-02-04T22:00:23Z | 7 | 0 | transformers | ["transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/andre-3000", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/andre-3000
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/64b15c9489c65f5bf8f6577334347404.434x434x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">André 3000</div>
<a href="https://genius.com/artists/andre-3000">
<div style="text-align: center; font-size: 14px;">@andre-3000</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from André 3000.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/andre-3000).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/andre-3000")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2hnhboqf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on André 3000's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1mydp6nh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1mydp6nh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/andre-3000')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/andre-3000")
model = AutoModelWithLMHead.from_pretrained("huggingartists/andre-3000")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| hyperion-ml/voxceleb-v1.1-fbank80_stmn_lresnet34_e256_arcs30m0.3_do0_adam_lr0.05_b512.v1 | hyperion-ml | 2022-02-04T21:20:32Z | 5 | 1 | null | ["hyperion", "audio", "speech", "speaker-recognition", "x-vector", "thin-resnet34", "en", "dataset:voxceleb", "license:apache-2.0", "model-index", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- hyperion
- audio
- speech
- speaker-recognition
- x-vector
- thin-resnet34
datasets:
- voxceleb
metrics:
- eer
- min_dcf-p=0.05
- min_dcf-p=0.01
model-index:
- name: voxceleb-v1.1-fbank80_stmn_lresnet34_e256_arcs30m0.3_do0_adam_lr0.05_b512.v1
results:
- task:
type: speaker-verification
name: Speaker Verification
dataset:
type: voxceleb1
name: Voxceleb1
args: Train on VoxCeleb2-dev
metrics:
- type: eer
value: 2.11
name: EER Vox1-O
- type: min_dcf-p=0.05
value: 0.135
name: Minimum DCF Vox1-O prior=0.05
- type: act_dcf-p=0.01
value: 0.208
name: Minimum DCF Vox1-O prior=0.01
- type: eer
value: 1.93
name: EER Vox1-E
- type: min_dcf-p=0.05
value: 0.121
name: Minimum DCF Vox1-E prior=0.05
- type: act_dcf-p=0.01
value: 0.204
name: Minimum DCF Vox1-E Original prior=0.01
- type: eer
value: 3.21
name: EER Vox1-H
- type: min_dcf-p=0.05
value: 0.190
name: Minimum DCF Vox1-H prior=0.05
- type: act_dcf-p=0.01
value: 0.298
name: Minimum DCF Vox1-H Original prior=0.01
---
# Hyperion Toolkit Speaker Verification pre-trained Model
## Model Configuration
This model was trained using recipe [voxceleb/v1.1](https://github.com/hyperion-ml/hyperion/tree/master/egs/voxceleb/v1.1)
The configuration for this model is defined in
[config_fbank80_stmn_lresnet34_arcs30m0.3_adam_lr0.05_amp.v1.sh](https://github.com/hyperion-ml/hyperion/blob/master/egs/voxceleb/v1.1/global_conf/config_fbank80_stmn_lresnet34_arcs30m0.3_adam_lr0.05_amp.v1.sh)
This is an x-vector model with:
- 80 logMel filter-banks with short-time mean normalization.
- ThinResNet34 (aka Light ResNet34) encoder.
- Mean+Stddev pooling
- AAM-softmax loss (m=0.3, s=30)
- Mixed prec. training.
| MarioPenguin/bert-model-english | MarioPenguin | 2022-02-04T20:12:58Z | 6 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-model-english
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-model-english
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1408
- Train Sparse Categorical Accuracy: 0.9512
- Validation Loss: nan
- Validation Sparse Categorical Accuracy: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.2775 | 0.8887 | nan | 0.0 | 0 |
| 0.1702 | 0.9390 | nan | 0.0 | 1 |
| 0.1300 | 0.9555 | nan | 0.0 | 2 |
| 0.1346 | 0.9544 | nan | 0.0 | 3 |
| 0.1408 | 0.9512 | nan | 0.0 | 4 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
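A hedged TensorFlow inference sketch, assuming the repository hosts TF weights loadable with `TFAutoModelForSequenceClassification`; the label mapping is not documented in this card, and given the `nan` validation loss reported above, outputs should be checked carefully:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "MarioPenguin/bert-model-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This movie was absolutely wonderful.", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]
print("Class probabilities:", probs)  # label names are not documented in the card
```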
| LenaSchmidt/distilbert-base-uncased-finetuned-squad | LenaSchmidt | 2022-02-04T19:20:11Z | 11 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0325 | 1.0 | 585 | 1.7520 |
| 1.609 | 2.0 | 1170 | 1.7713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| mrm8488/roberta-base-bne-finetuned-sqac-retriever | mrm8488 | 2022-02-04T17:59:07Z | 4 | 1 | sentence-transformers | ["sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mrm8488/roberta-base-bne-finetuned-sqac-retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 939 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 93,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
dkurt/wav2vec2-base-ft-keyword-spotting-int8
|
dkurt
| 2022-02-04T16:40:37Z | 7 | 2 |
transformers
|
[
"transformers",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
[anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) model quantized with [Optimum OpenVINO](https://github.com/dkurt/optimum-openvino/).
| Accuracy on eval (baseline) | Accuracy on eval (quantized) |
|-----------------------------|----------------------------------------|
| 0.9828 | 0.9553 (-0.0274) |
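For reference, the keyword-spotting task itself can be exercised with the FP32 baseline through the standard `transformers` pipeline, as in the sketch below; loading the quantized OpenVINO IR from this repository is assumed to require the Optimum OpenVINO loader instead, so treat this as an illustration of the task rather than of the quantized runtime.
```python
# Sketch: keyword spotting with the FP32 baseline via the transformers pipeline.
# The quantized weights in this repo are assumed to need the Optimum OpenVINO loader.
from transformers import pipeline

clf = pipeline("audio-classification", model="anton-l/wav2vec2-base-ft-keyword-spotting")
print(clf("sample.wav", top_k=3))  # "sample.wav" is a hypothetical local audio file
```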
|
groar/distilgpt2-finetuned-wikitext2
|
groar
| 2022-02-04T16:27:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7852 | 1.0 | 2334 | 3.6895 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
shreyasgite/wav2vec2-large-xls-r-300m-dm32
|
shreyasgite
| 2022-02-04T14:53:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xls-r-300m-dm32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dm32
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5688
- Accuracy: 0.7917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 22
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 2.41 | 34 | 0.6769 | 0.6458 |
| No log | 4.83 | 68 | 0.6864 | 0.5208 |
| No log | 7.28 | 102 | 0.6596 | 0.6042 |
| 0.7106 | 9.69 | 136 | 0.6208 | 0.6875 |
| 0.7106 | 12.14 | 170 | 0.6152 | 0.6875 |
| 0.7106 | 14.55 | 204 | 0.6167 | 0.6875 |
| 0.6464 | 16.97 | 238 | 0.5782 | 0.7708 |
| 0.6464 | 19.41 | 272 | 0.6011 | 0.7292 |
| 0.6464 | 21.83 | 306 | 0.5688 | 0.7917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cahya/wav2vec2-base-turkish-cv8
|
cahya
| 2022-02-04T14:30:19Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-1000](https://huggingface.co/./checkpoint-1000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3282
- Wer: 0.2836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0671 | 2.04 | 200 | 0.3079 | 0.2752 |
| 0.6433 | 4.08 | 400 | 0.2728 | 0.2848 |
| 0.5687 | 6.12 | 600 | 0.2882 | 0.3036 |
| 0.5355 | 8.16 | 800 | 0.2778 | 0.2920 |
| 0.5116 | 10.2 | 1000 | 0.2906 | 0.3014 |
| 0.5313 | 9.16 | 1200 | 0.2984 | 0.3273 |
| 0.4996 | 10.69 | 1400 | 0.3170 | 0.3344 |
| 0.4845 | 12.21 | 1600 | 0.3202 | 0.3634 |
| 0.5092 | 13.74 | 1800 | 0.3167 | 0.3373 |
| 0.4777 | 15.27 | 2000 | 0.3292 | 0.3386 |
| 0.4651 | 16.79 | 2200 | 0.3070 | 0.3427 |
| 0.461 | 18.32 | 2400 | 0.3149 | 0.3561 |
| 0.4481 | 19.85 | 2600 | 0.3292 | 0.3441 |
| 0.4479 | 21.37 | 2800 | 0.3142 | 0.3209 |
| 0.4305 | 22.9 | 3000 | 0.3525 | 0.3547 |
| 0.4254 | 24.43 | 3200 | 0.3414 | 0.3400 |
| 0.4066 | 25.95 | 3400 | 0.3118 | 0.3207 |
| 0.4043 | 27.48 | 3600 | 0.3418 | 0.3483 |
| 0.3985 | 29.01 | 3800 | 0.3254 | 0.3166 |
| 0.3982 | 30.53 | 4000 | 0.3306 | 0.3453 |
| 0.3929 | 32.06 | 4200 | 0.3262 | 0.3229 |
| 0.378 | 33.59 | 4400 | 0.3546 | 0.3336 |
| 0.4062 | 35.11 | 4600 | 0.3174 | 0.3457 |
| 0.3648 | 36.64 | 4800 | 0.3377 | 0.3357 |
| 0.3609 | 38.17 | 5000 | 0.3346 | 0.3520 |
| 0.3483 | 39.69 | 5200 | 0.3350 | 0.3526 |
| 0.3548 | 41.22 | 5400 | 0.3330 | 0.3406 |
| 0.3446 | 42.75 | 5600 | 0.3398 | 0.3372 |
| 0.3346 | 44.27 | 5800 | 0.3449 | 0.3288 |
| 0.3309 | 45.8 | 6000 | 0.3320 | 0.3144 |
| 0.326 | 47.33 | 6200 | 0.3400 | 0.3279 |
| 0.3189 | 48.85 | 6400 | 0.3400 | 0.3150 |
| 0.3165 | 50.38 | 6600 | 0.3359 | 0.2995 |
| 0.3132 | 51.91 | 6800 | 0.3343 | 0.3096 |
| 0.3092 | 53.44 | 7000 | 0.3224 | 0.3029 |
| 0.2995 | 54.96 | 7200 | 0.3205 | 0.2985 |
| 0.304 | 56.49 | 7400 | 0.3523 | 0.3034 |
| 0.2952 | 58.02 | 7600 | 0.3289 | 0.2934 |
| 0.2875 | 59.54 | 7800 | 0.3350 | 0.3008 |
| 0.2868 | 61.07 | 8000 | 0.3537 | 0.3227 |
| 0.2875 | 62.6 | 8200 | 0.3389 | 0.2970 |
| 0.2778 | 64.12 | 8400 | 0.3370 | 0.2960 |
| 0.2706 | 65.65 | 8600 | 0.3250 | 0.2802 |
| 0.2669 | 67.18 | 8800 | 0.3351 | 0.2903 |
| 0.2615 | 68.7 | 9000 | 0.3382 | 0.2989 |
| 0.2563 | 70.23 | 9200 | 0.3312 | 0.2975 |
| 0.2546 | 71.76 | 9400 | 0.3212 | 0.3003 |
| 0.2482 | 73.28 | 9600 | 0.3337 | 0.3091 |
| 0.2504 | 74.81 | 9800 | 0.3308 | 0.3110 |
| 0.2456 | 76.34 | 10000 | 0.3157 | 0.3118 |
| 0.2363 | 77.86 | 10200 | 0.3251 | 0.3144 |
| 0.2319 | 79.39 | 10400 | 0.3253 | 0.3038 |
| 0.2266 | 80.92 | 10600 | 0.3374 | 0.3038 |
| 0.2279 | 82.44 | 10800 | 0.3268 | 0.2964 |
| 0.2231 | 83.97 | 11000 | 0.3278 | 0.2950 |
| 0.2185 | 85.5 | 11200 | 0.3462 | 0.2981 |
| 0.2245 | 87.02 | 11400 | 0.3311 | 0.2895 |
| 0.223 | 88.55 | 11600 | 0.3325 | 0.2877 |
| 0.2121 | 90.08 | 11800 | 0.3337 | 0.2828 |
| 0.2126 | 91.6 | 12000 | 0.3325 | 0.2808 |
| 0.2027 | 93.13 | 12200 | 0.3277 | 0.2820 |
| 0.2058 | 94.66 | 12400 | 0.3308 | 0.2827 |
| 0.1991 | 96.18 | 12600 | 0.3279 | 0.2820 |
| 0.1991 | 97.71 | 12800 | 0.3300 | 0.2822 |
| 0.1986 | 99.24 | 13000 | 0.3285 | 0.2835 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
abhishek/autonlp-imdb-roberta-base-3662644
|
abhishek
| 2022-02-04T14:25:35Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-imdb-roberta-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-imdb-roberta-base
co2_eq_emissions: 25.894117734124272
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 3662644
- CO2 Emissions (in grams): 25.894117734124272
## Validation Metrics
- Loss: 0.20277436077594757
- Accuracy: 0.92604
- Precision: 0.9560674830864092
- Recall: 0.89312
- AUC: 0.9814625504000001
- F1: 0.9235223559581421
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
Language-Media-Lab/mt5-small-jpn-ain-mt
|
Language-Media-Lab
| 2022-02-04T14:23:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-jpn-ain-mt is a machine translation model pretrained with [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Japanese into the Ainu language.
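Below is a minimal usage sketch, assuming the standard Hugging Face seq2seq interface for mT5 and no task prefix; the example sentence and generation settings are illustrative only.
```python
# Minimal translation sketch (assumption: plain Japanese input, no task prefix).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Language-Media-Lab/mt5-small-jpn-ain-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "こんにちは。"  # Japanese input to be translated into Ainu
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```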
|
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1
|
Subhashini17
| 2022-02-04T11:14:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab-new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
yohida/yoshida_gpt
|
yohida
| 2022-02-04T10:13:45Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt2",
"text-generation",
"ja",
"japanese",
"gpt",
"lm",
"nlp",
"dataset:cc100",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- japanese
- gpt
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
widget:
- text: "西田幾多郎は、"
---
# japanese-gpt-1b

This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to use the model
*NOTE:* Use `T5Tokenizer` to initialize the tokenizer.
```python
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b")
if torch.cuda.is_available():
model = model.to("cuda")
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_length=100,
min_length=100,
do_sample=True,
top_k=500,
top_p=0.95,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
bad_words_ids=[[tokenizer.unk_token_id]]
)
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
# sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの
```
# Model architecture
A 24-layer, 2048-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
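As a quick illustration of the tokenizer (the exact subword split depends on the trained sentencepiece vocabulary, so the output is only indicative):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
print(tokenizer.tokenize("西田幾多郎は、"))  # prints the sentencepiece subwords for the prompt
```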
# License
[The MIT license](https://opensource.org/licenses/MIT)
|
huggingtweets/dril-drilbot_neo-jril_bot
|
huggingtweets
| 2022-02-04T09:52:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dril-drilbot_neo-jril_bot/1643968320729/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468502340634296326/gbl8-ltv_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1374924360780242944/-Q8NfgEr_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Jril & wintbot_neo</div>
<div style="text-align: center; font-size: 14px;">@dril-drilbot_neo-jril_bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Jril & wintbot_neo.
| Data | wint | Jril | wintbot_neo |
| --- | --- | --- | --- |
| Tweets downloaded | 3228 | 113 | 3241 |
| Retweets | 475 | 0 | 315 |
| Short tweets | 305 | 0 | 453 |
| Tweets kept | 2448 | 113 | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27nmrlyy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-drilbot_neo-jril_bot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/i64hq9wb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-drilbot_neo-jril_bot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dril-heroicvillain95
|
huggingtweets
| 2022-02-04T08:49:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1402535431523217411/h07KN7VS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & casually Jesse</div>
<div style="text-align: center; font-size: 14px;">@dril-heroicvillain95</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & casually Jesse.
| Data | wint | casually Jesse |
| --- | --- | --- |
| Tweets downloaded | 3228 | 2663 |
| Retweets | 475 | 133 |
| Short tweets | 305 | 353 |
| Tweets kept | 2448 | 2177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3u36b2x8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-heroicvillain95's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-heroicvillain95')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-04T06:33:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlnet_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jonc/distilbert-base-uncased-finetuned-emotion
|
jonc
| 2022-02-04T06:15:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230733583303665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2159
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8494 | 1.0 | 250 | 0.3134 | 0.907 | 0.9051 |
| 0.2504 | 2.0 | 500 | 0.2159 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Mapcar/pegasus-samsum
|
Mapcar
| 2022-02-04T03:27:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6936 | 0.54 | 500 | 1.4844 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SkelterLabsInc/bert-base-japanese-jaquad
|
SkelterLabsInc
| 2022-02-04T02:39:25Z | 87 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"extractive-qa",
"ja",
"dataset:SkelterLabsInc/JaQuAD",
"arxiv:2202.01764",
"license:cc-by-sa-3.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-3.0
language: ja
tags:
- question-answering
- extractive-qa
pipeline_tag: question-answering
datasets:
- SkelterLabsInc/JaQuAD
metrics:
- Exact match
- F1 score
---
# BERT base Japanese - JaQuAD
## Description
A Japanese Question Answering model fine-tuned on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD).
Please refer to [BERT base Japanese](https://huggingface.co/cl-tohoku/bert-base-japanese) for details about the pre-trained model.
The code for fine-tuning is available at [SkelterLabsInc/JaQuAD](https://github.com/SkelterLabsInc/JaQuAD).
## Evaluation results
On the development set.
```shell
{"f1": 77.35, "exact_match": 61.01}
```
On the test set.
```shell
{"f1": 78.92, "exact_match": 63.38}
```
## Usage
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
question = 'アレクサンダー・グラハム・ベルは、どこで生まれたの?'
context = 'アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。世界初の実用的電話の発明で知られている。'
model = AutoModelForQuestionAnswering.from_pretrained(
'SkelterLabsInc/bert-base-japanese-jaquad')
tokenizer = AutoTokenizer.from_pretrained(
'SkelterLabsInc/bert-base-japanese-jaquad')
inputs = tokenizer(
question, context, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# Get the most likely beginning of answer with the argmax of the score.
answer_start = torch.argmax(answer_start_scores)
# Get the most likely end of answer with the argmax of the score.
# 1 is added to `answer_end` because the index pointed by score is inclusive.
answer_end = torch.argmax(answer_end_scores) + 1
answer = tokenizer.convert_tokens_to_string(
tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
# answer = 'スコットランド'
```
## License
The fine-tuned model is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
## Citation
```bibtex
@misc{so2022jaquad,
title={{JaQuAD: Japanese Question Answering Dataset for Machine Reading Comprehension}},
author={ByungHoon So and Kyuhong Byun and Kyungwon Kang and Seongjin Cho},
year={2022},
eprint={2202.01764},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
pritamdeka/PubMedBert-abstract-cord19
|
pritamdeka
| 2022-02-03T23:18:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:pritamdeka/cord-19-abstract",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- pritamdeka/cord-19-abstract
model-index:
- name: PubMedBert-abstract-cord19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-abstract-cord19
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [pritamdeka/cord-19-abstract](https://huggingface.co/datasets/pritamdeka/cord-19-abstract) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.3774 | 0.15 | 5000 | 1.3212 |
| 1.3937 | 0.29 | 10000 | 1.4059 |
| 1.6812 | 0.44 | 15000 | 1.6174 |
| 1.4712 | 0.59 | 20000 | 1.4383 |
| 1.4293 | 0.73 | 25000 | 1.4356 |
| 1.4155 | 0.88 | 30000 | 1.4283 |
| 1.3963 | 1.03 | 35000 | 1.4135 |
| 1.3718 | 1.18 | 40000 | 1.3948 |
| 1.369 | 1.32 | 45000 | 1.3961 |
| 1.354 | 1.47 | 50000 | 1.3788 |
| 1.3399 | 1.62 | 55000 | 1.3866 |
| 1.3289 | 1.76 | 60000 | 1.3630 |
| 1.3155 | 1.91 | 65000 | 1.3609 |
| 1.2976 | 2.06 | 70000 | 1.3489 |
| 1.2783 | 2.2 | 75000 | 1.3333 |
| 1.2696 | 2.35 | 80000 | 1.3260 |
| 1.2607 | 2.5 | 85000 | 1.3232 |
| 1.2547 | 2.64 | 90000 | 1.3034 |
| 1.2495 | 2.79 | 95000 | 1.3035 |
| 1.2404 | 2.94 | 100000 | 1.3029 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
am-shb/bert-base-multilingual-cased-finetuned
|
am-shb
| 2022-02-03T21:59:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: '57426955'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57426955
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
AVSilva/bertimbau-large-fine-tuned-md
|
AVSilva
| 2022-02-03T17:19:02Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ArBert/roberta-base-finetuned-ner
|
ArBert
| 2022-02-03T16:42:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0738
- Precision: 0.9232
- Recall: 0.9437
- F1: 0.9333
- Accuracy: 0.9825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1397 | 1.0 | 1368 | 0.0957 | 0.9141 | 0.9048 | 0.9094 | 0.9753 |
| 0.0793 | 2.0 | 2736 | 0.0728 | 0.9274 | 0.9324 | 0.9299 | 0.9811 |
| 0.0499 | 3.0 | 4104 | 0.0738 | 0.9232 | 0.9437 | 0.9333 | 0.9825 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small-v3
|
tomascufaro
| 2022-02-03T15:57:54Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"robust-speech-event",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- "es"
- "robust-speech-event"
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small-v3
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Wer: 0.1980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2372 | 0.26 | 400 | 0.3011 | 0.2660 |
| 0.3413 | 0.53 | 800 | 0.3559 | 0.3228 |
| 0.3598 | 0.79 | 1200 | 0.3753 | 0.3400 |
| 0.3529 | 1.05 | 1600 | 0.3385 | 0.2979 |
| 0.3133 | 1.32 | 2000 | 0.3559 | 0.3056 |
| 0.3158 | 1.58 | 2400 | 0.3364 | 0.2994 |
| 0.3092 | 1.85 | 2800 | 0.3210 | 0.2876 |
| 0.2919 | 2.11 | 3200 | 0.3460 | 0.3010 |
| 0.2666 | 2.37 | 3600 | 0.3543 | 0.3036 |
| 0.2819 | 2.64 | 4000 | 0.3477 | 0.2959 |
| 0.283 | 2.9 | 4400 | 0.3492 | 0.2968 |
| 0.2484 | 3.16 | 4800 | 0.3647 | 0.2993 |
| 0.2371 | 3.43 | 5200 | 0.3601 | 0.2942 |
| 0.2382 | 3.69 | 5600 | 0.3656 | 0.3019 |
| 0.2425 | 3.96 | 6000 | 0.3379 | 0.2873 |
| 0.2092 | 4.22 | 6400 | 0.3385 | 0.2736 |
| 0.2171 | 4.48 | 6800 | 0.3503 | 0.2889 |
| 0.2185 | 4.75 | 7200 | 0.3289 | 0.2727 |
| 0.2236 | 5.01 | 7600 | 0.3447 | 0.2771 |
| 0.1882 | 5.27 | 8000 | 0.3586 | 0.2860 |
| 0.1986 | 5.54 | 8400 | 0.3404 | 0.2829 |
| 0.2055 | 5.8 | 8800 | 0.3561 | 0.2869 |
| 0.196 | 6.06 | 9200 | 0.3633 | 0.2811 |
| 0.1748 | 6.33 | 9600 | 0.3703 | 0.2818 |
| 0.1758 | 6.59 | 10000 | 0.3525 | 0.2816 |
| 0.1819 | 6.86 | 10400 | 0.3581 | 0.2765 |
| 0.1715 | 7.12 | 10800 | 0.3480 | 0.2628 |
| 0.1606 | 7.38 | 11200 | 0.3490 | 0.2703 |
| 0.1632 | 7.65 | 11600 | 0.3461 | 0.2706 |
| 0.1638 | 7.91 | 12000 | 0.3458 | 0.2673 |
| 0.1552 | 8.17 | 12400 | 0.3646 | 0.2732 |
| 0.154 | 8.44 | 12800 | 0.3706 | 0.2726 |
| 0.1512 | 8.7 | 13200 | 0.3609 | 0.2683 |
| 0.149 | 8.97 | 13600 | 0.3610 | 0.2668 |
| 0.1357 | 9.23 | 14000 | 0.3693 | 0.2740 |
| 0.1375 | 9.49 | 14400 | 0.3677 | 0.2625 |
| 0.1391 | 9.76 | 14800 | 0.3795 | 0.2762 |
| 0.1378 | 10.02 | 15200 | 0.3541 | 0.2592 |
| 0.1197 | 10.28 | 15600 | 0.3562 | 0.2507 |
| 0.1259 | 10.55 | 16000 | 0.3612 | 0.2584 |
| 0.1266 | 10.81 | 16400 | 0.3470 | 0.2527 |
| 0.1199 | 11.07 | 16800 | 0.3721 | 0.2571 |
| 0.1157 | 11.34 | 17200 | 0.3734 | 0.2571 |
| 0.1107 | 11.6 | 17600 | 0.3730 | 0.2589 |
| 0.1148 | 11.87 | 18000 | 0.3648 | 0.2536 |
| 0.1095 | 12.13 | 18400 | 0.3746 | 0.2521 |
| 0.1047 | 12.39 | 18800 | 0.3566 | 0.2530 |
| 0.1043 | 12.66 | 19200 | 0.3794 | 0.2545 |
| 0.1066 | 12.92 | 19600 | 0.3548 | 0.2439 |
| 0.0974 | 13.18 | 20000 | 0.3702 | 0.2461 |
| 0.0978 | 13.45 | 20400 | 0.3721 | 0.2492 |
| 0.095 | 13.71 | 20800 | 0.3599 | 0.2467 |
| 0.0963 | 13.97 | 21200 | 0.3650 | 0.2402 |
| 0.0902 | 14.24 | 21600 | 0.3689 | 0.2459 |
| 0.0898 | 14.5 | 22000 | 0.3832 | 0.2452 |
| 0.0865 | 14.77 | 22400 | 0.3982 | 0.2436 |
| 0.0911 | 15.03 | 22800 | 0.3785 | 0.2398 |
| 0.0793 | 15.29 | 23200 | 0.3731 | 0.2396 |
| 0.0806 | 15.56 | 23600 | 0.3626 | 0.2372 |
| 0.0789 | 15.82 | 24000 | 0.3707 | 0.2356 |
| 0.0779 | 16.08 | 24400 | 0.3850 | 0.2368 |
| 0.078 | 16.35 | 24800 | 0.3831 | 0.2363 |
| 0.0732 | 16.61 | 25200 | 0.3947 | 0.2287 |
| 0.0733 | 16.88 | 25600 | 0.3928 | 0.2374 |
| 0.0721 | 17.14 | 26000 | 0.3943 | 0.2324 |
| 0.0676 | 17.4 | 26400 | 0.3793 | 0.2311 |
| 0.0682 | 17.67 | 26800 | 0.3958 | 0.2257 |
| 0.0714 | 17.93 | 27200 | 0.3890 | 0.2322 |
| 0.0673 | 18.19 | 27600 | 0.3872 | 0.2229 |
| 0.0613 | 18.46 | 28000 | 0.3828 | 0.2226 |
| 0.0621 | 18.72 | 28400 | 0.3812 | 0.2214 |
| 0.0622 | 18.98 | 28800 | 0.3919 | 0.2212 |
| 0.0576 | 19.25 | 29200 | 0.4000 | 0.2205 |
| 0.0581 | 19.51 | 29600 | 0.3953 | 0.2203 |
| 0.0573 | 19.78 | 30000 | 0.3947 | 0.2190 |
| 0.0576 | 20.04 | 30400 | 0.3909 | 0.2156 |
| 0.0551 | 20.3 | 30800 | 0.4178 | 0.2153 |
| 0.0525 | 20.57 | 31200 | 0.3935 | 0.2152 |
| 0.0522 | 20.83 | 31600 | 0.4054 | 0.2151 |
| 0.0519 | 21.09 | 32000 | 0.3877 | 0.2135 |
| 0.0479 | 21.36 | 32400 | 0.4119 | 0.2107 |
| 0.0472 | 21.62 | 32800 | 0.3967 | 0.2091 |
| 0.048 | 21.89 | 33200 | 0.3812 | 0.2057 |
| 0.0458 | 22.15 | 33600 | 0.3931 | 0.2043 |
| 0.0459 | 22.41 | 34000 | 0.3937 | 0.2049 |
| 0.0448 | 22.68 | 34400 | 0.3900 | 0.2056 |
| 0.0432 | 22.94 | 34800 | 0.4050 | 0.2049 |
| 0.0425 | 23.2 | 35200 | 0.3985 | 0.2014 |
| 0.0415 | 23.47 | 35600 | 0.3976 | 0.2013 |
| 0.0403 | 23.73 | 36000 | 0.4031 | 0.2018 |
| 0.04 | 23.99 | 36400 | 0.3996 | 0.2000 |
| 0.039 | 24.26 | 36800 | 0.3977 | 0.1993 |
| 0.0406 | 24.52 | 37200 | 0.3967 | 0.2000 |
| 0.0391 | 24.79 | 37600 | 0.3986 | 0.1980 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
keras-io/pixel-cnn-mnist
|
keras-io
| 2022-02-03T13:19:37Z | 6 | 0 |
tf-keras
|
[
"tf-keras",
"convnet",
"mnist",
"generative",
"license:cc0-1.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- convnet
- mnist
- generative
license:
- cc0-1.0
---
## Keras Implementation of PixelCNN on MNIST 🔢
This repo contains the model [PixelCNN](https://keras.io/examples/generative/pixelcnn/).
Sample images generated:
<img src="https://i.ibb.co/RDWbJBM/image.png" width="120" height='120'> <img src="https://i.ibb.co/kGPTDDb/104c083f-68e4-4d10-8b37-a242a7f10dd6.png" width="120" height='120'> <img src="https://i.ibb.co/9Wqqhyc/indir-5.png" width="120" height='120'> <img src="https://i.ibb.co/x7yh5py/indir-6.png" width="120" height='120'>
Full credits to author: [ADMoreau](https://github.com/ADMoreau)
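A minimal loading sketch, assuming the checkpoint can be fetched with the Keras helper from `huggingface_hub`; generating digits additionally requires the autoregressive pixel-by-pixel sampling loop shown in the linked Keras example.
```python
from huggingface_hub import from_pretrained_keras

# Load the trained PixelCNN (assumes the repo stores a Keras checkpoint).
model = from_pretrained_keras("keras-io/pixel-cnn-mnist")
model.summary()
```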
|
diwank/maptask-deberta-pair
|
diwank
| 2022-02-03T12:51:24Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
---
# maptask-deberta-pair
Deberta-based Daily MapTask style dialog-act annotations classification model
## Example
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/maptask-deberta-pair")
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label = lambda n: ["acknowledge (0), align (1), check (2), clarify (3), explain (4), instruct (5), query_w (6), query_yn (7), ready (8), reply_n (9), reply_w (10), reply_y (11)".split(', ')[i] for i in n]
convert_to_label(predictions) # reply_n (9)
```
|
Hetarth/marian-finetuned-hi-hinglish
|
Hetarth
| 2022-02-03T09:54:31Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: marian-finetuned-hi-hinglish
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marian-finetuned-hi-hinglish
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1869
- Validation Loss: 4.0607
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1869 | 4.0607 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
versae/kenlm-5gram-ncc
|
versae
| 2022-02-03T08:16:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
pritoms/distilroberta-base-YTTranscript23
|
pritoms
| 2022-02-03T05:52:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-YTTranscript23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-YTTranscript23
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 2.9007 |
| No log | 2.0 | 140 | 2.9651 |
| No log | 3.0 | 210 | 2.9374 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pritoms/distilgpt2-YTTranscriptTrial2
|
pritoms
| 2022-02-03T04:46:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-YTTranscriptTrial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-YTTranscriptTrial2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 6.0027 |
| No log | 2.0 | 140 | 5.9072 |
| No log | 3.0 | 210 | 5.8738 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/denyah_
|
huggingtweets
| 2022-02-03T01:43:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/denyah_/1643852632266/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484264819049959425/siOsFP3t_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Den</div>
<div style="text-align: center; font-size: 14px;">@denyah_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Den.
| Data | Den |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 464 |
| Short tweets | 795 |
| Tweets kept | 1985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3e5c08gr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @denyah_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1438ocp8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1438ocp8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/denyah_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Plim/xls-r-300m-lm-fr
|
Plim
| 2022-02-02T23:29:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- fr
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [./checkpoint-6000](https://huggingface.co/./checkpoint-6000) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- Wer: 0.2457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.495 | 0.16 | 500 | 3.3883 | 1.0 |
| 2.9095 | 0.32 | 1000 | 2.9152 | 1.0000 |
| 1.8434 | 0.49 | 1500 | 1.0473 | 0.7446 |
| 1.4298 | 0.65 | 2000 | 0.5729 | 0.5130 |
| 1.1937 | 0.81 | 2500 | 0.3795 | 0.3450 |
| 1.1248 | 0.97 | 3000 | 0.3321 | 0.3052 |
| 1.0835 | 1.13 | 3500 | 0.3038 | 0.2805 |
| 1.0479 | 1.3 | 4000 | 0.2910 | 0.2689 |
| 1.0413 | 1.46 | 4500 | 0.2798 | 0.2593 |
| 1.014 | 1.62 | 5000 | 0.2727 | 0.2512 |
| 1.004 | 1.78 | 5500 | 0.2646 | 0.2471 |
| 0.9949 | 1.94 | 6000 | 0.2619 | 0.2457 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
microsoft/wavlm-large
|
microsoft
| 2022-02-02T21:21:50Z | 310,610 | 67 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"feature-extraction",
"speech",
"en",
"arxiv:1912.07875",
"arxiv:2106.06909",
"arxiv:2101.00390",
"arxiv:2110.13900",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- speech
inference: false
---
# WavLM-Large
[Microsoft's WavLM](https://github.com/microsoft/unilm/tree/master/wavlm)
The large model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more in-detail explanation of how to fine-tune the model.
The model was pre-trained on:
- 60,000 hours of [Libri-Light](https://arxiv.org/abs/1912.07875)
- 10,000 hours of [GigaSpeech](https://arxiv.org/abs/2106.06909)
- 24,000 hours of [VoxPopuli](https://arxiv.org/abs/2101.00390)
[Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
**Abstract**
*Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.*
The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.
# Usage
This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be
used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on the [SUPERB benchmark](https://superbbenchmark.org/).
**Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
of phonemes before fine-tuning.
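A minimal feature-extraction sketch (not from the original card), assuming the `transformers` WavLM classes and a 16 kHz mono waveform:
```python
# Hedged sketch: extract frame-level representations with WavLM-Large.
import torch
from transformers import Wav2Vec2FeatureExtractor, WavLMModel

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large")

waveform = torch.zeros(16_000)  # placeholder: one second of silence at 16 kHz
inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```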
## Speech Recognition
To fine-tune the model for speech recognition, see [the official speech recognition example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition).
## Speech Classification
To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).
## Speaker Verification
TODO
## Speaker Diarization
TODO
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)

|
Iskaj/newnew
|
Iskaj
| 2022-02-02T20:02:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"nl",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: newnew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newnew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4375
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
jonfd/convbert-base-igc-is
|
jonfd
| 2022-02-02T17:10:34Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"convbert",
"feature-extraction",
"is",
"dataset:igc",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- is
license: cc-by-4.0
datasets:
- igc
---
# Icelandic ConvBERT-Base
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
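A minimal sketch (not part of the original card) for extracting contextual embeddings with the PyTorch weights; the example sentence is illustrative only:
```python
# Hedged sketch: encode an Icelandic sentence with the pretrained ConvBERT.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jonfd/convbert-base-igc-is")
model = AutoModel.from_pretrained("jonfd/convbert-base-igc-is")

inputs = tokenizer("Halló heimur!", return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state
print(embeddings.shape)
```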
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
cahya/wav2vec2-base-turkish-artificial
|
cahya
| 2022-02-02T15:44:36Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Base Turkish with Artificial Voices by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 57.60
---
# Wav2Vec2-Base-Turkish-Artificial
Fine-tuned [ceyda/wav2vec2-base-760](https://huggingface.co/ceyda/wav2vec2-base-760)
on the [Turkish Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-turkish-artificial")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.60 %
## Training
The Artificial Common Voice `train` and `validation` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
kmfoda/staging-pegasus-gmeetsamsum
|
kmfoda
| 2022-02-02T14:34:58Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"feature-extraction",
"summarization",
"en",
"arxiv:1912.08777",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
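A hedged usage sketch (not from the original card), assuming the checkpoint exposes the standard Pegasus summarization head despite its feature-extraction tag:
```python
# Assumed usage; the input text is illustrative only.
from transformers import pipeline

summarizer = pipeline("summarization", model="kmfoda/staging-pegasus-gmeetsamsum")
notes = "The team reviewed the quarterly results and agreed to move the launch to March."
print(summarizer(notes, max_length=32, min_length=5)[0]["summary_text"])
```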
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly sample a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings also changed, please refer to the change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Ayham/roberta_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-02T12:46:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: roberta_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
diwank/dyda-deberta-pair
|
diwank
| 2022-02-02T10:48:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
---
# diwank/dyda-deberta-pair
A DeBERTa-based classifier for Daily Dialog style dialog-act annotations. It takes two sentences as input (the previous and the current utterance of a dialog); the previous sentence can be an empty string if this is the first utterance of a speaker in the dialog. It outputs one of the following labels (exactly as in the [daily-dialog dataset](https://huggingface.co/datasets/daily_dialog)): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*
## Usage
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/dyda-deberta-pair")
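# map each numeric prediction back to the label names listed above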
convert_to_label = lambda n: ["__dummy__ (0), inform (1), question (2), directive (3), commissive (4)".split(', ')[i] for i in n]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # inform (1)
```
|
NbAiLab/wav2vec2-xlsr-300M-NPSC-OH
|
NbAiLab
| 2022-02-02T06:10:42Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-300M-NPSC-OH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-300M-NPSC-OH
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638 | 0.66 | 500 | 3.0686 | 1.0 |
| 2.9311 | 1.31 | 1000 | 2.9208 | 1.0 |
| 2.4175 | 1.97 | 1500 | 1.5009 | 0.9049 |
| 1.4442 | 2.63 | 2000 | 0.4426 | 0.3783 |
| 1.2624 | 3.28 | 2500 | 0.3193 | 0.2998 |
| 1.1889 | 3.94 | 3000 | 0.2867 | 0.2630 |
| 1.1315 | 4.6 | 3500 | 0.2566 | 0.2444 |
| 1.0864 | 5.26 | 4000 | 0.2368 | 0.2294 |
| 1.093 | 5.91 | 4500 | 0.2240 | 0.2151 |
| 1.0368 | 6.57 | 5000 | 0.2117 | 0.2056 |
| 1.0178 | 7.23 | 5500 | 0.2020 | 0.1954 |
| 1.0035 | 7.88 | 6000 | 0.2005 | 0.1924 |
| 0.9759 | 8.54 | 6500 | 0.1971 | 0.1863 |
| 0.9795 | 9.2 | 7000 | 0.1892 | 0.1812 |
| 0.9601 | 9.85 | 7500 | 0.1863 | 0.1795 |
| 0.9673 | 10.51 | 8000 | 0.1809 | 0.1761 |
| 0.9233 | 11.17 | 8500 | 0.1818 | 0.1755 |
| 0.9382 | 11.83 | 9000 | 0.1767 | 0.1741 |
| 0.9242 | 12.48 | 9500 | 0.1743 | 0.1703 |
| 0.9703 | 13.14 | 10000 | 0.1711 | 0.1711 |
| 0.9139 | 13.8 | 10500 | 0.1718 | 0.1672 |
| 0.9073 | 14.45 | 11000 | 0.1700 | 0.1665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
tal-yifat/bert-injury-classifier
|
tal-yifat
| 2022-02-02T04:35:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-injury-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-injury-classifier
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.5298
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6676 | 1.0 | 19026 | 0.6635 | 0.6216 |
| 0.6915 | 2.0 | 38052 | 0.6915 | 0.5298 |
| 0.6924 | 3.0 | 57078 | 0.6915 | 0.5298 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
BigSalmon/InfillFormalLincoln
|
BigSalmon
| 2022-02-02T03:45:03Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
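A hedged generation sketch (not part of the original card) showing how one of the prompt formats above might be fed to the model; the prompt text and sampling settings are illustrative only:
```python
# Assumed usage sketch; prompt and decoding parameters are examples only.
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln")

prompt = ("informal english: i am very ready to do that just that.\n"
          "Translated into the Style of Abraham Lincoln:")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=input_ids.shape[1] + 40,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```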
|
huggingtweets/cosmonolan
|
huggingtweets
| 2022-02-01T21:59:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cosmonolan/1643752768713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1436186613667622920/PQrOPSrV_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nolan Koblischke</div>
<div style="text-align: center; font-size: 14px;">@cosmonolan</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nolan Koblischke.
| Data | Nolan Koblischke |
| --- | --- |
| Tweets downloaded | 154 |
| Retweets | 5 |
| Short tweets | 6 |
| Tweets kept | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13msto5g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cosmonolan's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/25mhxfie) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/25mhxfie/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cosmonolan')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cahya/wav2vec2-base-turkish-artificial-cv
|
cahya
| 2022-02-01T19:34:46Z | 14 | 4 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Base Turkish by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tr
type: common_voice
args: tr
metrics:
- name: Test WER
type: wer
value: 13.70
---
# Wav2Vec2-Base-Turkish-Artificial-CV
This is the model for Wav2Vec2-Base-Turkish-Artificial-CV, a fine-tuned
[cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial)
model on [Turkish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-base-turkish-artificial-cv")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\'\`…\’»«]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 13.70 %
## Training
The Common Voice `train`, `validation`, `other` and `invalidated` splits were used to fine-tune the model.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
|
cahya/output
|
cahya
| 2022-02-01T15:40:45Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1822
- Wer: 0.1423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-common_voice-tamil
|
patrickvonplaten
| 2022-02-01T14:17:40Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ta
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tamil
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1172
- Wer: 1.0070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.84 | 100 | 4.0148 | 1.0 |
| No log | 1.69 | 200 | 3.1738 | 1.0 |
| No log | 2.54 | 300 | 2.5980 | 1.0236 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
huggingtweets/scottmorrisonmp
|
huggingtweets
| 2022-02-01T11:31:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/scottmorrisonmp/1643715083152/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1116081523394891776/AYnEcQnG_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Scott Morrison</div>
<div style="text-align: center; font-size: 14px;">@scottmorrisonmp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Scott Morrison.
| Data | Scott Morrison |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 610 |
| Short tweets | 34 |
| Tweets kept | 2599 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ytdoprx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @scottmorrisonmp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19568gcc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19568gcc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/scottmorrisonmp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
moussaKam/frugalscore_medium_bert-base_bert-score
|
moussaKam
| 2022-02-01T10:50:43Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
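A hedged scoring sketch (not from the original card), assuming the checkpoint is a single-output sequence-pair regression model as described in the paper; the sentences are illustrative:
```python
# Assumed usage: score a candidate sentence against a reference.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_medium_bert-base_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(["the cat sat on the mat"], ["a cat was sitting on the mat"],
                   return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze(-1)
print(score)  # approximation of the teacher metric (here BERTScore)
```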
|
moussaKam/frugalscore_small_bert-base_bert-score
|
moussaKam
| 2022-02-01T10:50:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
|
moussaKam/frugalscore_tiny_bert-base_bert-score
|
moussaKam
| 2022-02-01T10:50:21Z | 4,310 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# FrugalScore
FrugalScore is an approach to learn a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project github: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
|
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
|
MaryaAI
| 2022-02-01T08:51:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0589
- Validation Loss: 5.3227
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0589 | 5.3227 | 0 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
AndrewMcDowell/wav2vec2-xls-r-1b-arabic
|
AndrewMcDowell
| 2022-02-01T08:13:55Z | 20 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ar",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ar
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1373
- Wer: 0.8607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 |
| 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 |
| 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 |
| 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 |
| 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 |
| 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 |
| 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 |
| 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 |
| 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 |
| 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 |
| 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 |
| 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 |
| 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 |
| 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 |
| 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 |
| 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 |
| 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 |
| 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 |
| 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 |
| 2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 |
| 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 |
| 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 |
| 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 |
| 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 |
| 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 |
| 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 |
| 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 |
| 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 |
| 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 |
| 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 |
| 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 |
| 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 |
| 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 |
| 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 |
| 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
huggingtweets/clamtime-madramami
|
huggingtweets
| 2022-02-01T07:09:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/clamtime-madramami/1643699341002/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486460616927858690/H_L_HiW-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486839044906618880/x1Q9ED9b_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">clementine!!!! & riley, twink eliminator 🐾🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@clamtime-madramami</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from clementine!!!! & riley, twink eliminator 🐾🏳️⚧️.
| Data | clementine!!!! | riley, twink eliminator 🐾🏳️⚧️ |
| --- | --- | --- |
| Tweets downloaded | 3239 | 3247 |
| Retweets | 340 | 114 |
| Short tweets | 872 | 607 |
| Tweets kept | 2027 | 2526 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lh3p7v6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clamtime-madramami's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/clamtime-madramami')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hady/wav2vec2-base-timit-demo-colab
|
hady
| 2022-02-01T07:01:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jkang/espnet2_an4_asr
|
jkang
| 2022-02-01T04:46:54Z | 2 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:an4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- an4
license: cc-by-4.0
---
## ESPnet2 ASR model
### `jkang/espnet2_an4_asr`
This model was trained by jaekookang using an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 48422215e272812feb9bbac9d7cf4aae6a316bca
pip install -e .
cd egs2/an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_an4_asr
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Feb 1 13:22:35 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `48422215e272812feb9bbac9d7cf4aae6a316bca`
- Commit date: `Fri Jan 28 17:25:31 2022 +0000`
## asr_train_asr_transformer_raw_en_bpe30_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|773|91.5|6.5|2.1|0.6|9.2|38.5|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|591|88.8|7.4|3.7|0.7|11.8|41.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2565|96.6|1.2|2.2|1.0|4.4|38.5|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|1915|94.0|1.7|4.3|0.4|6.4|41.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2695|96.8|1.1|2.1|0.9|4.2|38.5|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|2015|94.3|1.6|4.1|0.4|6.1|41.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_en_bpe30_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe30_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe30_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe30_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe30_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev_sp/wav.scp
- speech
- sound
- - dump/raw/train_nodev_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe30_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Priyajay/xls-r-ab-test
|
Priyajay
| 2022-02-01T04:29:17Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hi",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- hi
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 248.1278
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
jonfd/electra-small-is-no
|
jonfd
| 2022-01-31T23:41:45Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"is",
"no",
"dataset:igc",
"dataset:ic3",
"dataset:jonfd/ICC",
"dataset:mc4",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- is
- no
license: cc-by-4.0
datasets:
- igc
- ic3
- jonfd/ICC
- mc4
---
# Icelandic-Norwegian ELECTRA-Small
This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic and Norwegian text obtained from .is and .no domains, respectively
The total size of the corpus after document-level deduplication and filtering was 7.41B tokens, split equally between the two languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 64,105 for 1.1 million steps, and otherwise with default settings.
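As a minimal usage sketch, assuming the standard `transformers` Auto classes resolve this ELECTRA checkpoint, the pretrained encoder can be loaded to produce contextual embeddings:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jonfd/electra-small-is-no")
model = AutoModel.from_pretrained("jonfd/electra-small-is-no")

# Encode an Icelandic sentence and inspect the contextual embeddings
inputs = tokenizer("Halló heimur!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```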
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
paintingpeter/distilbert-base-uncased-distilled-clinc
|
paintingpeter
| 2022-01-31T23:27:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9467741935483871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2795
- Accuracy: 0.9468
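As a hedged usage sketch (assuming the checkpoint loads through the standard text-classification pipeline; the example query is illustrative), intent prediction looks like:
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification", model="paintingpeter/distilbert-base-uncased-distilled-clinc"
)
# Returns the predicted CLINC150 intent label with its score
print(intent_classifier("How do I transfer money to my savings account?"))
```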
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4223 | 1.0 | 318 | 2.5556 | 0.7561 |
| 1.9655 | 2.0 | 636 | 1.3075 | 0.8577 |
| 1.0041 | 3.0 | 954 | 0.6970 | 0.9165 |
| 0.5449 | 4.0 | 1272 | 0.4637 | 0.9339 |
| 0.3424 | 5.0 | 1590 | 0.3630 | 0.9397 |
| 0.247 | 6.0 | 1908 | 0.3225 | 0.9442 |
| 0.1968 | 7.0 | 2226 | 0.2983 | 0.9458 |
| 0.1693 | 8.0 | 2544 | 0.2866 | 0.9465 |
| 0.1547 | 9.0 | 2862 | 0.2820 | 0.9468 |
| 0.1477 | 10.0 | 3180 | 0.2795 | 0.9468 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
gilparmentier/pokemon_gptj_model
|
gilparmentier
| 2022-01-31T21:19:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"en",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
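A minimal generation sketch, assuming enough memory for the full-precision weights, could then look like this:
```python
import torch

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a continuation from the model
with torch.no_grad():
    output_ids = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```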
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who have helped out one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
|
shaina/CoQUAD_MPNet
|
shaina
| 2022-01-31T18:22:46Z | 0 | 0 | null |
[
"MPNet",
"en",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- MPNet
license: apache-2.0
dataset:
- covid-19
---
# CoQUAD_MPNet : MPNet model for COVID-19
## Introduction
It is an MPNet-based language model fine-tuned for question answering on a COVID-19 dataset, with a focus on post-COVID conditions.
## How to use for Deepset Haystack
```python
# Load data
from datasets import load_dataset
dataset = load_dataset("shaina/covid19")

# Download the model weights (Git LFS is needed for the large files)
!sudo apt-get install git-lfs
!git lfs install
!git clone https://huggingface.co/shaina/CoQUAD_MPNet

# Haystack pipeline
# `document_store` is assumed to be an already-initialized Elasticsearch document
# store populated with the COVID-19 documents from the dataset loaded above.
from haystack.nodes import ElasticsearchRetriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

retriever = ElasticsearchRetriever(document_store=document_store)
reader = FARMReader(model_name_or_path="CoQUAD_MPNet", use_gpu=True)  # path to the cloned model directory
pipe = ExtractiveQAPipeline(reader, retriever)

prediction = pipe.run(
    query="What is post-COVID?", params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 5}}
)
from pprint import pprint
pprint(prediction)
```
---
## Authors
Shaina Raza
---
|
ncats/EpiExtract4GARD-v1
|
ncats
| 2022-01-31T17:03:33Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
## Model description
**EpiExtract4GARD** is a fine-tuned [BioBERT-base-cased](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER) for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.
#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
See the code below for using the Transformers *pipeline* for NER:
~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")
NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer,aggregation_strategy='simple')
sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."
sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."
#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.
NER_pipeline(sample)
NER_pipeline(sample2)
~~~
Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):
~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)
NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()
def search(term, num_results=50):
    return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length, nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)
a = search(7058)
a
b = search('Santos Mateus Leal syndrome')
b
c = search('Fellman syndrome')
c
d = search('GARD:0009941')
d
e = search('Homocystinuria')
e
~~~
#### Limitations and bias
## Training data
It was trained on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
---------|--------------
O |Outside of a named entity
B-LOC | Beginning of a location
I-LOC | Inside of a location
B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence")
I-EPI | Epidemiologic type that is not the beginning token.
B-STAT | Beginning of an epidemiologic rate
I-STAT | Inside of an epidemiologic rate
### EpiSet Statistics
Beyond any limitations due to the EpiSet4NER dataset, this model is limited in numeracy because BERT-based models rely on subword embeddings; numeracy is crucial for epidemiologic rate identification, so this limits the entity-level results. Additionally, more recent weakly supervised learning techniques could be used to improve the performance of the model without changing the underlying dataset.
## Training procedure
This model was trained on an [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/) instance, which utilized a single Tesla V100 GPU, with these hyperparameters:
4 epochs of training (AdamW weight decay = 0.05) with a batch size of 16. Maximum sequence length = 192. Model was fed one sentence at a time. Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml).
## Hold-out validation results
metric| entity-level result
-|-
f1 | 83.8
precision | 83.2
recall | 84.5
## Test results
| Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 |
|:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:|
| EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 |
| | | Location | 0.661 | 0.696 | 0.678 |
| | | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 |
| | Token-Level | Overall | 0.811 | 0.713 | 0.759 |
| | | Location | 0.949 | 0.742 | 0.833 |
| | | Epidemiologic Type | 0.9 | 0.917 | 0.908 |
| | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 |
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model.
|
gagan3012/xls-r-300m-pa
|
gagan3012
| 2022-01-31T15:27:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: xls-r-300m-pa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-pa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0443
- Wer: 0.5715
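As a rough usage sketch (assuming the checkpoint ships a CTC tokenizer compatible with the ASR pipeline; the audio path is a placeholder), transcription of a 16 kHz Punjabi recording would look like:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="gagan3012/xls-r-300m-pa")
# `sample_punjabi.wav` is a placeholder for a local 16 kHz mono audio file
print(asr("sample_punjabi.wav"))
```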
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 500.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 4.6694 | 19.22 | 500 | 4.0455 | 1.0 |
| 3.3907 | 38.45 | 1000 | 3.2836 | 1.0 |
| 2.0866 | 57.67 | 1500 | 1.2788 | 0.7715 |
| 1.4106 | 76.9 | 2000 | 0.7866 | 0.6891 |
| 1.1711 | 96.15 | 2500 | 0.6556 | 0.6272 |
| 1.038 | 115.37 | 3000 | 0.6195 | 0.5680 |
| 0.8989 | 134.6 | 3500 | 0.6563 | 0.5602 |
| 0.8021 | 153.82 | 4000 | 0.6644 | 0.5327 |
| 0.7161 | 173.07 | 4500 | 0.6844 | 0.5253 |
| 0.6449 | 192.3 | 5000 | 0.7018 | 0.5331 |
| 0.5659 | 211.52 | 5500 | 0.7451 | 0.5465 |
| 0.5118 | 230.75 | 6000 | 0.7857 | 0.5386 |
| 0.4385 | 249.97 | 6500 | 0.8062 | 0.5382 |
| 0.3984 | 269.22 | 7000 | 0.8316 | 0.5621 |
| 0.3666 | 288.45 | 7500 | 0.8736 | 0.5504 |
| 0.3256 | 307.67 | 8000 | 0.9133 | 0.5688 |
| 0.289 | 326.9 | 8500 | 0.9556 | 0.5684 |
| 0.2663 | 346.15 | 9000 | 0.9344 | 0.5708 |
| 0.2445 | 365.37 | 9500 | 0.9472 | 0.5590 |
| 0.2289 | 384.6 | 10000 | 0.9713 | 0.5672 |
| 0.2048 | 403.82 | 10500 | 0.9978 | 0.5762 |
| 0.1857 | 423.07 | 11000 | 1.0230 | 0.5798 |
| 0.1751 | 442.3 | 11500 | 1.0409 | 0.5755 |
| 0.1688 | 461.52 | 12000 | 1.0445 | 0.5727 |
| 0.1633 | 480.75 | 12500 | 1.0484 | 0.5739 |
| 0.1488 | 499.97 | 13000 | 1.0443 | 0.5715 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
huggingtweets/goando-tsuchinao83-za09313103
|
huggingtweets
| 2022-01-31T09:56:33Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/goando-tsuchinao83-za09313103/1643622988627/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/715665333218979842/fLLzpFee_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1145832571214815232/KYNcOP04_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1281544202627674112/zglo72WL_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch</div>
<div style="text-align: center; font-size: 14px;">@goando-tsuchinao83-za09313103</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 土屋尚史 / Goodpatch & Go Ando / PREDUCTS / THE GUILD & shun nozaki / Goodpatch.
| Data | 土屋尚史 / Goodpatch | Go Ando / PREDUCTS / THE GUILD | shun nozaki / Goodpatch |
| --- | --- | --- | --- |
| Tweets downloaded | 3236 | 3250 | 798 |
| Retweets | 1577 | 97 | 34 |
| Short tweets | 914 | 1729 | 458 |
| Tweets kept | 745 | 1424 | 306 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31bsh75f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @goando-tsuchinao83-za09313103's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/26i8c30r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/goando-tsuchinao83-za09313103')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anton-l/wav2vec2-xls-r-common_voice-tr-ft
|
anton-l
| 2022-01-31T09:48:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft-500sh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-common_voice-tr-ft-500sh
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Wer: 0.4009
- Cer: 0.1032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5288 | 17.0 | 500 | 0.5099 | 0.5426 | 0.1432 |
| 0.2967 | 34.0 | 1000 | 0.5421 | 0.4746 | 0.1256 |
| 0.2447 | 51.0 | 1500 | 0.5347 | 0.4831 | 0.1267 |
| 0.122 | 68.01 | 2000 | 0.5854 | 0.4479 | 0.1161 |
| 0.1035 | 86.0 | 2500 | 0.5597 | 0.4457 | 0.1166 |
| 0.081 | 103.0 | 3000 | 0.5748 | 0.4250 | 0.1144 |
| 0.0849 | 120.0 | 3500 | 0.5598 | 0.4337 | 0.1145 |
| 0.0542 | 137.01 | 4000 | 0.5687 | 0.4223 | 0.1097 |
| 0.0318 | 155.0 | 4500 | 0.5904 | 0.4057 | 0.1052 |
| 0.0106 | 172.0 | 5000 | 0.5794 | 0.4009 | 0.1032 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
|
huggingtweets/eri_razapii-marisakura-miyakomx
|
huggingtweets
| 2022-01-31T07:36:10Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/eri_razapii-marisakura-miyakomx/1643614565483/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1463699400405164034/aRY9jlnO_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1460131579930755073/ln4j-nWU_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466279279667277828/VqmxK5gB_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">えりらざぴ | SHE CEO/CCO & 櫻本真理 cotree/CoachEd & 吉澤美弥子🤿Coral Capital</div>
<div style="text-align: center; font-size: 14px;">@eri_razapii-marisakura-miyakomx</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from えりらざぴ | SHE CEO/CCO & 櫻本真理 cotree/CoachEd & 吉澤美弥子🤿Coral Capital.
| Data | えりらざぴ \| SHE CEO/CCO | 櫻本真理 cotree/CoachEd | 吉澤美弥子🤿Coral Capital |
| --- | --- | --- | --- |
| Tweets downloaded | 3232 | 3205 | 1206 |
| Retweets | 1781 | 1564 | 79 |
| Short tweets | 959 | 877 | 736 |
| Tweets kept | 492 | 764 | 391 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1xlu40i1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eri_razapii-marisakura-miyakomx's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22cwqnkv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22cwqnkv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/eri_razapii-marisakura-miyakomx')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TajMahaladeen/pokemon_gptj
|
TajMahaladeen
| 2022-01-31T06:12:31Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
NbAiLab/xls-r-1b-npsc
|
NbAiLab
| 2022-01-31T04:33:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
huggingtweets/newsfrmhome
|
huggingtweets
| 2022-01-30T20:50:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/newsfrmhome/1643575848331/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484642358641807369/XYfGxtPs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sarah (allegedly)</div>
<div style="text-align: center; font-size: 14px;">@newsfrmhome</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sarah (allegedly).
| Data | sarah (allegedly) |
| --- | --- |
| Tweets downloaded | 3229 |
| Retweets | 448 |
| Short tweets | 378 |
| Tweets kept | 2403 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1kr9qjmz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @newsfrmhome's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zjy142t4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zjy142t4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/newsfrmhome')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Kayvane/distilbert-complaints-product
|
Kayvane
| 2022-01-30T19:15:13Z | 33 | 3 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:consumer_complaints",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- consumer_complaints
model-index:
- name: distilbert-complaints-product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-complaints-product
This model was trained on the [CFPB consumer complaints](https://www.consumerfinance.gov/data-research/consumer-complaints/) dataset, also made available through the Hugging Face Datasets library. It predicts the type of financial complaint based on the text provided.
## Model description
A DistilBert Text Classification Model, with 18 possible classes to determine the nature of a financial customer complaint.
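A minimal inference sketch (assuming the standard text-classification pipeline; the complaint text is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Kayvane/distilbert-complaints-product")
# Returns the predicted financial product class with its score
print(classifier("I was charged twice for my credit card payment and the bank refuses to refund me."))
```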
## Intended uses & limitations
This model is used as part of a demonstration for E2E Machine Learning Projects focused on Contact Centre Automation:
- **Infrastructure:** Terraform
- **ML Ops:** HuggingFace (Datasets, Hub, Transformers)
- **ML Explainability:** SHAP
- **Cloud:** AWS
- Model Hosting: Lambda
- DB Backend: DynamoDB
- Orchestration: Step-Functions
- UI Hosting: EC2
- Routing: API Gateway
- **UI:** Budibase
## Training and evaluation data
consumer_complaints dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
Sindhu/rembert-squad2
|
Sindhu
| 2022-01-30T18:35:08Z | 5 | 3 |
transformers
|
[
"transformers",
"pytorch",
"rembert",
"question-answering",
"multilingual",
"dataset:squad2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
tags:
- question-answering
datasets:
- squad2
metrics:
- squad2
---
# Rembert Squad2
This model is fine-tuned for the QA task on SQuAD2 from the [RemBERT checkpoint](https://huggingface.co/google/rembert).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = "rembert"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## Squad 2 Evaluation stats:
Metrics generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 84.51107554956624,
"f1": 87.46644042781853,
"total": 11873,
"HasAns_exact": 80.97165991902834,
"HasAns_f1": 86.89086491219469,
"HasAns_total": 5928,
"NoAns_exact": 88.04037005887301,
"NoAns_f1": 88.04037005887301,
"NoAns_total": 5945
}
```
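A minimal extractive-QA sketch, assuming the checkpoint loads through the standard question-answering pipeline (the context text is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/rembert-squad2")
result = qa(
    question="What task was the model fine-tuned on?",
    context="This RemBERT checkpoint was fine-tuned on the SQuAD2 question answering dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```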
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man)
|
Erfan/mT5-base_Farsi_Title_Generator
|
Erfan
| 2022-01-30T18:00:42Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"Title-Generation",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
language:
- fa
tags:
- Title-Generation
metrics:
- ROUGH
---
|
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small
|
tomascufaro
| 2022-01-30T17:23:59Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3763
- Wer: 0.1791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2277 | 0.26 | 400 | 0.2601 | 0.2291 |
| 0.2932 | 0.53 | 800 | 0.2950 | 0.2670 |
| 0.3019 | 0.79 | 1200 | 0.3247 | 0.2766 |
| 0.2987 | 1.05 | 1600 | 0.3031 | 0.2606 |
| 0.261 | 1.32 | 2000 | 0.2994 | 0.2620 |
| 0.2651 | 1.58 | 2400 | 0.3134 | 0.2700 |
| 0.264 | 1.85 | 2800 | 0.3016 | 0.2641 |
| 0.2475 | 2.11 | 3200 | 0.3135 | 0.2661 |
| 0.2269 | 2.37 | 3600 | 0.3029 | 0.2562 |
| 0.2389 | 2.64 | 4000 | 0.3035 | 0.2549 |
| 0.2319 | 2.9 | 4400 | 0.3022 | 0.2551 |
| 0.2123 | 3.16 | 4800 | 0.3256 | 0.2638 |
| 0.2094 | 3.43 | 5200 | 0.3227 | 0.2712 |
| 0.2121 | 3.69 | 5600 | 0.3085 | 0.2596 |
| 0.207 | 3.96 | 6000 | 0.3041 | 0.2597 |
| 0.1809 | 4.22 | 6400 | 0.3122 | 0.2524 |
| 0.1846 | 4.48 | 6800 | 0.3254 | 0.2579 |
| 0.1885 | 4.75 | 7200 | 0.2958 | 0.2437 |
| 0.1923 | 5.01 | 7600 | 0.3136 | 0.2502 |
| 0.1626 | 5.27 | 8000 | 0.3059 | 0.2488 |
| 0.1704 | 5.54 | 8400 | 0.3082 | 0.2515 |
| 0.1674 | 5.8 | 8800 | 0.3196 | 0.2509 |
| 0.1691 | 6.06 | 9200 | 0.3193 | 0.25 |
| 0.1499 | 6.33 | 9600 | 0.3529 | 0.2635 |
| 0.1568 | 6.59 | 10000 | 0.3241 | 0.2481 |
| 0.1538 | 6.86 | 10400 | 0.3354 | 0.2476 |
| 0.1503 | 7.12 | 10800 | 0.3180 | 0.2402 |
| 0.136 | 7.38 | 11200 | 0.3230 | 0.2397 |
| 0.1413 | 7.65 | 11600 | 0.3178 | 0.2451 |
| 0.147 | 7.91 | 12000 | 0.3170 | 0.2389 |
| 0.1341 | 8.17 | 12400 | 0.3380 | 0.2501 |
| 0.1329 | 8.44 | 12800 | 0.3265 | 0.2414 |
| 0.1314 | 8.7 | 13200 | 0.3281 | 0.2482 |
| 0.1312 | 8.97 | 13600 | 0.3259 | 0.2539 |
| 0.12 | 9.23 | 14000 | 0.3291 | 0.2424 |
| 0.1193 | 9.49 | 14400 | 0.3302 | 0.2412 |
| 0.1189 | 9.76 | 14800 | 0.3376 | 0.2407 |
| 0.1217 | 10.02 | 15200 | 0.3334 | 0.2400 |
| 0.1118 | 10.28 | 15600 | 0.3359 | 0.2368 |
| 0.1139 | 10.55 | 16000 | 0.3239 | 0.2335 |
| 0.1106 | 10.81 | 16400 | 0.3374 | 0.2352 |
| 0.1081 | 11.07 | 16800 | 0.3585 | 0.2434 |
| 0.1063 | 11.34 | 17200 | 0.3639 | 0.2472 |
| 0.1041 | 11.6 | 17600 | 0.3399 | 0.2423 |
| 0.1062 | 11.87 | 18000 | 0.3410 | 0.2388 |
| 0.1012 | 12.13 | 18400 | 0.3597 | 0.2413 |
| 0.0953 | 12.39 | 18800 | 0.3440 | 0.2296 |
| 0.097 | 12.66 | 19200 | 0.3440 | 0.2269 |
| 0.0968 | 12.92 | 19600 | 0.3498 | 0.2333 |
| 0.0902 | 13.18 | 20000 | 0.3471 | 0.2290 |
| 0.0868 | 13.45 | 20400 | 0.3462 | 0.2266 |
| 0.0892 | 13.71 | 20800 | 0.3373 | 0.2227 |
| 0.0902 | 13.97 | 21200 | 0.3377 | 0.2240 |
| 0.0846 | 14.24 | 21600 | 0.3484 | 0.2237 |
| 0.0839 | 14.5 | 22000 | 0.3706 | 0.2260 |
| 0.0834 | 14.77 | 22400 | 0.3430 | 0.2268 |
| 0.0841 | 15.03 | 22800 | 0.3489 | 0.2259 |
| 0.076 | 15.29 | 23200 | 0.3626 | 0.2281 |
| 0.0771 | 15.56 | 23600 | 0.3624 | 0.2268 |
| 0.0773 | 15.82 | 24000 | 0.3440 | 0.2252 |
| 0.0759 | 16.08 | 24400 | 0.3532 | 0.2170 |
| 0.0745 | 16.35 | 24800 | 0.3686 | 0.2188 |
| 0.0713 | 16.61 | 25200 | 0.3691 | 0.2195 |
| 0.0718 | 16.88 | 25600 | 0.3470 | 0.2108 |
| 0.0685 | 17.14 | 26000 | 0.3756 | 0.2179 |
| 0.0689 | 17.4 | 26400 | 0.3542 | 0.2149 |
| 0.0671 | 17.67 | 26800 | 0.3461 | 0.2165 |
| 0.0737 | 17.93 | 27200 | 0.3473 | 0.2238 |
| 0.0669 | 18.19 | 27600 | 0.3441 | 0.2138 |
| 0.0629 | 18.46 | 28000 | 0.3721 | 0.2155 |
| 0.0632 | 18.72 | 28400 | 0.3667 | 0.2126 |
| 0.0647 | 18.98 | 28800 | 0.3579 | 0.2097 |
| 0.0603 | 19.25 | 29200 | 0.3670 | 0.2130 |
| 0.0604 | 19.51 | 29600 | 0.3750 | 0.2142 |
| 0.0619 | 19.78 | 30000 | 0.3804 | 0.2160 |
| 0.0603 | 20.04 | 30400 | 0.3764 | 0.2124 |
| 0.0577 | 20.3 | 30800 | 0.3858 | 0.2097 |
| 0.0583 | 20.57 | 31200 | 0.3520 | 0.2089 |
| 0.0561 | 20.83 | 31600 | 0.3615 | 0.2079 |
| 0.0545 | 21.09 | 32000 | 0.3824 | 0.2032 |
| 0.0525 | 21.36 | 32400 | 0.3858 | 0.2091 |
| 0.0524 | 21.62 | 32800 | 0.3956 | 0.2099 |
| 0.0527 | 21.89 | 33200 | 0.3667 | 0.2025 |
| 0.0514 | 22.15 | 33600 | 0.3708 | 0.2032 |
| 0.0506 | 22.41 | 34000 | 0.3815 | 0.2053 |
| 0.0478 | 22.68 | 34400 | 0.3671 | 0.2007 |
| 0.049 | 22.94 | 34800 | 0.3758 | 0.2003 |
| 0.0477 | 23.2 | 35200 | 0.3786 | 0.2014 |
| 0.045 | 23.47 | 35600 | 0.3732 | 0.1998 |
| 0.0426 | 23.73 | 36000 | 0.3737 | 0.2010 |
| 0.0444 | 23.99 | 36400 | 0.3600 | 0.1990 |
| 0.0433 | 24.26 | 36800 | 0.3689 | 0.1976 |
| 0.0442 | 24.52 | 37200 | 0.3787 | 0.1968 |
| 0.0419 | 24.79 | 37600 | 0.3652 | 0.1961 |
| 0.042 | 25.05 | 38000 | 0.3820 | 0.1964 |
| 0.0419 | 25.31 | 38400 | 0.3786 | 0.1919 |
| 0.0376 | 25.58 | 38800 | 0.3842 | 0.1934 |
| 0.0385 | 25.84 | 39200 | 0.3767 | 0.1900 |
| 0.0396 | 26.1 | 39600 | 0.3688 | 0.1888 |
| 0.0371 | 26.37 | 40000 | 0.3815 | 0.1894 |
| 0.0363 | 26.63 | 40400 | 0.3748 | 0.1878 |
| 0.0377 | 26.9 | 40800 | 0.3713 | 0.1852 |
| 0.0352 | 27.16 | 41200 | 0.3734 | 0.1851 |
| 0.0355 | 27.42 | 41600 | 0.3776 | 0.1874 |
| 0.0333 | 27.69 | 42000 | 0.3867 | 0.1841 |
| 0.0348 | 27.95 | 42400 | 0.3823 | 0.1839 |
| 0.0329 | 28.21 | 42800 | 0.3795 | 0.1822 |
| 0.0325 | 28.48 | 43200 | 0.3711 | 0.1813 |
| 0.0328 | 28.74 | 43600 | 0.3721 | 0.1781 |
| 0.0312 | 29.0 | 44000 | 0.3803 | 0.1816 |
| 0.0318 | 29.27 | 44400 | 0.3758 | 0.1794 |
| 0.0302 | 29.53 | 44800 | 0.3792 | 0.1784 |
| 0.0339 | 29.8 | 45200 | 0.3763 | 0.1791 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
imvladikon/charbert-roberta-wiki
|
imvladikon
| 2022-01-30T11:37:26Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"language model",
"en",
"dataset:wikipedia",
"arxiv:2011.01513",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- language model
datasets:
- wikipedia
---
Pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT).
```
@misc{ma2020charbert,
title={CharBERT: Character-aware Pre-trained Language Model},
author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu},
year={2020},
eprint={2011.01513},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
imvladikon/charbert-bert-wiki
|
imvladikon
| 2022-01-30T11:35:48Z | 63 | 3 |
transformers
|
[
"transformers",
"pytorch",
"language model",
"en",
"dataset:wikipedia",
"arxiv:2011.01513",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- language model
datasets:
- wikipedia
---
Pre-trained model from [CharBERT: Character-aware Pre-trained Language Model](https://github.com/wtma/CharBERT).
```
@misc{ma2020charbert,
title={CharBERT: Character-aware Pre-trained Language Model},
author={Wentao Ma and Yiming Cui and Chenglei Si and Ting Liu and Shijin Wang and Guoping Hu},
year={2020},
eprint={2011.01513},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
imvladikon/general_character_bert
|
imvladikon
| 2022-01-30T11:35:11Z | 20 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"language model",
"en",
"dataset:wikipedia",
"dataset:openwebtext",
"arxiv:2010.10392",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- language model
datasets:
- wikipedia
- openwebtext
---
Pre-trained `general_character_bert` model from [CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters (El Boukkouri et al., 2020)](https://github.com/helboukkouri/character-bert).
```
@inproceedings{el-boukkouri-etal-2020-characterbert,
title = "{C}haracter{BERT}: Reconciling {ELM}o and {BERT} for Word-Level Open-Vocabulary Representations From Characters",
author = "El Boukkouri, Hicham and
Ferret, Olivier and
Lavergne, Thomas and
Noji, Hiroshi and
Zweigenbaum, Pierre and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year={2020},
eprint={2010.10392},
archivePrefix={arXiv},
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.coling-main.609",
doi = "10.18653/v1/2020.coling-main.609",
pages = "6903--6915",
abstract = "Due to the compelling improvements brought by BERT, many recent representation models adopted the Transformer architecture as their main building block, consequently inheriting the wordpiece tokenization system despite it not being intrinsically linked to the notion of Transformers. While this system is thought to achieve a good balance between the flexibility of characters and the efficiency of full words, using predefined wordpiece vocabularies from the general domain is not always suitable, especially when building models for specialized domains (e.g., the medical domain). Moreover, adopting a wordpiece tokenization shifts the focus from the word level to the subword level, making the models conceptually more complex and arguably less convenient in practice. For these reasons, we propose CharacterBERT, a new variant of BERT that drops the wordpiece system altogether and uses a Character-CNN module instead to represent entire words by consulting their characters. We show that this new model improves the performance of BERT on a variety of medical domain tasks while at the same time producing robust, word-level, and open-vocabulary representations.",
}
```
|
pinecone/mpnet-retriever-discourse
|
pinecone
| 2022-01-30T07:23:58Z | 4 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"question-answering",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- question-answering
---
# MPNet Retriever (Discourse)
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used as a retriever model in open-domain question-answering tasks.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/mpnet-retriever-discourse')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-discourse')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-discourse')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was fine-tuned on question-answer pairs scraped from several ML-focused Discourse forums \[HuggingFace, PyTorch, Streamlit, TensorFlow\].
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 105 with parameters:
```
{'batch_size': 12}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
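Putting those pieces together, a minimal reconstruction of the fine-tuning loop (the base checkpoint and the toy question-answer pairs below are stand-ins; the actual forum data and starting model are not stated in this card):
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Toy Q&A pairs standing in for the scraped Discourse data.
train_examples = [
    InputExample(texts=[f"question {i} about the library?", f"answer {i} from the forum thread"])
    for i in range(48)
]

# Assumed stand-in base checkpoint; the real starting model is not documented here.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=12)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-5},
)
```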
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Fine-tuned by [James Briggs](https://www.youtube.com/c/jamesbriggs) at [Pinecone](https://www.pinecone.io). Learn more about the [fine-tuning process here](https://www.pinecone.io/learn/retriever-models/).
|
pablouribe/xls-r-ab-test
|
pablouribe
| 2022-01-30T05:13:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.2596
- Wer: 19.1571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|