| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
tadejmagajna/flair-sl-pos | tadejmagajna | 2022-01-05T15:07:06Z | 2 | 0 | flair | ["flair", "pytorch", "token-classification", "sequence-tagger-model", "sl", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: sl
widget:
- text: "Danes je lep dan."
---
## Slovene Part-of-speech (PoS) Tagging for Flair
This is a Slovene part-of-speech (PoS) tagger trained on the [Slovenian UD Treebank](https://github.com/UniversalDependencies/UD_Slovenian-SSJ) using the Flair NLP framework.
The tagger combines forward Slovene contextual string embeddings, backward Slovene contextual string embeddings, and classic Slovene FastText embeddings.
F-score (micro): **94.96**
The model is trained on a large set (500+) of distinct tags, which are described at [https://universaldependencies.org/tagset-conversion/sl-multext-uposf.html](https://universaldependencies.org/tagset-conversion/sl-multext-uposf.html).
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("tadejmagajna/flair-sl-pos")

# make example sentence
sentence = Sentence("Danes je lep dan.")

# predict PoS tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted PoS spans
print('The following PoS tags are found:')

# iterate over parts of speech and print
for tag in sentence.get_spans('pos'):
    print(tag)
```
This prints out the following output:
```
Sentence: "Danes je lep dan ." [− Tokens: 5 − Token-Labels: "Danes <Rgp> je <Va-r3s-n> lep <Agpmsnn> dan <Ncmsn> . <Z>"]
The following PoS tags are found:
Span [1]: "Danes" [− Labels: Rgp (1.0)]
Span [2]: "je" [− Labels: Va-r3s-n (1.0)]
Span [3]: "lep" [− Labels: Agpmsnn (0.9999)]
Span [4]: "dan" [− Labels: Ncmsn (1.0)]
Span [5]: "." [− Labels: Z (1.0)]
```
---
### Training: Script to train this model
The following standard Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import UD_SLOVENIAN
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings

# 1. get the corpus
corpus: Corpus = UD_SLOVENIAN()

# 2. what tag do we want to predict?
tag_type = 'pos'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize embeddings
embedding_types = [
    WordEmbeddings('sl'),
    FlairEmbeddings('sl-forward'),
    FlairEmbeddings('sl-backward'),
]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger: SequenceTagger = SequenceTagger(hidden_size=256,
                                        embeddings=embeddings,
                                        tag_dictionary=tag_dictionary,
                                        tag_type=tag_type)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer: ModelTrainer = ModelTrainer(tagger, corpus)

# 7. start training
trainer.train('resources/taggers/pos-slovene',
              train_with_dev=True,
              max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
jonfd/electra-small-igc-is | jonfd | 2022-01-05T14:56:02Z | 47 | 0 | transformers | ["transformers", "pytorch", "electra", "pretraining", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language:
- is
license: cc-by-4.0
datasets:
- igc
---
# Icelandic ELECTRA-Small
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
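The card does not include a usage snippet; below is a minimal loading sketch, assuming the checkpoint works with the standard ELECTRA classes in the Transformers library (the example sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("jonfd/electra-small-igc-is")
model = AutoModelForPreTraining.from_pretrained("jonfd/electra-small-igc-is")

# run the discriminator head (ELECTRA's replaced-token-detection pretraining task)
inputs = tokenizer("Reykjavík er höfuðborg Íslands.", return_tensors="pt")
outputs = model(**inputs)
```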
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
Icelandic-lt/electra-small-igc-is | Icelandic-lt | 2022-01-05T14:56:02Z | 5 | 0 | transformers | ["transformers", "pytorch", "electra", "pretraining", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | null | 2024-05-27T11:38:14Z |
---
language:
- is
license: cc-by-4.0
datasets:
- igc
---
# Icelandic ELECTRA-Small
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
Icelandic-lt/electra-base-igc-is | Icelandic-lt | 2022-01-05T14:54:23Z | 4 | 0 | transformers | ["transformers", "pytorch", "electra", "pretraining", "is", "dataset:igc", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | null | 2024-05-27T13:01:43Z |
---
language:
- is
license: cc-by-4.0
datasets:
- igc
---
# Icelandic ELECTRA-Base
This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105.
# Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
kurone/cp_tags_prediction | kurone | 2022-01-05T13:32:49Z | 0 | 0 | null | ["region:us"] | null | 2022-03-02T23:29:05Z |
This model predicts which categories a given competitive programming problem falls into.
|
MingZhong/DialogLED-base-16384 | MingZhong | 2022-01-05T09:15:06Z | 80 | 6 | transformers | ["transformers", "pytorch", "led", "text2text-generation", "arxiv:2109.02492", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
[DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492).
## Introduction
DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and is further trained on a large corpus of long dialogues with window-based denoising as the pre-training task. This is the base version of DialogLED; the input length was limited to 16,384 tokens during pre-training.
## Finetuning for Downstream Tasks
Please refer to [our GitHub page](https://github.com/microsoft/DialogLM).
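As a loading sketch (an assumption on my part that the checkpoint works with the standard LED classes in Transformers; the fine-tuning scripts on the GitHub page above are the authoritative reference):

```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("MingZhong/DialogLED-base-16384")
model = LEDForConditionalGeneration.from_pretrained("MingZhong/DialogLED-base-16384")

# summarize a (long) dialogue transcript; the input string here is a placeholder
inputs = tokenizer("#Person1#: Hello. How can I help you? ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```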
|
huggingtweets/sporeball | huggingtweets | 2022-01-05T08:02:01Z | 105 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sporeball/1641369716297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1365405536401776642/Z17NbuYy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lux</div>
<div style="text-align: center; font-size: 14px;">@sporeball</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lux.
| Data | lux |
| --- | --- |
| Tweets downloaded | 1150 |
| Retweets | 171 |
| Short tweets | 120 |
| Tweets kept | 859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w9y6gn1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sporeball's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2tg3n5a5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2tg3n5a5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/sporeball')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Prasadi/wav2vec2-base-timit-demo-colab-1 | Prasadi | 2022-01-05T06:18:01Z | 109 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab-1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3857
- Wer: 0.3874
## Model description
More information needed
## Intended uses & limitations
More information needed
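In the absence of author-provided usage notes, a minimal inference sketch (assuming a 16 kHz mono audio file; `sample.wav` is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Prasadi/wav2vec2-base-timit-demo-colab-1")
print(asr("sample.wav")["text"])
```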
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4285 | 2.01 | 500 | 1.4732 | 0.9905 |
| 0.7457 | 4.02 | 1000 | 0.5278 | 0.4960 |
| 0.3463 | 6.02 | 1500 | 0.4245 | 0.4155 |
| 0.2034 | 8.03 | 2000 | 0.3857 | 0.3874 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
abdelkader/distilbert-base-uncased-finetuned-emotion | abdelkader | 2022-01-04T23:18:05Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215604730468001
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
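Pending details from the author, a minimal usage sketch (the label set comes from the emotion dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="abdelkader/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```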
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8007 | 1.0 | 250 | 0.3082 | 0.907 | 0.9045 |
| 0.2438 | 2.0 | 500 | 0.2162 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
marioarteaga/distilbert-base-uncased-finetuned-squad | marioarteaga | 2022-01-04T20:26:53Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2052
## Model description
More information needed
## Intended uses & limitations
More information needed
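Pending details from the author, a minimal extractive question-answering sketch (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="marioarteaga/distilbert-base-uncased-finetuned-squad")
print(qa(question="What is the capital of France?", context="Paris is the capital of France."))
```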
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2493 | 1.0 | 5533 | 1.2052 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huawei-noah/JABER | huawei-noah | 2022-01-04T20:19:57Z | 1 | 0 | null | ["pytorch", "arxiv:2112.04329", "region:us"] | null | 2022-03-02T23:29:05Z |
# Overview
<p align="center">
<img src="https://avatars.githubusercontent.com/u/12619994?s=200&v=4" width="150">
</p>
<!-- -------------------------------------------------------------------------------- -->
JABER (Junior Arabic BERt) is a 12-layer pretrained Arabic language model.
JABER ranked first on the [ALUE leaderboard](https://www.alue.org/leaderboard) as of `01/09/2021`.
This model is **only compatible** with the code in [this GitHub repo](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/JABER-PyTorch); it is not supported by the [Transformers](https://github.com/huggingface/transformers) library.
## Citation
Please cite the following [paper](https://arxiv.org/abs/2112.04329) when using our code and model:
``` bibtex
@misc{ghaddar2021jaber,
title={JABER: Junior Arabic BERt},
author={Abbas Ghaddar and Yimeng Wu and Ahmad Rashid and Khalil Bibi and Mehdi Rezagholizadeh and Chao Xing and Yasheng Wang and Duan Xinyu and Zhefeng Wang and Baoxing Huai and Xin Jiang and Qun Liu and Philippe Langlais},
year={2021},
eprint={2112.04329},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Anamika/autonlp-fa-473312409 | Anamika | 2022-01-04T20:08:00Z | 5 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:Anamika/autonlp-data-fa", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Anamika/autonlp-data-fa
co2_eq_emissions: 25.128735714898614
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 473312409
- CO2 Emissions (in grams): 25.128735714898614
## Validation Metrics
- Loss: 0.6010786890983582
- Accuracy: 0.7990650945370823
- Macro F1: 0.7429662929144928
- Micro F1: 0.7990650945370823
- Weighted F1: 0.7977660363770382
- Macro Precision: 0.7744390888231261
- Micro Precision: 0.7990650945370823
- Weighted Precision: 0.800444194278352
- Macro Recall: 0.7198278524814119
- Micro Recall: 0.7990650945370823
- Weighted Recall: 0.7990650945370823
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anamika/autonlp-fa-473312409
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anamika/autonlp-fa-473312409", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
huggingtweets/funnyordie | huggingtweets | 2022-01-04T19:39:10Z | 104 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/894956741573525504/YFg6jiNP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Funny Or Die</div>
<div style="text-align: center; font-size: 14px;">@funnyordie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Funny Or Die.
| Data | Funny Or Die |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 237 |
| Short tweets | 190 |
| Tweets kept | 2823 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zjkuy05u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @funnyordie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jaeb619) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jaeb619/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/funnyordie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Khanh/distilbert-base-multilingual-cased-finetuned-viquad | Khanh | 2022-01-04T19:19:15Z | 106 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-finetuned-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-viquad
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 4.0975 |
| No log | 2.0 | 130 | 3.9315 |
| No log | 3.0 | 195 | 3.6742 |
| No log | 4.0 | 260 | 3.4878 |
| No log | 5.0 | 325 | 3.4241 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Khanh/bert-base-multilingual-cased-finetuned-viquad | Khanh | 2022-01-04T19:07:54Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-finetuned-viquad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-viquad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 65 | 2.5534 |
| No log | 2.0 | 130 | 2.1165 |
| No log | 3.0 | 195 | 1.9815 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v2 | ericRosello | 2022-01-04T18:06:41Z | 11 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2104
## Model description
Most base-model weights were frozen, leaving only the output layer (`qa_outputs`) and the last 3 encoder layers to fine-tune.
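A sketch of how such a freezing scheme might look in PyTorch (illustrative only, not the author's training script; the layer indices are assumptions):

```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

# freeze the whole DistilBERT encoder ...
for param in model.distilbert.parameters():
    param.requires_grad = False

# ... then unfreeze the last 3 transformer layers and the qa_outputs head
for layer in model.distilbert.transformer.layer[-3:]:
    for param in layer.parameters():
        param.requires_grad = True
for param in model.qa_outputs.parameters():
    param.requires_grad = True
```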
## Training and evaluation data
Achieved EM: 73.519394512772, F1: 82.71779517079237
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3937 | 1.0 | 5533 | 1.2915 |
| 1.1522 | 2.0 | 11066 | 1.2227 |
| 1.0055 | 3.0 | 16599 | 1.2104 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Khanh/xlm-roberta-base-finetuned-squad | Khanh | 2022-01-04T17:49:35Z | 105 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-squad
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7665 | 1.0 | 2295 | 0.5231 |
| 0.5236 | 2.0 | 4590 | 0.5539 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ericRosello/bert-base-uncased-finetuned-squad-frozen-v1 | ericRosello | 2022-01-04T17:03:12Z | 22 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0178
## Model description
Base-model weights were frozen, leaving only the output layer (`qa_outputs`) to fine-tune.
## Training and evaluation data
Achieved EM: 8.013245033112582, F1: 15.9706088498649
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.3602 | 1.0 | 5533 | 4.3460 |
| 4.0995 | 2.0 | 11066 | 4.0787 |
| 4.0302 | 3.0 | 16599 | 4.0178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nvidia/megatron-gpt2-345m | nvidia | 2022-01-04T15:19:18Z | 0 | 21 | null | ["arxiv:1909.08053", "region:us"] | null | 2022-03-02T23:29:05Z |
<!---
# ##############################################################################################
#
# Copyright (c) 2021-, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ##############################################################################################
-->
[Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained from a generative, left-to-right transformer in the style of GPT-2. This model was trained on text sourced from Wikipedia, RealNews, OpenWebText, and CC-Stories. It contains 345 million parameters.
Find more information at [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
# How to run Megatron GPT2 using Transformers
## Prerequisites
In this guide, we run all the commands from a folder called `$MYDIR`, defined as (in `bash`):
```
export MYDIR=$HOME
```
Feel free to change the location at your convenience.
To run some of the commands below, you'll have to clone `Transformers`.
```
git clone https://github.com/huggingface/transformers.git $MYDIR/transformers
```
## Get the checkpoints from the NVIDIA GPU Cloud
You must create a directory called `nvidia/megatron-gpt2-345m`:
```
mkdir -p $MYDIR/nvidia/megatron-gpt2-345m
```
You can download the checkpoints from the [NVIDIA GPU Cloud (NGC)](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m). To do so, you have to [sign up](https://ngc.nvidia.com/signup) for and set up the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the [NGC documentation](https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html#topic_6_4_1).
Alternatively, you can directly download the checkpoints using:
```
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip
```
## Converting the checkpoint
The checkpoint must be converted before it can be loaded by `Transformers`. Running the following command will create `config.json` and `pytorch_model.bin` in `$MYDIR/nvidia/megatron-gpt2-345m`; you can move those files to different directories if needed.
```
python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip
```
As explained in [PR #14956](https://github.com/huggingface/transformers/pull/14956), if you get the following exception when running this conversion script:
```
ModuleNotFoundError: No module named 'megatron.model.enums'
```
you need to tell Python where to find the clone of Megatron-LM, e.g.:
```
cd /tmp
git clone https://github.com/NVIDIA/Megatron-LM
PYTHONPATH=/tmp/Megatron-LM python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py ...
```
Or, if you already have it cloned elsewhere, simply adjust the path to the existing path.
If the training was done using a Megatron-LM fork, e.g. [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), then you may need to have that fork in your path instead, i.e. `/path/to/Megatron-DeepSpeed`.
## Text generation
The following code shows how to use the Megatron GPT2 checkpoint and the Transformers API to generate text.
```python
import os

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# The tokenizer. Megatron was trained with standard tokenizer(s).
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# The path to the config/checkpoint (see the conversion step above).
directory = os.path.join(os.environ['MYDIR'], 'nvidia/megatron-gpt2-345m')
# Load the model from $MYDIR/nvidia/megatron-gpt2-345m.
model = GPT2LMHeadModel.from_pretrained(directory)

# Copy to the device and use FP16.
assert torch.cuda.is_available()
device = torch.device("cuda")
model.to(device)
model.eval()
model.half()

# Generate the sentence.
output = model.generate(input_ids=None, max_length=32, num_return_sequences=1)

# Output the text.
for sentence in output:
    sentence = sentence.tolist()
    text = tokenizer.decode(sentence, clean_up_tokenization_spaces=True)
    print(text)
```
# To use this as a normal HuggingFace model
If you want to use this model with HF Trainer, here is a quick way to do that:
1. Download nvidia checkpoint:
```
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_lm_345m_v0.0.zip
```
2. Convert:
```
python src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_lm_345m_v0.0.zip
```
3. Fetch missing files
```
git clone https://huggingface.co/nvidia/megatron-gpt2-345m/
```
4. Move the converted files into the cloned model dir
```
mv config.json pytorch_model.bin megatron-gpt2-345m/
```
5. The `megatron-gpt2-345m` dir should now have all the files which can be passed to HF Trainer as `--model_name_or_path megatron-gpt2-345m`
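For example, loading the assembled directory then works like any local checkpoint (a sketch; it assumes the converted weights and the fetched tokenizer files now all live in `megatron-gpt2-345m`):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("megatron-gpt2-345m")
tokenizer = GPT2Tokenizer.from_pretrained("megatron-gpt2-345m")
```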
# Original code
The original Megatron code can be found here: [https://github.com/NVIDIA/Megatron-LM](https://github.com/NVIDIA/Megatron-LM).
|
scasutt/Prototype_training | scasutt | 2022-01-04T14:59:34Z | 13 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Prototype_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training
This model is a fine-tuned version of [scasutt/Prototype_training](https://huggingface.co/scasutt/Prototype_training) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3719
- Wer: 0.4626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3853 | 1.47 | 100 | 0.3719 | 0.4626 |
| 0.3867 | 2.94 | 200 | 0.3719 | 0.4626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
sshasnain/wav2vec2-xls-r-timit-trainer | sshasnain | 2022-01-04T14:49:41Z | 161 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-timit-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-timit-trainer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1064
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5537 | 4.03 | 500 | 0.6078 | 1.0 |
| 0.5444 | 8.06 | 1000 | 0.4990 | 0.9994 |
| 0.3744 | 12.1 | 1500 | 0.5530 | 1.0 |
| 0.2863 | 16.13 | 2000 | 0.6401 | 1.0 |
| 0.2357 | 20.16 | 2500 | 0.6485 | 1.0 |
| 0.1933 | 24.19 | 3000 | 0.7448 | 0.9994 |
| 0.162 | 28.22 | 3500 | 0.7502 | 1.0 |
| 0.1325 | 32.26 | 4000 | 0.7801 | 1.0 |
| 0.1169 | 36.29 | 4500 | 0.8334 | 1.0 |
| 0.1031 | 40.32 | 5000 | 0.8269 | 1.0 |
| 0.0913 | 44.35 | 5500 | 0.8432 | 1.0 |
| 0.0793 | 48.39 | 6000 | 0.8738 | 1.0 |
| 0.0694 | 52.42 | 6500 | 0.8897 | 1.0 |
| 0.0613 | 56.45 | 7000 | 0.8966 | 1.0 |
| 0.0548 | 60.48 | 7500 | 0.9398 | 1.0 |
| 0.0444 | 64.51 | 8000 | 0.9548 | 1.0 |
| 0.0386 | 68.55 | 8500 | 0.9647 | 1.0 |
| 0.0359 | 72.58 | 9000 | 0.9901 | 1.0 |
| 0.0299 | 76.61 | 9500 | 1.0151 | 1.0 |
| 0.0259 | 80.64 | 10000 | 1.0526 | 1.0 |
| 0.022 | 84.67 | 10500 | 1.0754 | 1.0 |
| 0.0189 | 88.71 | 11000 | 1.0688 | 1.0 |
| 0.0161 | 92.74 | 11500 | 1.0914 | 1.0 |
| 0.0138 | 96.77 | 12000 | 1.1064 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
federicopascual/finetuning-sentiment-model-3000-samples-testcopy | federicopascual | 2022-01-04T14:34:49Z | 6 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples-testcopy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8761904761904761
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-testcopy
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3374
- Accuracy: 0.87
- F1: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
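Pending details from the author, a minimal usage sketch (the review text is illustrative):

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="federicopascual/finetuning-sentiment-model-3000-samples-testcopy")
print(sentiment("I really enjoyed this movie!"))
```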
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Bhuvana/t5-base-spellchecker | Bhuvana | 2022-01-04T12:46:55Z | 192 | 13 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
widget:
- text: "christmas is celbrated on decembr 25 evry ear"
---
# Spell checker using T5 base transformer
A simple spell checker built on the T5-base transformer. To use this model:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Bhuvana/t5-base-spellchecker")
model = AutoModelForSeq2SeqLM.from_pretrained("Bhuvana/t5-base-spellchecker")

def correct(inputs):
    input_ids = tokenizer.encode(inputs, return_tensors='pt')
    sample_output = model.generate(
        input_ids,
        do_sample=True,
        max_length=50,
        top_p=0.99,
        num_return_sequences=1
    )
    res = tokenizer.decode(sample_output[0], skip_special_tokens=True)
    return res

text = "christmas is celbrated on decembr 25 evry ear"
print(correct(text))
```
This should print the corrected statement
```
christmas is celebrated on december 25 every year
```
You can also type the text under the Hosted inference API and get predictions online.
|
NikolajMunch/danish-emotion-classification | NikolajMunch | 2022-01-04T12:14:46Z | 28 | 6 | transformers | ["transformers", "pytorch", "bert", "text-classification", "sentiment", "emotion", "danish", "da", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
widget:
- text: "Hold da op! Kan det virkelig passe?"
language:
- "da"
tags:
- sentiment
- emotion
- danish
---
# **-- EMODa --**
## BERT model for Danish multi-class classification of emotions
Classifies a Danish sentence into one of six emotions:
| Danish emotion | Ekman's emotion |
| ----- | ----- |
| 😞 **Afsky** | Disgust |
| 😨 **Frygt** | Fear |
| 😄 **Glæde** | Joy |
| 😱 **Overraskelse** | Surprise |
| 😢 **Tristhed** | Sadness |
| 😠 **Vrede** | Anger |
# How to use
```python
from transformers import pipeline
model_path = "NikolajMunch/danish-emotion-classification"
classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
prediction = classifier("Jeg er godt nok ked af at mine SMS'er er slettet")
print(prediction)
# [{'label': 'Tristhed', 'score': 0.9725030660629272}]
```
or
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("NikolajMunch/danish-emotion-classification")
model = AutoModelForSequenceClassification.from_pretrained("NikolajMunch/danish-emotion-classification")
```
|
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v1 | ericRosello | 2022-01-04T12:14:41Z | 10 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3629
## Model description
Base-model weights were frozen, leaving only the output layer (`qa_outputs`) to fine-tune.
## Training and evaluation data
Achieved EM: 4.7776726584673606, F1: 11.440882287905591
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.679 | 1.0 | 5533 | 4.6713 |
| 4.4171 | 2.0 | 11066 | 4.4218 |
| 4.3464 | 3.0 | 16599 | 4.3629 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
junnyu/roformer_chinese_base | junnyu | 2022-01-04T11:46:28Z | 17 | 14 | paddlenlp | ["paddlenlp", "pytorch", "tf", "jax", "paddlepaddle", "roformer", "tf2.0", "zh", "arxiv:2104.09864", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch and TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer

text = "今天[MASK]很好,我想去公园玩!"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
        pt_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        pt_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM

text = "今天[MASK]很好,我想去公园玩!"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_base")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(
            tf.math.top_k(tf_outputs[i], k=5)[1])
        tf_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        tf_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[天气||天||阳光||太阳||空气]很好,我想去公园玩!
```
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
junnyu/roformer_chinese_char_base | junnyu | 2022-01-04T11:45:40Z | 5 | 0 | paddlenlp | ["paddlenlp", "pytorch", "tf", "jax", "paddlepaddle", "roformer", "tf2.0", "zh", "arxiv:2104.09864", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language: zh
tags:
- roformer
- pytorch
- tf2.0
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch and TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## PyTorch usage
```python
import torch
from transformers import RoFormerForMaskedLM, RoFormerTokenizer

text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
        pt_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        pt_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。
```
## TensorFlow 2.0 usage
```python
import tensorflow as tf
from transformers import RoFormerTokenizer, TFRoFormerForMaskedLM

text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = RoFormerTokenizer.from_pretrained("junnyu/roformer_chinese_char_base")
tf_model = TFRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_char_base")
tf_inputs = tokenizer(text, return_tensors="tf")
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf2.0: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(
            tf.math.top_k(tf_outputs[i], k=5)[1])
        tf_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        tf_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(tf_outputs_sentence)
# tf2.0: 今天[天||气||都||风||人]很好,我[想||要||就||也||还]去公园玩。
```
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
philschmid/gbert-base-germaner | philschmid | 2022-01-04T08:55:58Z | 9 | 3 | transformers | ["transformers", "tf", "tensorboard", "bert", "token-classification", "de", "dataset:germaner", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
language:
- de
license: mit
widget:
- text: |
Philipp ist 26 Jahre alt und lebt in Nürnberg, Deutschland. Derzeit arbeitet er als Machine Learning Engineer und Tech Lead bei Hugging Face, um künstliche Intelligenz durch Open Source und Open Science zu demokratisieren.
datasets:
- germaner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: gbert-base-germaner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: germaner
type: germaner
args: default
metrics:
- name: precision
type: precision
value: 0.8520523797532108
- name: recall
type: recall
value: 0.8754204398447607
- name: f1
type: f1
value: 0.8635783563042368
- name: accuracy
type: accuracy
value: 0.976147969774973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base-germaner
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the germaner dataset.
It achieves the following results on the evaluation set:
- precision: 0.8521
- recall: 0.8754
- f1: 0.8636
- accuracy: 0.9761
If you want to learn how to fine-tune BERT yourself using Keras and TensorFlow, check out this blog post:
https://www.philschmid.de/huggingface-transformers-keras-tf
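A minimal inference sketch (my assumption: the repo ships TensorFlow weights only, hence `framework="tf"`; the sentence mirrors the widget example):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="philschmid/gbert-base-germaner",
               aggregation_strategy="simple", framework="tf")
print(ner("Philipp lebt in Nürnberg und arbeitet bei Hugging Face."))
```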
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- num_train_epochs: 5
- train_batch_size: 16
- eval_batch_size: 32
- learning_rate: 2e-05
- weight_decay_rate: 0.01
- num_warmup_steps: 0
- fp16: True
### Framework versions
- Transformers 4.14.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
pierreguillou/bert-large-cased-pt-lenerbr | pierreguillou | 2022-01-04T08:52:43Z | 57 | 6 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "pt", "dataset:pierreguillou/lener_br_finetuning_language_model", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language:
- pt
tags:
- generated_from_trainer
datasets:
- pierreguillou/lener_br_finetuning_language_model
model-index:
- name: checkpoints
results:
- task:
name: Fill Mask
type: fill-mask
dataset:
name: pierreguillou/lener_br_finetuning_language_model
type: pierreguillou/lener_br_finetuning_language_model
metrics:
- name: Loss
type: loss
value: 1.127950
widget:
- text: "Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função legislativa – passaria a desempenhar atribuição que lhe é institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, competência que não lhe pertence, com evidente transgressão ao princípio constitucional da separação de poderes."
---
## (BERT large) Language modeling in the legal domain in Portuguese (LeNER-Br)
**bert-large-cased-pt-lenerbr** is a language model for the Portuguese legal domain, fine-tuned on 20/12/2021 in Google Colab from [BERTimbau large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) using a masked language modeling (MLM) objective.
You can also check the [base version of this model](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr).
## Widget & APP
You can test this model into the widget of this page.
## Blog post
This language model is used to get a NER model on the Portuguese judicial domain. You can check the fine-tuned NER model at [pierreguillou/ner-bert-large-cased-pt-lenerbr](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr).
All informations and links are in this blog post: [NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Using the model for inference in production
```python
# install pytorch: check https://pytorch.org/
# !pip install transformers
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("pierreguillou/bert-large-cased-pt-lenerbr")
model = AutoModelForMaskedLM.from_pretrained("pierreguillou/bert-large-cased-pt-lenerbr")
```
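Beyond loading the checkpoint, the masked-token prediction itself can be run with the standard `fill-mask` pipeline. This is a minimal sketch (not part of the original card), reusing the `model` and `tokenizer` loaded above; the example sentence is shortened from the widget text.
```python
from transformers import pipeline

# minimal sketch: masked-language-model inference with the fill-mask pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

text = ("Com efeito, se tal fosse possível, o Poder [MASK] – que não dispõe de função legislativa – "
        "passaria a desempenhar atribuição que lhe é institucionalmente estranha.")
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 4))
```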
## Training procedure
## Notebook
The notebook of finetuning ([Finetuning_language_model_BERtimbau_LeNER_Br.ipynb](https://github.com/piegu/language-models/blob/master/Finetuning_language_model_BERtimbau_LeNER_Br.ipynb)) is in github.
### Training results
````
Num examples = 3227
Num Epochs = 5
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 4
Total optimization steps = 2015
Step Training Loss Validation Loss
100 1.616700 1.366015
200 1.452000 1.312473
300 1.431100 1.253055
400 1.407500 1.264705
500 1.301900 1.243277
600 1.317800 1.233684
700 1.319100 1.211826
800 1.303800 1.190818
900 1.262800 1.171898
1000 1.235900 1.146275
1100 1.221900 1.149027
1200 1.226200 1.127950
1300 1.201700 1.172729
1400 1.198200 1.145363
````
|
addy88/programming-lang-identifier
|
addy88
| 2022-01-04T04:22:07Z | 8 | 6 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This model is a fine-tuned version of CodeBERT (RoBERTa architecture), trained on CodeSearchNet to identify the programming language of a code snippet.
### Quick start
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("addy88/programming-lang-identifier")
model = AutoModelForSequenceClassification.from_pretrained("addy88/programming-lang-identifier")

# CODE_TO_IDENTIFY is the source-code string you want to classify
inputs = tokenizer(CODE_TO_IDENTIFY, return_tensors="pt", truncation=True)
logits = model(**inputs).logits
language_idx = logits.argmax(-1).item()  # index of the predicted language label
```
|
hogger32/distilbert-base-uncased-finetuned-squad
|
hogger32
| 2022-01-03T15:39:48Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7004
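A hedged usage sketch (not from the original card): extractive question answering with the standard pipeline. The question/context pair is purely illustrative.
```python
from transformers import pipeline

# minimal sketch: extractive QA with the fine-tuned checkpoint
qa = pipeline("question-answering", model="hogger32/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```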
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.316 | 1.0 | 2363 | 2.0234 |
| 2.0437 | 2.0 | 4726 | 1.7881 |
| 1.9058 | 3.0 | 7089 | 1.7004 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
rexxar96/autonlp-roberta-large-finetuned-467612250
|
rexxar96
| 2022-01-03T14:24:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:rexxar96/autonlp-data-roberta-large-finetuned",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- rexxar96/autonlp-data-roberta-large-finetuned
co2_eq_emissions: 73.72876780772296
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 467612250
- CO2 Emissions (in grams): 73.72876780772296
## Validation Metrics
- Loss: 0.18261319398880005
- Accuracy: 0.9541659567217584
- Precision: 0.9530625832223701
- Recall: 0.9572049481778669
- AUC: 0.9901737875196123
- F1: 0.9551292743953294
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rexxar96/autonlp-roberta-large-finetuned-467612250
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rexxar96/autonlp-roberta-large-finetuned-467612250", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rexxar96/autonlp-roberta-large-finetuned-467612250", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
ronanki/xlmr_02-02-2022
|
ronanki
| 2022-01-03T13:48:37Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/xlmr_02-02-2022
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/xlmr_02-02-2022')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_02-02-2022')
model = AutoModel.from_pretrained('ronanki/xlmr_02-02-2022')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_02-02-2022)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 160 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
impyadav/GPT2-FineTuned-Hinglish-Song-Generation
|
impyadav
| 2022-01-03T11:33:54Z | 51 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
GPT-2 model fine-tuned on a custom corpus of old Hindi songs (Hinglish) for the text-generation task (AI lyricist).
Languages:
- Hindi
- Hinglish
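A minimal usage sketch (assuming the standard `text-generation` pipeline; the prompt and sampling settings are illustrative, not from the original card):
```python
from transformers import pipeline

# minimal sketch: generate Hinglish lyrics with the fine-tuned GPT-2 checkpoint
generator = pipeline("text-generation", model="impyadav/GPT2-FineTuned-Hinglish-Song-Generation")

# "tere bina" is an illustrative Hinglish prompt
print(generator("tere bina", max_length=50, do_sample=True, num_return_sequences=3))
```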
|
hiiamsid/sentence_similarity_hindi
|
hiiamsid
| 2022-01-03T11:25:33Z | 236 | 6 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"hi",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
language:
- hi
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hiiamsid/sentence_similarity_hindi
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hiiamsid/sentence_similarity_hindi')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hiiamsid/sentence_similarity_hindi')
model = AutoModel.from_pretrained('hiiamsid/sentence_similarity_hindi')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
| cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.825825032 | 0.8227195932 | 0.8127990959 | 0.8214681478 | 0.8111641963 | 0.8194870279 | 0.8096042841 | 0.8061808483 |
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hiiamsid/sentence_similarity_hindi)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 341 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 137,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Model: [setu4993/LaBSE](https://huggingface.co/setu4993/LaBSE)
- Sentence Transformers: [Semantic Textual Similarity](https://www.sbert.net/examples/training/sts/README.html)
|
pratinavseth/biobert_squad2_cased-finetuned-squad
|
pratinavseth
| 2022-01-03T08:56:44Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: biobert_squad2_cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_squad2_cased-finetuned-squad
This model is a fine-tuned version of [clagator/biobert_squad2_cased](https://huggingface.co/clagator/biobert_squad2_cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/chheplo
|
huggingtweets
| 2022-01-03T05:23:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/chheplo/1641187409438/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477561163961438208/7HnhxOo__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratik Desai</div>
<div style="text-align: center; font-size: 14px;">@chheplo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratik Desai.
| Data | Pratik Desai |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 362 |
| Short tweets | 139 |
| Tweets kept | 2747 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/4tv1dtfa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chheplo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/p7d97s36) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/p7d97s36/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chheplo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pinecone/mpnet-retriever-squad2
|
pinecone
| 2022-01-03T02:42:15Z | 6 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pinecone/mpnet-retriever-squad2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pinecone/mpnet-retriever-squad2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pinecone/mpnet-retriever-squad2')
model = AutoModel.from_pretrained('pinecone/mpnet-retriever-squad2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pinecone/mpnet-retriever-squad2)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 5429 with parameters:
```
{'batch_size': 24}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 542,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
durgaamma2005/indic-transformers-te-distilbert
|
durgaamma2005
| 2022-01-02T17:56:41Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: indic-transformers-te-distilbert
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: te
metrics:
- name: Precision
type: precision
value: 0.5657225853304285
- name: Recall
type: recall
value: 0.6486261448792673
- name: F1
type: f1
value: 0.604344453064391
- name: Accuracy
type: accuracy
value: 0.9049186160277506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indic-transformers-te-distilbert
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2940
- Precision: 0.5657
- Recall: 0.6486
- F1: 0.6043
- Accuracy: 0.9049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 125 | 0.3629 | 0.4855 | 0.5287 | 0.5062 | 0.8826 |
| No log | 2.0 | 250 | 0.3032 | 0.5446 | 0.6303 | 0.5843 | 0.9002 |
| No log | 3.0 | 375 | 0.2940 | 0.5657 | 0.6486 | 0.6043 | 0.9049 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
juierror/wav2vec2-large-xls-r-thai-test
|
juierror
| 2022-01-02T14:18:08Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-thai-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
stefan-jo/bert-finetuned-ner
|
stefan-jo
| 2022-01-02T13:21:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9378727634194831
- name: Recall
type: recall
value: 0.9527095254123191
- name: F1
type: f1
value: 0.9452329270328937
- name: Accuracy
type: accuracy
value: 0.9866515570730559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9379
- Recall: 0.9527
- F1: 0.9452
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.088 | 1.0 | 1756 | 0.0625 | 0.9203 | 0.9399 | 0.9300 | 0.9835 |
| 0.0383 | 2.0 | 3512 | 0.0614 | 0.9348 | 0.9460 | 0.9404 | 0.9858 |
| 0.0209 | 3.0 | 5268 | 0.0619 | 0.9379 | 0.9527 | 0.9452 | 0.9867 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
addy88/perceiver_image_classifier
|
addy88
| 2022-01-02T13:05:37Z | 82 | 3 |
transformers
|
[
"transformers",
"pytorch",
"perceiver",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned
import requests
from PIL import Image
feature_extractor = PerceiverFeatureExtractor.from_pretrained("addy88/perceiver_image_classifier")
model = PerceiverForImageClassificationLearned.from_pretrained("addy88/perceiver_image_classifier")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
encoding = feature_extractor(image, return_tensors="pt")
inputs = encoding.pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# should print: Predicted class: tabby, tabby cat
```
|
AlekseyKulnevich/Pegasus-QuestionGeneration
|
AlekseyKulnevich
| 2022-01-02T12:24:37Z | 29 | 1 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
**Usage with HuggingFace Transformers for the question generation task**
```python
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-QuestionGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text
input_ = tokenizer.batch_encode_plus([input_text], max_length=1024, truncation=True,
                                     padding='longest', return_tensors='pt')
input_ids = input_['input_ids']
input_mask = input_['attention_mask']
questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           num_beams=32,
                           no_repeat_ngram_size=2,
                           early_stopping=True,
                           num_return_sequences=10)
questions = tokenizer.batch_decode(questions, skip_special_tokens=True)
```
**Decoder configuration examples:**
[**Input text you can see here**](https://www.bbc.com/news/science-environment-59775105)
```
questions = model.generate(input_ids=input_ids,
attention_mask=input_mask,
num_beams=32,
no_repeat_ngram_size=2,
early_stopping=True,
num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```
output:
1. *What is the impact of human induced climate change on tropical cyclones?*
2. *What is the impact of climate change on tropical cyclones?*
3. *What is the impact of human induced climate change on tropical cyclone formation?*
4. *How many tropical cyclones will occur in the mid-latitudes?*
5. *What is the impact of climate change on the formation of tropical cyclones?*
6. *Is it possible for a tropical cyclone to form in the middle latitudes?*
7. *How many tropical cyclones will be formed in the mid-latitudes?*
8. *How many tropical cyclones will there be in the mid-latitudes?*
9. *How many tropical cyclones will form in the mid-latitudes?*
10. *What is the impact of global warming on tropical cyclones?*
11. *How long does it take for a tropical cyclone to form?*
12. *What are the impacts of climate change on tropical cyclones?*
13. *What are the effects of climate change on tropical cyclones?*
14. *How many tropical cyclones will be able to form in the middle latitudes?*
15. *What is the impact of climate change on tropical cyclone formation?*
16. *What is the effect of climate change on tropical cyclones?*
17. *How long does it take for a tropical cyclone to form in the middle latitude?*
18. *How many tropical cyclones will occur in the middle latitudes?*
19. *How many tropical cyclones are likely to form in the midlatitudes?*
20. *How many tropical cyclones are likely to form in the middle latitudes?*
21. *How many tropical cyclones are expected to form in the midlatitudes?*
22. *How many tropical cyclones will be formed in the middle latitudes?*
23. *How many tropical cyclones will there be in the middle latitudes?*
24. *How long will it take for a tropical cyclone to form in the middle latitude?*
25. *What is the impact of global warming on tropical cyclone formation?*
26. *How many tropical cyclones will form in the middle latitudes?*
27. *How many tropical cyclones can we expect to form in the middle latitudes?*
28. *Is it possible for a tropical cyclone to form in the middle latitude?*
29. *What is the effect of climate change on tropical cyclone formation?*
30. *What are the effects of climate change on tropical cyclone formation?*
You can also experiment with the following parameters of the `generate` method (see the sketch below):
- top_k
- top_p
[**The meaning of these text-generation parameters is explained here**](https://huggingface.co/blog/how-to-generate)
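For instance, a sampling-based configuration might look like this (a sketch only, continuing the variables from the example above; the `top_k` and `top_p` values are illustrative, not recommendations from the author):
```python
# sketch: nucleus/top-k sampling instead of beam search (values are illustrative)
questions = model.generate(input_ids=input_ids,
                           attention_mask=input_mask,
                           do_sample=True,
                           top_k=50,
                           top_p=0.95,
                           no_repeat_ngram_size=2,
                           num_return_sequences=10)
tokenizer.batch_decode(questions, skip_special_tokens=True)
```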
|
LeoFeng/ChineseSequenceClassification
|
LeoFeng
| 2022-01-02T09:13:10Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
A Chinese article classifier trained on the THUC dataset; it supports 14 categories.
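A hedged usage sketch (not part of the original card; the label names returned depend on the model's config, and the headline is purely illustrative):
```python
from transformers import pipeline

# minimal sketch: classify a Chinese article into one of the 14 THUC categories
classifier = pipeline("text-classification", model="LeoFeng/ChineseSequenceClassification")

print(classifier("央行今日宣布下调存款准备金率。"))  # illustrative headline
```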
|
addy88/gpt-j-8bit
|
addy88
| 2022-01-02T06:34:27Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"arxiv:2106.09685",
"arxiv:2110.02861",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
This model is an 8-bit version of EleutherAI/gpt-j-6B, converted with Facebook's bitsandbytes library. The original GPT-J takes 22+ GB of memory for float32 parameters alone, before accounting for gradients and optimizer state, so the weights were converted to 8-bit to make fine-tuning on a single GPU feasible.
Here's how to run it: [](https://colab.research.google.com/drive/1KNf5siQdM7ILQM-pHsP6gNVPKl1SJdU1)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or on CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpointing to store only one activation per layer: dramatically less memory at the cost of about 30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).
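As an illustration of the adapter idea (a plain-PyTorch sketch, not the actual code used for this checkpoint): the base weight stays frozen — 8-bit in the real setup — and only a small low-rank correction is trained.
```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Sketch of a low-rank adapter on top of a frozen linear layer (illustrative only)."""
    def __init__(self, frozen_linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad_(False)                 # base weights stay frozen (quantized in the real setup)
        self.lora_a = nn.Linear(frozen_linear.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, frozen_linear.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)          # adapter starts as a zero perturbation

    def forward(self, x):
        # frozen path plus trainable low-rank path
        return self.frozen(x) + self.lora_b(self.lora_a(x))
```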

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://colab.research.google.com/drive/1FxGeYQyE7cx9VNCBC4gUyRVZGORW7c6g) and it is nigh indistinguishable from the original GPT-J. Quantized model is even slightly better, but that is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger batch size you can fit, the more efficient you will train.
### Can I use this technique with other models?
The model was converted using [this notebook](https://colab.research.google.com/drive/1rwxh0XRdVi8VEbTx97l9xXr4JbRhZaq5#scrollTo=CX3VHn-J1Zer). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
|
addy88/t5-argument-anlyser
|
addy88
| 2022-01-02T06:32:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
Pretraining Dataset: debatelab/aaac
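A hedged usage sketch (the card gives no official example and the input format expected by the checkpoint is not documented, so the text below is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# minimal sketch: generic seq2seq generation with the checkpoint
tokenizer = AutoTokenizer.from_pretrained("addy88/t5-argument-anlyser")
model = AutoModelForSeq2SeqLM.from_pretrained("addy88/t5-argument-anlyser")

inputs = tokenizer("All humans are mortal. Socrates is a human.", return_tensors="pt")  # illustrative input
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```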
|
huggingtweets/michaeldrummey-theegaycomrade-vpukhanov
|
huggingtweets
| 2022-01-01T19:30:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/michaeldrummey-theegaycomrade-vpukhanov/1641065423081/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1413939279127011331/dVGeqlNN_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468996975404228610/Etj-urSz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471632802894389249/2ubdnotf_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Vyacheslav Pukhanov & Michael Drummey & oh no zach had a thought</div>
<div style="text-align: center; font-size: 14px;">@michaeldrummey-theegaycomrade-vpukhanov</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Vyacheslav Pukhanov & Michael Drummey & oh no zach had a thought.
| Data | Vyacheslav Pukhanov | Michael Drummey | oh no zach had a thought |
| --- | --- | --- | --- |
| Tweets downloaded | 308 | 3246 | 3248 |
| Retweets | 50 | 231 | 55 |
| Short tweets | 63 | 1133 | 640 |
| Tweets kept | 195 | 1882 | 2553 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1udeu111/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @michaeldrummey-theegaycomrade-vpukhanov's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3h79hg6v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3h79hg6v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/michaeldrummey-theegaycomrade-vpukhanov')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
s3h/arabert-gec-v2-2
|
s3h
| 2022-01-01T18:50:19Z | 3 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: s3h/arabic-t5-small-finetuned-gec
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s3h/arabic-t5-small-finetuned-gec
This model is a fine-tuned version of [flax-community/arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0930
- Validation Loss: 0.9132
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0930 | 0.9132 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
s3h/arabic-t5-small-finetuned-gec
|
s3h
| 2022-01-01T18:36:08Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: s3h/arabic-t5-small-finetuned-gec
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s3h/arabic-t5-small-finetuned-gec
This model is a fine-tuned version of [flax-community/arabic-t5-small](https://huggingface.co/flax-community/arabic-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0930
- Validation Loss: 0.9132
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 573, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0930 | 0.9132 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
imthanhlv/gpt2news
|
imthanhlv
| 2022-01-01T18:14:53Z | 203 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"gpt",
"vi",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: vi
tags:
- gpt
widget:
- text: "Hôm qua những nhà khoa học Mỹ đã phát hiện ra loài cá lợn"
---
### GPT 2 News
**Update 02 Jan 2022**: Fixed a mismatch between the tokenizer and model.wte sizes.
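A minimal usage sketch (assuming the standard text-generation pipeline; the prompt is the widget example above):
```python
from transformers import pipeline

# minimal sketch: Vietnamese news-style text generation
generator = pipeline("text-generation", model="imthanhlv/gpt2news")

print(generator("Hôm qua những nhà khoa học Mỹ đã phát hiện ra loài cá lợn", max_length=60))
```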
### BibTex
```
@article{thanh21gpt2news,
author = {Thanh V. Le},
title = {Pretrained GPT-2 on Vietnamese news},
journal = {https://huggingface.co/imthanhlv/gpt2news},
year = {2021},
}
```
|
mattchurgin/distilbert-sst2
|
mattchurgin
| 2021-12-31T23:08:41Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4182
- eval_accuracy: 0.8911
- eval_runtime: 1.8021
- eval_samples_per_second: 483.882
- eval_steps_per_second: 60.485
- epoch: 0.8
- step: 6700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nwl/DialoGPT-small-enhypen
|
nwl
| 2021-12-31T13:38:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
|
airKlizz/mt5-base-wikinewssum-english-100
|
airKlizz
| 2021-12-31T12:02:27Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english-100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-100
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6225
- Rouge1: 3.909
- Rouge2: 0.9312
- Rougel: 3.3835
- Rougelsum: 3.7786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 0.96 | 12 | 14.4949 | 2.7398 | 0.7181 | 2.491 | 2.6561 |
| No log | 1.96 | 24 | 10.5056 | 4.4428 | 1.4293 | 3.8469 | 4.2869 |
| No log | 2.96 | 36 | 8.9856 | 4.1179 | 1.229 | 3.5726 | 3.9693 |
| No log | 3.96 | 48 | 7.7950 | 3.9217 | 1.1339 | 3.4256 | 3.7905 |
| No log | 4.96 | 60 | 7.0734 | 3.8004 | 1.0326 | 3.3246 | 3.6766 |
| No log | 5.96 | 72 | 6.7897 | 3.6351 | 0.9162 | 3.1839 | 3.5149 |
| No log | 6.96 | 84 | 6.6610 | 3.7486 | 0.8829 | 3.2583 | 3.6193 |
| No log | 7.96 | 96 | 6.6225 | 3.909 | 0.9312 | 3.3835 | 3.7786 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Muennighoff/SBERT-base-nli-stsb-v2
|
Muennighoff
| 2021-12-31T07:59:14Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This model is used in "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning".
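A minimal usage sketch, assuming the checkpoint loads directly with sentence-transformers (the card itself gives no example; the sentence pair is illustrative):
```python
from sentence_transformers import SentenceTransformer, util

# minimal sketch: encode sentences and compare them with cosine similarity
model = SentenceTransformer("Muennighoff/SBERT-base-nli-stsb-v2")
embeddings = model.encode(["A man is eating food.", "A man is eating a piece of bread."])

print(util.cos_sim(embeddings[0], embeddings[1]))
```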
|
NahedAbdelgaber/distilbert-base-uncased-finetuned-evaluating-student-writing
|
NahedAbdelgaber
| 2021-12-31T06:28:07Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-evaluating-student-writing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-evaluating-student-writing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3485 | 1.0 | 878 | 2.0959 |
| 2.1407 | 2.0 | 1756 | 2.0162 |
| 2.0843 | 3.0 | 2634 | 1.9846 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/hey_ash21
|
huggingtweets
| 2021-12-31T04:19:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/hey_ash21/1640924344980/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1364393973331021830/i7JjvUhX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ash 🫀</div>
<div style="text-align: center; font-size: 14px;">@hey_ash21</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ash 🫀.
| Data | ash 🫀 |
| --- | --- |
| Tweets downloaded | 3242 |
| Retweets | 193 |
| Short tweets | 132 |
| Tweets kept | 2917 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tujmcza/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hey_ash21's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pwdhn6q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pwdhn6q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hey_ash21')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TrLOX/gpt2-tdk
|
TrLOX
| 2021-12-31T02:18:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dgpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dgpt
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.14.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
federicopascual/finetune-sentiment-analysis-model-3000-samples
|
federicopascual
| 2021-12-30T19:29:48Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetune-sentiment-analysis-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8866666666666667
- name: F1
type: f1
value: 0.8944099378881988
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-sentiment-analysis-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4558
- Accuracy: 0.8867
- F1: 0.8944
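A minimal usage sketch (not part of the original card), assuming the checkpoint loads with the standard text-classification pipeline from 🤗 Transformers:
```python
from transformers import pipeline

# The pipeline returns the predicted label and a confidence score for each input.
classifier = pipeline(
    "sentiment-analysis",
    model="federicopascual/finetune-sentiment-analysis-model-3000-samples",
)
print(classifier("I absolutely loved this movie, the acting was superb."))
```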
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lgris/sew-tiny-pt
|
lgris
| 2021-12-30T17:37:50Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"sew",
"feature-extraction",
"speech",
"pt",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: pt
tags:
- speech
license: apache-2.0
---
# SEW-tiny-pt
This is a pretrained version of [SEW tiny by ASAPP Research](https://github.com/asappresearch/sew) trained on Brazilian Portuguese audio.
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, such as Automatic Speech Recognition, Speaker Identification, Intent Classification, or Emotion Recognition.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
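As a rough sketch of that substitution (an assumption, not from the original card: it presumes the repository ships a standard feature-extractor config and that you have already built a CTC tokenizer/vocabulary as described in the blog post), the encoder can be loaded like this:
```python
from transformers import SEWForCTC, Wav2Vec2FeatureExtractor

# Load the pretrained Brazilian Portuguese SEW encoder for CTC fine-tuning.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("lgris/sew-tiny-pt")
model = SEWForCTC.from_pretrained(
    "lgris/sew-tiny-pt",
    ctc_loss_reduction="mean",
    pad_token_id=0,   # placeholder: use your tokenizer's pad token id
    vocab_size=32,    # placeholder: must match the vocabulary of the tokenizer you create
)
```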
|
ysslang/autonlp-test-459011902
|
ysslang
| 2021-12-30T17:05:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"zh",
"dataset:ysslang/autonlp-data-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: zh
widget:
- text: "I love AutoNLP 🤗"
datasets:
- ysslang/autonlp-data-test
co2_eq_emissions: 10.9230691350863
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 459011902
- CO2 Emissions (in grams): 10.9230691350863
## Validation Metrics
- Loss: 0.7189690470695496
- Accuracy: 0.7453263867606497
- Macro F1: 0.630810193227066
- Micro F1: 0.7453263867606497
- Weighted F1: 0.7399327942874923
- Macro Precision: 0.656237447101913
- Micro Precision: 0.7453263867606497
- Weighted Precision: 0.7410161412822164
- Macro Recall: 0.6340140718425453
- Micro Recall: 0.7453263867606497
- Weighted Recall: 0.7453263867606497
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ysslang/autonlp-test-459011902
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ysslang/autonlp-test-459011902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ysslang/autonlp-test-459011902", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
scasutt/Prototype_training_large_model
|
scasutt
| 2021-12-30T14:40:39Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Prototype_training_large_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training_large_model
This model is a fine-tuned version of [scasutt/Prototype_training_large_model](https://huggingface.co/scasutt/Prototype_training_large_model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2585
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.0545 | 1.47 | 100 | 3.2604 | 1.0 |
| 3.0413 | 2.93 | 200 | 3.2585 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
pinecone/bert-rte-cross-encoder
|
pinecone
| 2021-12-30T12:12:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# RTE Cross Encoder
Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
|
pinecone/bert-mrpc-cross-encoder
|
pinecone
| 2021-12-30T12:12:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# MRPC Cross Encoder
Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
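A hedged usage sketch (not part of the original card), assuming the checkpoint can be wrapped with the `CrossEncoder` class from `sentence-transformers` as done in the course:
```python
from sentence_transformers import CrossEncoder

# Score candidate sentence pairs; the output shape depends on the classification head,
# but higher paraphrase scores are expected for the first pair below.
model = CrossEncoder("pinecone/bert-mrpc-cross-encoder")
scores = model.predict([
    ("The cat sat on the mat.", "A cat was sitting on the mat."),
    ("The cat sat on the mat.", "Stock markets fell sharply on Monday."),
])
print(scores)
```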
|
pinecone/bert-medqp-cross-encoder
|
pinecone
| 2021-12-30T12:11:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Med-QP Cross Encoder
Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
|
pinecone/bert-stsb-cross-encoder
|
pinecone
| 2021-12-30T12:11:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# STSb Cross Encoder
Demo model for use as part of Augmented SBERT chapters of the [NLP for Semantic Search course](https://www.pinecone.io/learn/nlp).
|
NahedAbdelgaber/distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
|
NahedAbdelgaber
| 2021-12-30T06:58:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3408
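A minimal sketch (an assumption, not from the original card) of querying the fine-tuned masked-language model with the fill-mask pipeline:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="NahedAbdelgaber/distilbert-base-uncased-finetuned-down-sampled-evaluating-student-writing",
)
# DistilBERT uses the [MASK] token; the pipeline returns the top candidate fills with scores.
for prediction in fill_mask("The student wrote a [MASK] essay about climate change."):
    print(prediction["token_str"], prediction["score"])
```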
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5869 | 1.0 | 157 | 2.3949 |
| 2.4142 | 2.0 | 314 | 2.3551 |
| 2.3792 | 3.0 | 471 | 2.2840 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
youngjae/bert-finetuned-squad
|
youngjae
| 2021-12-30T04:13:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
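A usage sketch (an assumption, since the card itself gives no example), using the standard question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="youngjae/bert-finetuned-squad")
# Hypothetical question/context pair; the pipeline extracts the answer span from the context.
result = qa(
    question="Where was the agreement signed?",
    context="The agreement was signed in Paris in 1951 by six founding countries.",
)
print(result["answer"], result["score"])
```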
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
rkmt/wav2vec2-base-timit-demo-colab
|
rkmt
| 2021-12-30T00:39:31Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/hubert-large-ls960-ft](https://huggingface.co/facebook/hubert-large-ls960-ft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0280
- Wer: 0.0082
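A hedged transcription sketch (not part of the original card), assuming the checkpoint works with the automatic-speech-recognition pipeline and that ffmpeg is available to decode the audio file:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rkmt/wav2vec2-base-timit-demo-colab")
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```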
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1152 | 1.42 | 500 | 0.0416 | 0.0159 |
| 0.0803 | 2.83 | 1000 | 0.0372 | 0.0144 |
| 0.0672 | 4.25 | 1500 | 0.0345 | 0.0119 |
| 0.0564 | 5.67 | 2000 | 0.0338 | 0.0106 |
| 0.0513 | 7.08 | 2500 | 0.0307 | 0.0100 |
| 0.0448 | 8.5 | 3000 | 0.0343 | 0.0098 |
| 0.0374 | 9.92 | 3500 | 0.0300 | 0.0084 |
| 0.0368 | 11.33 | 4000 | 0.0314 | 0.0086 |
| 0.0388 | 12.75 | 4500 | 0.0283 | 0.0089 |
| 0.0277 | 14.16 | 5000 | 0.0302 | 0.0089 |
| 0.0298 | 15.58 | 5500 | 0.0298 | 0.0089 |
| 0.0271 | 17.0 | 6000 | 0.0320 | 0.0098 |
| 0.024 | 18.41 | 6500 | 0.0286 | 0.0088 |
| 0.0236 | 19.83 | 7000 | 0.0284 | 0.0084 |
| 0.0238 | 21.25 | 7500 | 0.0290 | 0.0086 |
| 0.0227 | 22.66 | 8000 | 0.0284 | 0.0093 |
| 0.0198 | 24.08 | 8500 | 0.0280 | 0.0088 |
| 0.0225 | 25.5 | 9000 | 0.0281 | 0.0086 |
| 0.018 | 26.91 | 9500 | 0.0280 | 0.0082 |
| 0.0178 | 28.33 | 10000 | 0.0280 | 0.0082 |
| 0.0209 | 29.75 | 10500 | 0.0280 | 0.0082 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lgris/distilxlsr_bp_4-12
|
lgris
| 2021-12-30T00:38:04Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"speech",
"pt",
"arxiv:2110.01900",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: pt
tags:
- speech
license: apache-2.0
---
# DistilXLSR-53 for BP
[DistilXLSR-53 for BP: DistilHuBERT applied to Wav2vec XLSR-53 for Brazilian Portuguese](https://github.com/s3prl/s3prl/tree/master/s3prl/upstream/distiller)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
Paper: [DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT](https://arxiv.org/abs/2110.01900)
Authors: Heng-Jui Chang, Shu-wen Yang, Hung-yi Lee
**Note 2**: The XLSR-53 model was distilled using [Brazilian Portuguese Datasets](https://huggingface.co/lgris/bp400-xlsr) for test purposes. The dataset is quite small for such a task, so the performance might not be as good as in the [original work](https://arxiv.org/abs/2110.01900).
**Abstract**
Self-supervised speech representation learning methods like wav2vec 2.0 and Hidden-unit BERT (HuBERT) leverage unlabeled speech data for pre-training and offer good representations for numerous speech processing tasks. Despite the success of these methods, they require large memory and high pre-training costs, making them inaccessible for researchers in academia and small companies. Therefore, this paper introduces DistilHuBERT, a novel multi-task learning framework to distill hidden representations from a HuBERT model directly. This method reduces HuBERT's size by 75% and makes it 73% faster while retaining most performance across ten different tasks. Moreover, DistilHuBERT required little training time and data, opening the possibilities of pre-training personal and on-device SSL models for speech.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model.
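For plain feature extraction (the model has no tokenizer), a minimal sketch along these lines should work, assuming the repository ships a standard wav2vec 2.0 feature-extractor config:
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("lgris/distilxlsr_bp_4-12")
model = Wav2Vec2Model.from_pretrained("lgris/distilxlsr_bp_4-12")

# One second of silence stands in for a real 16 kHz waveform.
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```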
|
SophieTr/distil-pegasus-reddit
|
SophieTr
| 2021-12-29T23:58:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
This is the model checkpoint saved so far, before the training job timed out.
|
danicodes/autonlp-legal-text-summary-457311749
|
danicodes
| 2021-12-29T22:18:48Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:danicodes/autonlp-data-legal-text-summary",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- danicodes/autonlp-data-legal-text-summary
co2_eq_emissions: 10.148805588432941
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 457311749
- CO2 Emissions (in grams): 10.148805588432941
## Validation Metrics
- Loss: 1.647747278213501
- Rouge1: 32.4854
- Rouge2: 19.8974
- RougeL: 30.0602
- RougeLsum: 29.9377
- Gen Len: 46.6556
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/danicodes/autonlp-legal-text-summary-457311749
```
|
tbochens/test-train
|
tbochens
| 2021-12-29T19:25:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: test-train
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.8926746166950595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-train
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7268
- Accuracy: 0.8456
- F1: 0.8927
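Since MRPC is a sentence-pair task, a minimal sketch (an assumption, not from the card) would encode both sentences together and read the paraphrase probability from the logits:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tbochens/test-train")
model = AutoModelForSequenceClassification.from_pretrained("tbochens/test-train")

# Hypothetical sentence pair; the tokenizer joins them with the model's separator token.
inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits hit an all-time high for the company.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # index 1 is conventionally the "equivalent" (paraphrase) class for MRPC
```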
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3470 | 0.8627 | 0.9014 |
| 0.4987 | 2.0 | 918 | 0.5782 | 0.8382 | 0.8914 |
| 0.2796 | 3.0 | 1377 | 0.7268 | 0.8456 | 0.8927 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
airKlizz/mt5-base-wikinewssum-english
|
airKlizz
| 2021-12-29T19:10:05Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3040
- Rouge1: 8.9565
- Rouge2: 3.6563
- Rougel: 7.1346
- Rougelsum: 8.3802
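A minimal usage sketch (an assumption, not from the card), using the summarization pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="airKlizz/mt5-base-wikinewssum-english")
# Placeholder input; replace with the full text of a news article to be summarized.
article = "Replace this placeholder with the full text of a news article."
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```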
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 1010 | 2.4360 | 8.7287 | 3.5817 | 7.0093 | 8.1879 |
| No log | 2.0 | 2020 | 2.3922 | 8.7227 | 3.5385 | 6.96 | 8.1887 |
| No log | 3.0 | 3030 | 2.3422 | 8.8565 | 3.5772 | 7.0203 | 8.2957 |
| No log | 4.0 | 4040 | 2.3288 | 8.89 | 3.645 | 7.0602 | 8.3314 |
| 3.1253 | 5.0 | 5050 | 2.3209 | 8.868 | 3.6109 | 7.0537 | 8.299 |
| 3.1253 | 6.0 | 6060 | 2.3127 | 8.9488 | 3.6615 | 7.1044 | 8.3785 |
| 3.1253 | 7.0 | 7070 | 2.3056 | 8.9366 | 3.6507 | 7.1338 | 8.3615 |
| 3.1253 | 8.0 | 8080 | 2.3040 | 8.9565 | 3.6563 | 7.1346 | 8.3802 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
LPM/AI_1
|
LPM
| 2021-12-29T18:54:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:04Z |
git lfs install
git clone https://huggingface.co/LPM/AI_1
|
patrickvonplaten/wav2vec2-2-bart-base
|
patrickvonplaten
| 2021-12-29T15:53:10Z | 373 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"asr_seq2esq",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- asr_seq2esq
model-index:
- name: wav2vec2-2-bart-base
results: []
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
- example_title: Common Voice sample
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
---
To rerun this experiment, please clone this directory and run:
```bash
python create_model.py
```
followed by
```bash
./run_librispeech.sh
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2-bart-base
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) and [bart-base](https://huggingface.co/facebook/bart-base) on the librispeech_asr - clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.405
- Wer: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-2-bart-large
|
patrickvonplaten
| 2021-12-29T15:49:52Z | 6 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"asr_seq2esq",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
- asr_seq2esq
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
- example_title: Common Voice sample
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
model-index:
- name: wav2vec2-2-bart-large
results: []
---
To rerun this experiment, please clone this directory and run:
```bash
python create_model.py
```
followed by
```bash
./run_librispeech.sh
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-2-bart-large
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) and [bart-large](https://huggingface.co/facebook/bart-large) on the librispeech_asr - clean dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3204
- Wer: 0.0486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
See Training Metrics Tab.
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
rexxar96/autonlp-sentiment-analysis-456211724
|
rexxar96
| 2021-12-29T14:47:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"unk",
"dataset:rexxar96/autonlp-data-sentiment-analysis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- rexxar96/autonlp-data-sentiment-analysis
co2_eq_emissions: 22.28263989637389
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 456211724
- CO2 Emissions (in grams): 22.28263989637389
## Validation Metrics
- Loss: 0.23710417747497559
- Accuracy: 0.9119100357812234
- Precision: 0.8882611424984307
- Recall: 0.9461718488799733
- AUC: 0.974790366001874
- F1: 0.9163024121741946
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/rexxar96/autonlp-sentiment-analysis-456211724
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rexxar96/autonlp-sentiment-analysis-456211724", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rexxar96/autonlp-sentiment-analysis-456211724", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
airKlizz/mt5-base-wikinewssum-italian
|
airKlizz
| 2021-12-29T10:55:47Z | 39 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-italian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-italian
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5739
- Rouge1: 2.1728
- Rouge2: 0.1516
- Rougel: 2.0846
- Rougelsum: 2.0515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 8 | 16.6193 | 2.4011 | 0.3829 | 2.1505 | 2.2161 |
| No log | 2.0 | 16 | 15.8909 | 2.5165 | 0.2799 | 2.3403 | 2.3523 |
| No log | 3.0 | 24 | 15.4843 | 2.2794 | 0.2252 | 2.1849 | 2.1382 |
| 17.2559 | 4.0 | 32 | 13.0850 | 2.2448 | 0.1516 | 2.1426 | 2.0859 |
| 17.2559 | 5.0 | 40 | 11.7838 | 2.2448 | 0.1516 | 2.1426 | 2.0859 |
| 17.2559 | 6.0 | 48 | 11.3207 | 2.2424 | 0.1516 | 2.1423 | 2.1171 |
| 17.2559 | 7.0 | 56 | 10.7871 | 2.1081 | 0.1516 | 2.0227 | 1.9838 |
| 14.6026 | 8.0 | 64 | 10.5739 | 2.1728 | 0.1516 | 2.0846 | 2.0515 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
csukuangfj/test-data-for-optimized-transducer
|
csukuangfj
| 2021-12-29T09:31:30Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
See
https://colab.research.google.com/drive/14MozS-9jWD3XQ0o-dZ-meqnblgHs70P2?usp=sharing
|
huggingtweets/ihyjuju
|
huggingtweets
| 2021-12-29T01:31:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ihyjuju/1640741515385/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1448859687449862147/frVD6mW3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">juju 💰</div>
<div style="text-align: center; font-size: 14px;">@ihyjuju</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from juju 💰.
| Data | juju 💰 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 1 |
| Short tweets | 478 |
| Tweets kept | 2769 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n82hqbg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ihyjuju's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1t6rclcz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1t6rclcz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ihyjuju')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mrm8488/deberta-v3-small-goemotions
|
mrm8488
| 2021-12-28T23:12:12Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: deberta-v3-small-goemotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-goemotions
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5638
- F1: 0.4241
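A usage sketch (not from the original card), assuming the checkpoint loads with the standard text-classification pipeline (DeBERTa-v3 tokenization requires `sentencepiece` to be installed):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mrm8488/deberta-v3-small-goemotions")
# Returns the highest-scoring GoEmotions label for the input text.
print(classifier("I can't believe how wonderful this surprise party was!"))
```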
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.614 | 1.0 | 3082 | 1.5577 | 0.3663 |
| 1.4338 | 2.0 | 6164 | 1.5580 | 0.4084 |
| 1.2936 | 3.0 | 9246 | 1.5006 | 0.4179 |
| 1.1531 | 4.0 | 12328 | 1.5348 | 0.4276 |
| 1.0536 | 5.0 | 15410 | 1.5638 | 0.4241 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
sw005320/aidatatang_200zh_conformer
|
sw005320
| 2021-12-28T16:07:10Z | 2 | 3 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aidatatang_200zh",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- aidatatang_200zh
license: cc-by-4.0
---
## ESPnet2 ASR model
### `sw005320/aidatatang_200zh_conformer`
This model was trained by Shinji Watanabe using aidatatang_200zh recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 8ab3d9f2191f250cb62deff222d2e6addb3842dc
pip install -e .
cd egs2/aidatatang_200zh/asr1
./run.sh --skip_data_prep false --skip_train true --download_model sw005320/aidatatang_200zh_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Dec 24 23:34:58 EST 2021`
- python version: `3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.7.1`
- Git hash: `a5bacd349a47889aef795f999563018cf201ae64`
- Commit date: `Wed Dec 22 14:08:29 2021 -0500`
## asr_train_asr_conformer_raw_zh_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|24216|81.5|18.5|0.0|0.0|18.5|18.5|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|48144|79.0|21.0|0.0|0.0|21.0|21.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/dev|24216|234524|96.6|3.0|0.5|0.1|3.6|18.5|
|decode_asr_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|48144|468933|95.9|3.6|0.4|0.2|4.3|21.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_zh_char_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 4000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char_sp/train/speech_shape
- exp/asr_stats_raw_zh_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char_sp/valid/speech_shape
- exp/asr_stats_raw_zh_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- sound
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 我
- 的
- 你
- 么
- 不
- 是
- 了
- 一
- 有
- 天
- 什
- 好
- 在
- 个
- 怎
- 吗
- 话
- 要
- 给
- 电
- 上
- 没
- 人
- 说
- 到
- 啊
- 就
- 这
- 时
- 来
- 下
- 想
- 打
- 点
- 去
- 还
- 看
- 道
- 多
- 明
- 那
- 知
- 以
- 今
- 能
- 会
- 哪
- 都
- 可
- 大
- 吧
- 机
- 样
- 里
- 十
- 现
- 们
- 过
- 吃
- 开
- 家
- 回
- 发
- 中
- 呢
- 听
- 候
- 为
- 也
- 日
- 爱
- 歌
- 三
- 起
- 小
- 二
- 心
- 子
- 手
- 生
- 最
- 儿
- 学
- 放
- 信
- 女
- 号
- 几
- 和
- 老
- 晚
- 少
- 车
- 叫
- 快
- 用
- 自
- 年
- 睡
- 问
- 事
- 后
- 五
- 乐
- 安
- 出
- 找
- 帮
- 意
- 觉
- 气
- 国
- 得
- 情
- 请
- 早
- 地
- 做
- 首
- 真
- 公
- 近
- 对
- 办
- 很
- 行
- 己
- 呀
- 八
- 友
- 如
- 六
- 节
- 喜
- 新
- 欢
- 西
- 间
- 月
- 班
- 他
- 网
- 方
- 分
- 播
- 笑
- 查
- 息
- 名
- 四
- 成
- 东
- 美
- 零
- 市
- 饭
- 世
- 朋
- 玩
- 州
- 果
- 才
- 七
- 别
- 把
- 谁
- 九
- 再
- 平
- 太
- 干
- 思
- 关
- 谢
- 高
- 语
- 理
- 些
- 界
- 着
- 长
- 钱
- 动
- 曲
- 感
- 聊
- 片
- 何
- 面
- 男
- 音
- 工
- 南
- 午
- 本
- 通
- 火
- 经
- 路
- 星
- 唱
- Q
- 业
- 讲
- 英
- 北
- 服
- 短
- 妈
- 海
- 文
- 跟
- 作
- 票
- 只
- 等
- 刚
- 码
- 字
- 影
- 附
- 婆
- 见
- 又
- 祝
- 无
- 该
- 提
- 末
- 让
- 法
- 定
- 买
- 告
- 照
- 体
- 考
- 床
- 醒
- 记
- 前
- 题
- 走
- 加
- 主
- 从
- 视
- 张
- 身
- 两
- 钟
- 京
- 于
- 收
- 阳
- 哈
- 店
- 山
- 院
- 站
- 百
- 宝
- 所
- 诉
- 期
- 之
- 嘛
- 夜
- 第
- 游
- 比
- 系
- 昨
- 费
- 交
- 水
- 应
- 次
- 周
- 亲
- 联
- 全
- 福
- 江
- 孩
- 区
- 广
- 头
- 接
- O
- 校
- 已
- 空
- 门
- 认
- 相
- 度
- 实
- 活
- 色
- 假
- 白
- 算
- 外
- 流
- 啦
- 花
- 然
- 结
- 每
- 休
- 边
- 部
- 位
- 场
- 半
- 王
- 声
- 件
- 力
- 金
- 重
- 识
- 正
- 华
- 光
- 衣
- 载
- 死
- 价
- 翻
- 图
- 城
- 脑
- 同
- 久
- 译
- 特
- 物
- 搜
- 务
- 报
- 线
- 哦
- 卡
- E
- 当
- A
- 爸
- 圣
- 完
- 幺
- 合
- P
- 雨
- 黄
- 种
- 司
- 直
- I
- 她
- 哥
- 书
- 银
- 试
- 解
- 穿
- 酒
- 准
- 换
- 望
- 被
- S
- 原
- 内
- 诞
- 带
- 介
- 口
- 清
- N
- 马
- 习
- 否
- 置
- 啥
- 索
- 戏
- 与
- 懂
- 飞
- 需
- 性
- 错
- 送
- 级
- 器
- 单
- 离
- 远
- 备
- 师
- 课
- 注
- 因
- 难
- 其
- 像
- 元
- 消
- 表
- 便
- 球
- 风
- 教
- 故
- 科
- 李
- 常
- 林
- 龙
- 呵
- 数
- 代
- 总
- 忘
- 商
- 变
- 婚
- 苹
- 红
- 格
- 坐
- 绍
- 答
- 量
- 冷
- 青
- 询
- 春
- 神
- 省
- 蛋
- 姐
- 陪
- 兴
- 利
- 台
- 句
- 万
- 计
- 保
- 刘
- 传
- 深
- 管
- 运
- 德
- 医
- 容
- 品
- 越
- 亮
- 词
- 河
- 化
- 宁
- 始
- 武
- 希
- 洗
- 复
- 设
- 处
- 技
- 房
- T
- 您
- 取
- 眼
- 县
- 笨
- 术
- 温
- 永
- 受
- 更
- 先
- 尔
- 程
- 彩
- 演
- 忙
- 专
- 愿
- 进
- 湖
- 建
- 况
- 伤
- 喝
- 底
- 卖
- 功
- 录
- 改
- H
- 剧
- 预
- 梦
- L
- 达
- 连
- 馆
- 包
- 写
- 客
- C
- 汉
- 条
- G
- 幸
- 民
- 读
- 职
- 目
- 但
- 贝
- 妹
- 资
- 较
- 雪
- 赛
- 除
- 招
- 园
- 住
- 超
- 汽
- 病
- B
- 软
- 反
- 而
- 证
- 员
- 黑
- 庆
- D
- 求
- 排
- 装
- 岁
- 顾
- 产
- 航
- 言
- 斯
- 拨
- 历
- 烦
- 及
- 药
- 入
- 式
- 军
- 餐
- 志
- 至
- 双
- 米
- 版
- 掉
- 千
- 者
- 充
- 微
- 失
- 转
- M
- 亚
- 克
- 座
- 丽
- 络
- 战
- 使
- 猪
- 具
- 闹
- 限
- 址
- 基
- 油
- 漂
- 陈
- Y
- 川
- 强
- 挺
- 奇
- 杰
- 政
- 向
- 速
- 康
- 差
- 贵
- 搞
- 义
- 奖
- 份
- 户
- 楼
- 苏
- 任
- 健
- 易
- 毛
- 型
- 石
- 礼
- 款
- 持
- 卫
- 怕
- 恋
- 邮
- 集
- R
- 铁
- 圳
- 拿
- 云
- 队
- 鱼
- 慢
- 顺
- 害
- 属
- 傻
- 营
- 菜
- 货
- 麻
- 咋
- 坏
- 冒
- 累
- 杨
- 闻
- 治
- 选
- 段
- K
- 香
- 闭
- 兰
- 牌
- 局
- 留
- 舍
- 非
- 推
- 室
- 简
- 拉
- 修
- 终
- 郑
- 切
- U
- 将
- 村
- 沙
- 存
- 帅
- 诗
- 率
- 密
- 巴
- 频
- 士
- 初
- 楚
- 股
- 热
- 古
- 制
- 支
- 肉
- 岛
- 统
- 适
- 肥
- 鸡
- 调
- 街
- 类
- 牛
- 导
- 农
- 值
- 食
- 镇
- 棍
- 移
- 韩
- W
- 嗯
- 订
- 呼
- 命
- V
- 必
- 宿
- 皮
- 升
- 确
- 随
- 步
- 育
- 标
- 唐
- 精
- 决
- 木
- 由
- 弟
- 往
- 肯
- 够
- 或
- 指
- 阿
- 象
- 料
- 念
- 助
- 许
- 共
- 母
- 约
- 罗
- 板
- 秋
- 配
- 魔
- 宜
- 般
- 荐
- 扰
- 舒
- 逼
- 狗
- 嘿
- 博
- 售
- 满
- 疼
- 脸
- 整
- 抱
- 季
- 减
- 养
- 怀
- 免
- 未
- 乘
- F
- 社
- 妇
- 列
- 爷
- 删
- 旦
- 弄
- 概
- 停
- 拜
- 维
- 领
- 示
- 套
- 汇
- 昌
- 晨
- 痛
- 购
- 奥
- 铃
- 案
- 济
- 鬼
- 背
- 港
- 待
- 浪
- 桥
- 血
- 冬
- 烧
- 优
- 拍
- 际
- 急
- 杭
- 称
- 遇
- 赶
- 旅
- 智
- 角
- 财
- 玉
- 团
- 形
- 论
- 静
- 景
- 退
- 普
- 呗
- 乡
- 参
- 胡
- 伦
- 讨
- 艺
- 辈
- 毒
- 此
- 轻
- 苦
- 咱
- 画
- 泰
- 宾
- 雄
- 销
- 奶
- 突
- 波
- 各
- 冰
- 块
- 夏
- 低
- 兵
- 厅
- 羊
- 杀
- 紧
- 泉
- 朝
- 谈
- 足
- 孕
- 夫
- 厂
- 聪
- 续
- 庄
- 诺
- 牙
- 质
- 立
- 依
- 仙
- 跑
- 盘
- 豆
- 它
- 怪
- 猜
- 漫
- 毕
- 兄
- 颜
- 险
- 厦
- 验
- 防
- 登
- 敢
- 乖
- 晓
- 护
- 迎
- 逗
- 摩
- 佳
- 观
- 骗
- 烟
- 细
- 临
- 惠
- 围
- 寞
- 效
- 源
- 寂
- 肚
- 暖
- 饺
- 斗
- 模
- 端
- 疗
- 付
- 绝
- 秘
- 展
- 乎
- 按
- 富
- 靠
- 范
- 规
- 刻
- 折
- 娘
- 厌
- 申
- 章
- 补
- 笔
- 锅
- 破
- 田
- 齐
- 滨
- 皇
- 族
- 典
- 史
- 左
- 蓝
- 灵
- 澡
- 秀
- 诚
- 土
- 测
- 凤
- 剑
- 响
- 倒
- 睛
- 惯
- 乌
- 币
- 扣
- 吴
- 输
- 徐
- 弃
- 纪
- 堂
- 环
- 甲
- 菲
- 缘
- 讯
- 根
- 落
- 启
- 泡
- 饿
- 积
- 府
- 递
- 绩
- 择
- 吉
- 布
- 显
- 童
- 租
- 洋
- 组
- 划
- 编
- 签
- 舞
- 困
- 贴
- 负
- 派
- 裤
- 担
- 桂
- 却
- 丝
- 丰
- 箱
- 赵
- 群
- 序
- 训
- 酸
- 惜
- 圆
- 评
- 压
- 俩
- 状
- 官
- 酷
- 鲁
- 孙
- 草
- 极
- 势
- 斤
- 腾
- 泽
- 素
- 尽
- 姓
- 屏
- 聚
- 莞
- 乱
- 雅
- 尼
- 趣
- 伟
- 肤
- 勇
- 右
- 徽
- 投
- 丹
- 尾
- 托
- 争
- 鸟
- 激
- 印
- 良
- 眠
- 松
- 跳
- 途
- 篮
- 粉
- 脚
- 屁
- 鞋
- 麦
- 则
- 估
- 津
- 努
- 距
- 胸
- 央
- 珍
- 盖
- 哭
- 洲
- 练
- 敏
- 雷
- 曾
- 恩
- 挂
- 据
- 览
- 耳
- 材
- 泪
- 吸
- 味
- 劳
- 父
- 孤
- 玛
- 旁
- 阴
- 态
- 创
- 树
- 脱
- 研
- 驾
- 拾
- 灯
- 虎
- 爆
- 嘉
- 湾
- 躺
- 猫
- 莫
- 昆
- 痘
- 阅
- 射
- 刷
- 卓
- 珠
- 峰
- 胖
- 坚
- 造
- 举
- 棒
- 梅
- 引
- 吵
- 蒙
- 详
- 借
- 瓜
- 池
- 束
- 芳
- 淘
- 寻
- 释
- 沈
- 虑
- 锦
- 胜
- 荣
- 委
- 默
- 另
- 浏
- 并
- 检
- 冠
- 独
- 厉
- 顶
- 钓
- 骂
- 且
- 欧
- 威
- 熟
- 获
- 兽
- 严
- 炎
- 含
- 厕
- 盛
- 翼
- 盟
- 余
- 姨
- 洛
- 映
- 狼
- 谅
- 众
- 宽
- 断
- 止
- 狂
- 凉
- 姑
- 辉
- 若
- 册
- 谷
- 幻
- 篇
- 瓶
- 席
- 恐
- 柔
- 迪
- 供
- 追
- 控
- 爽
- 互
- 嫁
- 宋
- 宫
- 瑞
- 滚
- 增
- 额
- 页
- 刀
- 娱
- 茶
- 钢
- 疯
- 梁
- 承
- 娜
- 须
- 陆
- 燕
- 迟
- 君
- 恶
- 遍
- 纸
- 项
- 丁
- 腿
- 误
- 殊
- 迅
- 锁
- 宇
- 媳
- 培
- 居
- 寄
- 纯
- 嘴
- 浙
- 境
- 搭
- 杯
- 插
- 朱
- 溪
- 甘
- 权
- 窝
- 警
- 糖
- 迷
- 圈
- 凯
- 帝
- 暴
- 逛
- 艳
- 击
- 颗
- 坦
- 杂
- 冲
- 谓
- 救
- 轮
- 晕
- 虽
- 塔
- 叔
- 凰
- 懒
- 议
- 肖
- 郎
- 辛
- 透
- 拥
- 鼠
- 顿
- 批
- 兔
- 尚
- 聘
- 藏
- 赚
- 继
- 享
- 欺
- 潮
- 即
- 甜
- 骨
- 悲
- 幕
- 滴
- 闲
- 液
- 缺
- 琴
- 蜜
- 善
- 暗
- 镜
- 蔡
- 吹
- 核
- 忆
- 键
- 辑
- 岗
- 例
- 涛
- 宏
- 刺
- 郭
- 降
- 秦
- 剩
- 绿
- 桌
- 咖
- 呐
- 叶
- 贸
- 架
- 账
- 亡
- 佛
- 哎
- 乳
- 归
- 忍
- 异
- 侠
- 龄
- 炒
- 洁
- 似
- 虚
- 贷
- 征
- 抽
- 败
- 枪
- 幼
- 丫
- 危
- 慰
- 究
- 婷
- 肃
- 箭
- 灰
- 届
- 律
- 秒
- 淡
- 偷
- 炫
- 鲜
- 浦
- 萨
- 旧
- 硬
- 操
- 混
- 施
- 散
- 咨
- 妻
- 吻
- 榜
- 呆
- 废
- 野
- 糕
- 骑
- 炼
- 震
- 恭
- 悔
- 跨
- 曼
- 啡
- 俊
- 晶
- 胃
- 汤
- 尊
- 貌
- 封
- 羽
- 赞
- 尸
- 隐
- 丢
- 霸
- 醉
- 盗
- 盐
- 浩
- 著
- 档
- 赢
- 幽
- 责
- 鼻
- 辣
- 恒
- 朵
- 慕
- 旗
- 娃
- 饰
- 仁
- 亦
- 竟
- 柳
- 郁
- 唯
- 夕
- 钻
- 均
- 劲
- 庭
- 巧
- 饮
- 涨
- 辆
- 傅
- 企
- 趟
- 避
- 党
- 染
- 扬
- 玲
- 筋
- 烤
- 桃
- 唉
- 慧
- 欲
- 寒
- 闷
- 某
- 恨
- 私
- 淮
- 惊
- 弱
- 弹
- 沉
- 兼
- 弯
- 残
- 偶
- 锋
- 贺
- 咯
- 纳
- 戴
- 抢
- 宗
- 浴
- 宵
- 莲
- 嗨
- 喊
- 奕
- 壁
- 症
- 冻
- 致
- 屋
- 喽
- 伊
- 绵
- 玫
- 固
- 籍
- 监
- 耐
- 井
- 寝
- 露
- 虫
- 盒
- 凡
- 摇
- 傲
- 烈
- 姿
- 陕
- 裸
- 袋
- 帐
- 凌
- 寿
- 茂
- 鹏
- 寓
- 柴
- 妞
- 森
- 既
- 紫
- 萝
- 层
- 苗
- 腊
- 邓
- 宣
- 锡
- 袜
- 陌
- 狮
- 碰
- 晴
- 塘
- 妃
- 祥
- 苍
- 针
- 敌
- 腰
- 犯
- 欠
- 垃
- 卸
- 迹
- 暑
- 祖
- 泳
- 阵
- 熊
- 励
- 澳
- 添
- 拳
- 岳
- 益
- 瘦
- 虹
- 圾
- 植
- 坡
- 攻
- 略
- 墙
- 描
- 遗
- 噢
- 窗
- 吐
- 肌
- 陵
- 逃
- 浮
- 摸
- 戒
- 哟
- 翰
- 勿
- 库
- 涯
- 妖
- 宠
- 脾
- 革
- 探
- 糊
- 采
- 惹
- 衡
- 赤
- 魏
- 羡
- 综
- 舟
- 疆
- 痴
- 催
- 朗
- 坛
- 悠
- 岭
- 驶
- 括
- 嘻
- 辽
- 粥
- 煮
- 灭
- 杜
- 域
- 令
- 替
- 翔
- 坤
- 潘
- 抓
- 铜
- 构
- 卷
- 茫
- 丑
- 涂
- 掌
- 饱
- 肝
- 疾
- 罩
- 谱
- 愚
- 抗
- 琳
- 夸
- 汪
- 墨
- 沟
- 翅
- 肠
- 患
- 柏
- 僵
- 稳
- 延
- 胆
- 伴
- 爬
- 滋
- 歉
- 轩
- 尿
- 铺
- 忠
- 黎
- 膀
- 邯
- 郸
- 愉
- 霉
- 翁
- 妙
- 隆
- 鸭
- 锻
- 涵
- 挣
- 副
- 罪
- 穷
- 恢
- 巨
- 吓
- 眉
- 棉
- 汗
- 溜
- 奏
- 滩
- 愁
- X
- 执
- 霞
- 魂
- 姆
- 摄
- 偏
- 纠
- 瑰
- 洪
- 协
- 牧
- 飘
- 炸
- 悦
- 艾
- 织
- 敬
- 驹
- 欣
- 董
- 邦
- 勒
- 守
- 伙
- 狐
- 税
- 湘
- 遥
- 储
- 脏
- 坊
- 腐
- 横
- 仔
- 仪
- 判
- 忽
- 哇
- 罚
- 爹
- 怖
- 竹
- 孔
- 捡
- 挑
- 肿
- 漠
- 尘
- 焦
- 塞
- 熬
- 谊
- 樱
- 返
- 莉
- 堵
- 捷
- 惑
- 绕
- 蛇
- 竞
- 耍
- 违
- 卧
- 蝶
- J
- 俗
- 滑
- 占
- 怜
- 舅
- 乔
- 泸
- 臭
- 策
- 骚
- 莱
- 岩
- 魅
- 兑
- 姥
- 兆
- 萍
- 烂
- 损
- 述
- 撒
- 烫
- 炮
- 忧
- 遵
- 桑
- 俺
- 彭
- 净
- 胶
- 柯
- 绑
- 碟
- 卜
- 饼
- 船
- 佩
- 妆
- 齿
- 厚
- 娟
- 醋
- 丘
- 恼
- 萧
- 析
- 润
- 潭
- 番
- 鹰
- 葡
- 萄
- 唤
- 胎
- 逊
- 峡
- 舰
- 障
- 伯
- 猴
- 膜
- 访
- 贤
- 耀
- 晒
- 狠
- 豪
- 剪
- 帖
- 幂
- 融
- 诱
- 韶
- 晋
- 拼
- 洞
- 氧
- 察
- 裁
- 寨
- 熙
- 喂
- 拖
- 污
- 乾
- 湿
- 嫌
- 拒
- 蕉
- 哲
- 薇
- 绒
- 婴
- 莎
- 稿
- 瞎
- 寺
- 徒
- 伞
- 碎
- 阜
- 填
- 琪
- 敦
- 柜
- 侣
- 搬
- 孟
- 蓉
- 筒
- 偿
- 献
- 径
- 畅
- 粤
- 悟
- 隔
- 赖
- 慈
- 哄
- 襄
- 扮
- 睁
- 彻
- 陶
- 瓷
- 荷
- 寸
- 牵
- 痒
- 芝
- 繁
- 倍
- 闪
- 梧
- 怒
- 蝴
- 嵩
- 赣
- 嘞
- 狱
- 猛
- 咳
- 媒
- 斌
- 斑
- 奋
- 叉
- 龟
- 贱
- 疑
- 暂
- 靓
- 叹
- 仓
- 撞
- 姜
- 疤
- 矿
- 芬
- 勤
- 纱
- 帆
- 迁
- 囧
- 佑
- 囊
- 侯
- 鼓
- 葛
- 沃
- 莹
- 诊
- 筑
- 酱
- 咬
- 糟
- 拯
- 鹤
- 驴
- 胞
- 枝
- 俄
- 呃
- 鹿
- 磨
- 姚
- 灾
- 扫
- 荡
- 吊
- 犬
- 菊
- 茹
- 链
- 嫉
- 妒
- 旺
- 夺
- 裙
- 湛
- 氏
- 鞍
- 抵
- 娇
- 耶
- 截
- 辞
- 硫
- 禁
- 怡
- 跌
- 刮
- 苑
- 媛
- 摆
- 盾
- 械
- 旋
- 卢
- 霆
- 驰
- 擦
- 符
- 肺
- 谜
- 霍
- 仅
- 迈
- 碗
- 邪
- 曹
- 咪
- 煌
- 疫
- 屠
- 握
- 奔
- Z
- 燃
- 沧
- 谦
- 馨
- 嫖
- 阻
- 冯
- 振
- 雕
- 闯
- 薄
- 宙
- 倾
- 嗽
- 椒
- 墓
- 尤
- 夹
- 潇
- 骤
- 壮
- 屈
- 颖
- 菠
- 吞
- 鸣
- 渴
- 堰
- 厨
- 督
- 驻
- 腹
- 岸
- 蛮
- 翠
- 肾
- 娼
- 券
- 尖
- 丸
- 鸿
- 厘
- 召
- 劝
- 牡
- 韦
- 拔
- 灏
- 弦
- 萌
- 惩
- 倩
- 诸
- 扎
- 庙
- 炉
- 潜
- 措
- 磊
- 脂
- 郊
- 虾
- 霜
- 猎
- 蝎
- 玄
- 钰
- 审
- 蜂
- 巷
- 敷
- 拟
- 钥
- 匙
- 婉
- 纽
- 芜
- 贾
- 串
- 靖
- 抛
- 彼
- 亏
- 挽
- 贼
- 穴
- 授
- 鼎
- 孝
- 玮
- 氓
- 劫
- 俞
- 谎
- 莆
- 隋
- 钠
- 赔
- 谐
- 纶
- 闰
- 昏
- 逆
- 璇
- 樊
- 禽
- 宅
- 碳
- 妮
- 亭
- 杆
- 蠢
- 鄙
- 蜀
- 阶
- 贫
- 辰
- 盼
- 呜
- 芦
- 株
- 腔
- 巾
- 羞
- 堡
- 亿
- 踩
- 憾
- 浓
- 阔
- 塑
- 趋
- 蓄
- 桶
- 葱
- 菇
- 咒
- 蟹
- 肩
- 柿
- 缓
- 漳
- 祸
- 挤
- 巢
- 抚
- 詹
- 豫
- 俱
- 悉
- 溶
- 粒
- 谭
- 诛
- 贡
- 沿
- 躲
- 慌
- 芙
- 蒋
- 乃
- 雀
- 姻
- 岂
- 悄
- 辕
- 斜
- 捕
- 扇
- 割
- 啤
- 纲
- 纤
- 祛
- 躁
- 殖
- 珊
- 氢
- 允
- 丈
- 蹈
- 邀
- 哼
- 坑
- 吾
- 淋
- 扩
- 愤
- 潍
- 尺
- 耗
- 鉴
- 闽
- 乙
- 渭
- 触
- 撑
- 咸
- 灿
- 缩
- 蔬
- 凑
- 渡
- 梭
- 粗
- 袁
- 菌
- 妓
- 稍
- 辐
- 哀
- 浆
- 厢
- 荆
- 踪
- 桐
- 邢
- 蜡
- 奉
- 淑
- 洒
- 扁
- 蕾
- 燥
- 硕
- 牢
- 蛙
- 仍
- 侵
- 稀
- 芒
- 吕
- 跪
- 绪
- 誓
- 旭
- 阁
- 屌
- 凭
- 裹
- 崇
- 纬
- 援
- 怨
- 茄
- 埋
- 棋
- 誉
- 瑜
- 蹲
- 扯
- 跃
- 昧
- 螺
- 毅
- 叮
- 喷
- 壶
- 喉
- 脆
- 瓦
- 碧
- 奴
- 煤
- 伍
- 娶
- 雁
- 骄
- 泣
- 眷
- 屯
- 赏
- 覆
- 揍
- 绯
- 逸
- 屎
- 彦
- 辨
- 攀
- 涉
- 泥
- 廊
- 菱
- 薛
- 衍
- 荒
- 铭
- 沂
- 麟
- 咏
- 扑
- 祈
- 喔
- 磁
- 歇
- 栋
- 沫
- 漏
- 玻
- 璃
- 逝
- 葵
- 溃
- 堆
- 锐
- 楠
- 毫
- 谋
- 勾
- 梯
- 氯
- 杏
- 赌
- 鑫
- 崔
- 颠
- 邱
- 肪
- 掘
- 昭
- 悬
- 奈
- 筷
- 轨
- 诵
- 葫
- 挡
- 梨
- 缠
- 僧
- 抬
- 邻
- 栏
- 饶
- 庚
- 灌
- 呦
- 摊
- 狄
- 汕
- 缴
- 罢
- 瞌
- 腺
- 辖
- 摔
- 棵
- 弗
- 琼
- 揭
- 淀
- 仑
- 粮
- 扔
- 剂
- 邵
- 辅
- 悍
- 袖
- 侨
- 巡
- 仗
- 逢
- 挥
- 翘
- 柱
- 狸
- 赫
- 耽
- 押
- 昂
- 瘤
- 枣
- 癌
- 伏
- 秤
- 脉
- 穹
- 敲
- 贪
- 促
- 拆
- 勉
- 祷
- 弊
- 膏
- 禾
- 契
- 挨
- 纵
- 疲
- 蜘
- 蛛
- 冈
- 雾
- 娄
- 甫
- 裂
- 侦
- 愈
- 臂
- 甩
- 戈
- 钙
- 簿
- 淄
- 蓬
- 夷
- 汁
- 凶
- 匹
- 皆
- 凝
- 仰
- 叛
- 蒲
- 谣
- 砖
- 呈
- 浅
- 瞬
- 丞
- 粘
- 痕
- 癫
- 禺
- 靴
- 尝
- 枫
- 鹅
- 衷
- 暮
- 媚
- 堪
- 臣
- 瑟
- 榕
- 蘑
- 遂
- 舌
- 藤
- 遭
- 芭
- 暧
- 犹
- 砸
- 浇
- 晰
- 矮
- 禹
- 隶
- 蚊
- 塌
- 峪
- 渊
- 摘
- 崩
- 瞧
- 炭
- 瑶
- 纷
- 毁
- 瞒
- 橙
- 渣
- 霹
- 雳
- 粽
- 侧
- 胀
- 捐
- 栈
- 颈
- 伪
- 役
- 予
- 钝
- 菏
- 铠
- 稻
- 赠
- 芽
- 龚
- 幅
- 莓
- 轿
- 炖
- 炬
- 溢
- 扭
- 垂
- 坎
- 嚏
- 枯
- 绣
- 蒸
- 旬
- 迫
- 浒
- 肇
- 庸
- 蒂
- 踏
- 雯
- 埃
- 础
- 狙
- 陷
- 伽
- 滔
- 沦
- 祭
- 唠
- 瀑
- 矛
- 乒
- 乓
- 窍
- 渠
- 泛
- 陇
- 蒜
- 捉
- 扶
- 诀
- 纹
- 踢
- 馋
- 薪
- 坪
- 廉
- 荔
- 骏
- 颁
- 伸
- 贞
- 沾
- 疮
- 兮
- 擎
- 驱
- 馒
- 挖
- 韵
- 姬
- 砍
- 矫
- 巫
- 疙
- 瘩
- 峨
- 抄
- 函
- 歪
- 倚
- 昔
- 涕
- 憨
- 淇
- 宴
- 埠
- 渐
- 胳
- 膊
- 趁
- 擅
- 刑
- 渝
- 噬
- 斋
- 妍
- 债
- 邹
- 嫂
- 娥
- 践
- 禅
- 牲
- 帽
- 吨
- 腻
- 掖
- 榴
- 啸
- 纺
- 鞭
- 豚
- 爵
- 蹄
- 咙
- 澈
- 疹
- 氛
- 抑
- 绸
- 抹
- 奎
- 酬
- 坟
- 诶
- 勋
- 卑
- 沪
- 蚁
- 揉
- 锄
- 泌
- 槽
- 镖
- 卿
- 甸
- 帕
- 镁
- 盲
- 汾
- 携
- 宰
- 虞
- 瓣
- 辩
- 豌
- 樟
- 璐
- 沁
- 钦
- 蔚
- 彬
- 卦
- 轰
- 锈
- 茎
- 蹦
- 拐
- 坝
- 饥
- 捏
- 碑
- 嗓
- 澄
- 惨
- 沽
- 鄂
- 逻
- 谍
- 屿
- 聋
- 憋
- 泼
- 枕
- 盆
- 衫
- 慎
- 黛
- 轶
- 咽
- 匠
- 蚂
- 捶
- 脊
- 蚌
- 剥
- 穆
- 喇
- 叭
- 凳
- 滥
- 撤
- 蓑
- 笠
- 黔
- 诡
- 颐
- 闵
- 稚
- 茨
- 捆
- 芯
- 涩
- 哑
- 盈
- 衰
- 奢
- 贩
- 循
- 韭
- 绘
- 鸳
- 唇
- 恳
- 妥
- 杠
- 刊
- 戚
- 巩
- 胁
- 蜗
- 筝
- 漆
- 劈
- 泄
- 噩
- 椎
- 渔
- 氨
- 橘
- 仲
- 洱
- 绥
- 仿
- 耿
- 蚕
- 倦
- 葬
- 捞
- 拓
- 冤
- 御
- 忌
- 慨
- 弥
- 寡
- 昵
- 撕
- 鲤
- 隧
- 倡
- 臀
- 毙
- 蕊
- 甚
- 睹
- 哒
- 仇
- 栓
- 抒
- 滁
- 讶
- 皱
- 剖
- 闸
- 耻
- 顽
- 茅
- 碱
- 霏
- 坠
- 邑
- 嗦
- 缝
- 枚
- 垫
- 畜
- 侄
- 悴
- 庞
- 鸯
- 俏
- 铅
- 衔
- 浑
- 抖
- 逮
- 犀
- 滕
- 遮
- 淹
- 挪
- 柠
- 檬
- 荨
- 沛
- 喻
- 尹
- 抉
- 爪
- 甄
- 冀
- 蝉
- 汰
- 丧
- 愧
- 畏
- 屑
- 屉
- 娩
- 艰
- 弓
- 炜
- 框
- 娅
- 酵
- 掩
- 宪
- 枉
- 淫
- 糗
- 奸
- 岚
- 诅
- 釜
- 萱
- 窦
- 喆
- 浣
- 庐
- 阑
- 劣
- 窄
- 赈
- 茉
- 帜
- 缸
- 嫩
- 迦
- 憔
- 鸽
- 朴
- 洽
- 榆
- 烹
- 箫
- 荚
- 箍
- 稣
- 肢
- 磷
- 袭
- 橡
- 鸦
- 瞅
- 匡
- 禧
- 痣
- 勃
- 翡
- 篱
- 烽
- 衢
- 讪
- 烛
- 宥
- 铝
- 镯
- 钉
- 披
- 昼
- 跆
- 笈
- 喘
- 惫
- 唧
- 螂
- 涌
- 揣
- 旨
- 袄
- 笼
- 蛔
- 毯
- 凸
- 倪
- 碌
- 懈
- 履
- 鱿
- 菩
- 汝
- 赴
- 焉
- 钛
- 畔
- 掰
- 骆
- 崖
- 髓
- 彪
- 啰
- 撸
- 拌
- 漯
- 犒
- 蔽
- 漱
- 赐
- 饪
- 玖
- 弘
- 卵
- 沭
- 梓
- 禄
- 晖
- 籁
- 熏
- 祠
- 荟
- 伐
- 柄
- 昕
- 琶
- 鞠
- 豹
- 萎
- 裕
- 曰
- 苇
- 沌
- 牺
- 轴
- 薯
- 潞
- 痫
- 曦
- 腋
- 坞
- 隙
- 妊
- 娠
- 蝙
- 蝠
- 赘
- 咧
- 翩
- 棚
- 冕
- 旱
- 棱
- 巍
- 偕
- 杉
- 梵
- 嫦
- 煎
- 泊
- 辟
- 丛
- 艘
- 懦
- 郫
- 搅
- 佬
- 阖
- 焰
- 澜
- 琢
- 挚
- 嫣
- 啧
- 兜
- 趴
- 皂
- 窃
- 嘟
- 崛
- 睿
- 刃
- 绳
- 哗
- 窟
- 嗑
- 吭
- 朔
- 喵
- 粹
- 酶
- 辜
- 诫
- 筹
- 亩
- 椅
- 佐
- 俑
- 狡
- 陛
- 曙
- 攒
- 诈
- 叙
- 杖
- 馅
- 锌
- 矜
- 绮
- 刁
- 阙
- 亢
- 讼
- 驼
- 晃
- 逍
- 仕
- 芋
- 拇
- 掏
- 瘾
- 腕
- 魁
- 鲍
- 殷
- 荤
- 亨
- 凄
- 硝
- 嬛
- 藻
- 诣
- 桔
- 疡
- 氰
- 佰
- 鸠
- 埔
- 皋
- 谚
- 麒
- 廖
- 鳄
- 蹉
- 阎
- 琦
- 丙
- 烯
- 涮
- 絮
- 潢
- 郴
- 遛
- 琵
- 殿
- 蹭
- 笛
- 钾
- 辙
- 炊
- 廷
- 拦
- 哆
- 逐
- 钞
- 赋
- 孽
- 沸
- 龈
- 雌
- 玟
- 麓
- 焊
- 谨
- 衬
- 灸
- 栖
- 卉
- 脐
- 栽
- 扒
- 酚
- 肱
- 闺
- 猥
- 钩
- 羁
- 吱
- 吼
- 蹊
- 跷
- 磕
- 坷
- 蝇
- 唔
- 褶
- 钮
- 鹭
- 咔
- 沐
- 棠
- 锷
- 滞
- 肛
- 糜
- 噜
- 涧
- 儒
- 琅
- 捎
- 泵
- 葩
- 芥
- 轲
- 猾
- 拱
- 墅
- 蕲
- 馁
- 佚
- 渤
- 崎
- 峻
- 赎
- 霄
- 羯
- 缅
- 韧
- 勘
- 皖
- 顷
- 喀
- 忏
- 圭
- 槟
- 榔
- 兹
- 坂
- 镒
- 堕
- 蟒
- 芹
- 浃
- 哉
- 晏
- 绐
- 陀
- 茵
- 倘
- 缆
- 浊
- 碍
- 惰
- 濮
- 杵
- 削
- 裘
- 嗅
- 呕
- 绊
- 哩
- 腩
- 撇
- 郝
- 铿
- 锵
- 赃
- 缪
- 卤
- 吝
- 涟
- 冶
- 匪
- 婿
- 蛳
- 搏
- 圩
- 旷
- 汞
- 鹦
- 茱
- 粪
- 崂
- 陋
- 掐
- 郡
- 哮
- 邸
- 帘
- 柚
- 鬓
- 剃
- 忻
- 羔
- 聆
- 刹
- 嗷
- 罕
- 沥
- 钗
- 尴
- 尬
- 莽
- 捧
- 拽
- 懵
- 噶
- 虐
- 囚
- 囡
- 颓
- 亥
- 傍
- 疏
- 乞
- 丐
- 皓
- 孜
- 愣
- 檐
- 橱
- 绅
- 噻
- 痊
- 鳞
- 瞳
- 衩
- 捂
- 吔
- 螳
- 暇
- 嘎
- 缤
- 镍
- 吟
- 斥
- 饲
- 鲢
- 猩
- 狒
- 腼
- 腆
- 轼
- 梗
- 熨
- 荫
- 糙
- 妾
- 粕
- 烘
- 壹
- 骥
- 秽
- 熔
- 歹
- 谬
- 侈
- 蜈
- 蚣
- 婵
- 渍
- 斩
- 棕
- 辱
- 醇
- 磅
- 礴
- 颊
- 彝
- 庾
- 叠
- 忒
- 稽
- 幢
- 嘱
- 醛
- 砂
- 炳
- 拂
- 殇
- 邬
- 冥
- 擒
- 汶
- 罐
- 镑
- 祁
- 氮
- 怆
- 羌
- 拧
- 芸
- 堀
- 婊
- 暄
- 挎
- 躬
- 噎
- 菅
- 奂
- 龌
- 龊
- 睬
- 燎
- 鲈
- 拢
- 啬
- 脖
- 尧
- 馗
- 皎
- 滤
- 镶
- 椭
- 狈
- 澎
- 阉
- 侃
- 婕
- 脓
- 桨
- 阪
- 湃
- 溏
- 箕
- 蚯
- 蚓
- 呛
- 矩
- 彤
- 惟
- 鹉
- 讽
- 募
- 惦
- 飓
- 抠
- 肮
- 溟
- 膝
- 芗
- 逞
- 娌
- 湮
- 舵
- 挫
- 椰
- 螃
- 绽
- 蟑
- 聂
- 拘
- 萸
- 洼
- 弛
- 澧
- 玺
- 芊
- 枢
- 鲨
- 毋
- 搂
- 跎
- 趾
- 琐
- 徘
- 徊
- 濡
- 咩
- 钏
- 舔
- 烷
- 胺
- 拙
- 溺
- 竖
- 蕴
- 巅
- 魄
- 吖
- 啵
- 庇
- 灼
- 遣
- 怠
- 枭
- 乏
- 缕
- 掂
- 秩
- 蜕
- 泾
- 汀
- 肆
- 倔
- 吒
- 矣
- 豁
- 仨
- 俯
- 嘲
- 瞪
- 唬
- 骋
- 辍
- 曝
- 泻
- 鼾
- 捣
- 妨
- 撵
- 撮
- 猕
- 浜
- 哺
- 睫
- 荧
- 噪
- 栗
- 垣
- 獒
- 冼
- 瞄
- 刍
- 硅
- 翊
- 泓
- 枥
- 凋
- 匣
- 孢
- 飙
- 俭
- 珑
- 嵊
- 佣
- 祟
- 枞
- 蓟
- 斧
- 镕
- 棺
- 痔
- 娴
- 苔
- 笙
- 蔻
- 芮
- 迭
- 暨
- 诏
- 癜
- 芷
- 臧
- 驿
- 珂
- 藕
- 笋
- 竭
- 歼
- 铉
- 恹
- 雇
- 诲
- 漓
- 扳
- 寰
- 颂
- 缈
- 砣
- 戳
- 疣
- 寮
- 甥
- 牦
- 衅
- 湄
- 汨
- 褐
- 腑
- 啼
- 惭
- 痰
- 梳
- 驮
- 阮
- 壳
- 慷
- 牟
- 捺
- 瘁
- 锂
- 狩
- 沱
- 烁
- 摞
- 楷
- 楞
- 瑾
- 饯
- 灶
- 薰
- 伎
- 忐
- 忑
- 煽
- 骁
- 娲
- 赁
- 锑
- 嵌
- 苞
- 咫
- 锴
- 岐
- 蓓
- 毽
- 黏
- 攸
- 恰
- 惶
- 矶
- 簸
- 坨
- 踝
- 掺
- 榨
- 阀
- 婢
- 纨
- 搓
- 闫
- 瘫
- 垢
- 蚀
- 貂
- 壑
- 婧
- 腥
- 兖
- 觅
- 壤
- 珉
- 胭
- 惧
- 僻
- 峥
- 炀
- 蔗
- 铂
- 宛
- 巳
- 氟
- 秸
- 菁
- 鹃
- 疱
- 矢
- 拭
- 缀
- 朦
- 胧
- 筏
- 贯
- 汐
- 蛤
- 蟆
- 迩
- 犁
- 馈
- 叽
- 喳
- 袈
- 裟
- 啃
- 敞
- 踊
- 雏
- 朽
- 撩
- 恙
- 亵
- 淤
- 垦
- 眺
- 熄
- 衲
- 伺
- 墟
- 孚
- 墩
- 猬
- 堤
- 鞘
- 署
- 陂
- 鬟
- 萤
- 悯
- 恃
- 峙
- 咄
- 奠
- 跺
- 笆
- 啄
- 殆
- 赅
- 锭
- 铛
- 枷
- 姗
- 驭
- 嘀
- 煲
- 腚
- 霖
- 孪
- 翟
- 濒
- 邂
- 逅
- 筱
- 霓
- 窈
- 窕
- 眨
- 耸
- 羚
- 尉
- 谀
- 竿
- 蛟
- 籽
- 铲
- 潼
- 匆
- 肽
- 戬
- 岔
- 奚
- 裴
- 嘏
- 玥
- 妯
- 昙
- 烨
- 吏
- 鼹
- 筵
- 崭
- 涪
- 來
- 瘆
- 彰
- 杞
- 疽
- 琥
- A
- 栾
- 庵
- 窘
- 擀
- 痤
- 蟾
- 唾
- 嚼
- 癖
- 蛹
- 浸
- 狭
- 迂
- 脍
- 炙
- 覃
- 悖
- 阆
- 铸
- 洮
- 瑙
- 呷
- 呸
- 谛
- 膨
- 柑
- 眯
- 奘
- 吆
- 孰
- 珈
- 曜
- 拈
- 麝
- 嘘
- 缚
- 徕
- 糸
- 崴
- 藓
- 婺
- 揽
- 溧
- 熠
- 膳
- 犊
- 贬
- 脯
- 剿
- 鼬
- 焕
- 胛
- 拷
- 勺
- 鲫
- 炅
- 卒
- 刨
- 糯
- 瘪
- 雍
- 襟
- 酋
- 胤
- 戟
- 褔
- 惆
- 怅
- 阂
- 扉
- 锚
- 砌
- 祺
- 淅
- 濠
- 匀
- 隍
- 氦
- 绫
- 濑
- 佝
- 偻
- 翎
- 颌
- 咚
- 疖
- 媲
- 祗
- 寅
- 靡
- 稞
- 骝
- 锏
- 焖
- 栀
- 蝗
- 甭
- 罄
- 酪
- 酮
- 嘢
- 钨
- 涎
- 沼
- 嚯
- 阱
- 驸
- 爰
- 酌
- 绛
- 畴
- 辄
- 藜
- 碚
- 馥
- 茧
- 鲛
- 溅
- 浯
- 沮
- 蹿
- 诠
- 姊
- 藉
- 骡
- 褪
- 酞
- 臻
- 靛
- 譬
- 粼
- 肘
- 孺
- 苟
- 瓯
- 蕨
- 冉
- 稠
- 蒿
- 锤
- 焙
- 蜃
- 淌
- 瘸
- 汲
- 噼
- 啪
- 橇
- 虔
- 裳
- 煞
- 淳
- 锟
- 摧
- 篷
- 癞
- 凹
- 汹
- 樵
- 睐
- 叁
- 飒
- 舶
- 驷
- 嘚
- 垮
- 妩
- 焚
- 扪
- 溥
- 鹊
- 鹄
- 汴
- 妁
- 廓
- 谙
- 苛
- 喏
- 嬉
- 裆
- 谔
- 哝
- 岑
- 喧
- 咆
- 茁
- 霎
- 泷
- 笃
- 沣
- 戮
- 蓦
- 滢
- 碜
- 滇
- 妤
- 盯
- 眶
- 婶
- 侍
- 崽
- 辘
- 轳
- 斓
- 郢
- 泞
- 窖
- 镭
- 痹
- 缉
- 镐
- 膛
- 睦
- 歧
- 扦
- 筛
- 嵘
- 茗
- 戎
- 萦
- 柒
- 咀
- 诋
- 搁
- 婪
- 漾
- 瀚
- 绎
- 盏
- 庹
- 吩
- 咐
- 堇
- 矾
- 茯
- 苓
- 潦
- 嘁
- 噫
- 窑
- 鳗
- 孵
- 彷
- 徨
- 耕
- 晗
- 撂
- 猿
- 昊
- 淼
- 驯
- 垒
- 铤
- 胱
- 桦
- 铮
- 坳
- 厥
- 叨
- 烙
- 苷
- 殴
- 鸥
- 蜥
- 蜴
- 湟
- 衙
- 敖
- 阐
- 穗
- 攥
- 俾
- 锥
- 粱
- 绰
- 漕
- 钕
- 硼
- 蚤
- 铢
- 疚
- 挟
- 昱
- 栅
- 煦
- 鳝
- 枸
- 锯
- 茜
- 悼
- 跤
- 犍
- 衿
- 筐
- 恪
- 琛
- 砝
- 秆
- 歆
- 晾
- 慑
- 蜍
- 诃
- 盔
- 寇
- 璧
- 鹩
- 恤
- 匿
- 踉
- 焗
- 戍
- 憎
- 桓
- 裔
- 梢
- 蝼
- 贿
- 诽
- 橄
- 榄
- 蔺
- 鲅
- 鳖
- 荞
- 槐
- 砚
- 癣
- 胚
- 沅
- 菀
- 荀
- 亳
- 铵
- 垌
- 釉
- 摁
- 瑕
- 疵
- 泗
- 逵
- 饵
- 旌
- 磺
- 彗
- 娣
- 晟
- 惘
- 棘
- 屹
- 逾
- 淞
- 逑
- 茴
- 楹
- 珀
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huggingtweets/amnananadeem-talal916
|
huggingtweets
| 2021-12-28T12:50:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433365322313043974/gPI08qaY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377835980552474624/sxTjuspv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">halal talal & amna</div>
<div style="text-align: center; font-size: 14px;">@amnananadeem-talal916</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from halal talal & amna.
| Data | halal talal | amna |
| --- | --- | --- |
| Tweets downloaded | 3187 | 3132 |
| Retweets | 484 | 778 |
| Short tweets | 532 | 369 |
| Tweets kept | 2171 | 1985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/42dvu161/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @amnananadeem-talal916's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2irbhtmu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2irbhtmu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/amnananadeem-talal916')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
luomingshuang/icefall_avsr_grid_combinenet_ctc
|
luomingshuang
| 2021-12-28T12:46:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Pre-trained CombineNet-CTC models for the GRID audio-visual dataset with icefall.
The model was trained on full [GRID](https://zenodo.org/record/3625687#.Ybn7HagzY2w) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
See (https://github.com/k2-fsa/icefall/tree/master/egs/grid/AVSR/combinenet_ctc_avsr) for more details of this model.
## How to use
See (https://github.com/k2-fsa/icefall/blob/master/egs/grid/AVSR/combinenet_ctc_avsr/Pre-trained.md)
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/grid/AVSR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0"
python combinenet_ctc_avsr/train.py --world-size 1
```
## Evaluation results
The best decoding result (WER) on the GRID TEST set is listed below. It was obtained by averaging the models from epochs 25 to 29 and decoding with `whole-lattice-rescoring` at an LM scale of 0.01; a checkpoint-averaging sketch follows the table.
||TEST|
|--|--|
|WER|1.71%|
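For reference, checkpoint averaging is just the element-wise mean of the saved model weights. A minimal PyTorch sketch under that assumption (the path pattern is illustrative, and it assumes each checkpoint stores its weights under a `"model"` key as icefall's do; icefall also ships its own averaging utility):
```python
import torch
# Average the weights of the epoch-25..29 checkpoints.
# The path pattern below is illustrative, not the actual experiment layout.
paths = [f"exp/epoch-{i}.pt" for i in range(25, 30)]
avg = None
for path in paths:
    state = torch.load(path, map_location="cpu")["model"]
    if avg is None:
        avg = {k: v.clone().float() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg[k] += v.float()
for k in avg:
    avg[k] /= len(paths)
torch.save({"model": avg}, "exp/epoch-avg-25-29.pt")
```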
|
luomingshuang/icefall_asr_grid_audionet_ctc
|
luomingshuang
| 2021-12-28T12:25:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Pre-trained AudioNet-CTC models for the GRID audio dataset with icefall.
The model was trained on full [GRID](https://zenodo.org/record/3625687#.Ybn7HagzY2w) with the scripts in [icefall](https://github.com/k2-fsa/icefall).
See (https://github.com/k2-fsa/icefall/tree/master/egs/grid/AVSR/audionet_ctc_asr) for more details of this model.
## How to use
See (https://github.com/k2-fsa/icefall/blob/master/egs/grid/AVSR/audionet_ctc_asr/Pre-trained.md)
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/grid/AVSR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0"
python audionet_ctc_asr/train.py --world-size 1
```
## Evaluation results
The best decoding result (WER) on the GRID TEST set is listed below. It was obtained by averaging the models from epochs 25 to 29 and decoding with the `1best` method; a WER-computation sketch follows the table.
||TEST|
|--|--|
|WER|2.35%|
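WER here is the standard word-level Levenshtein distance between hypothesis and reference, divided by the reference length. A minimal self-contained sketch (not the actual icefall scoring script):
```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + cost,  # substitution / match
                           dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1)         # insertion
    return dp[-1][-1] / len(ref)

print(wer("bin blue at f two now", "bin blue at f too now"))  # 1/6 ~= 0.167
```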
|
huggingtweets/talal916
|
huggingtweets
| 2021-12-28T09:23:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/talal916/1640683407279/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433365322313043974/gPI08qaY_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">halal talal</div>
<div style="text-align: center; font-size: 14px;">@talal916</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from halal talal.
| Data | halal talal |
| --- | --- |
| Tweets downloaded | 3187 |
| Retweets | 483 |
| Short tweets | 533 |
| Tweets kept | 2171 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2q5bns0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @talal916's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20wq85ea) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20wq85ea/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/talal916')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/ngrossman81
|
huggingtweets
| 2021-12-28T04:15:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ngrossman81/1640664926929/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/805525876808892417/nSCRZS58_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nicholas Grossman</div>
<div style="text-align: center; font-size: 14px;">@ngrossman81</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nicholas Grossman.
| Data | Nicholas Grossman |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 272 |
| Short tweets | 113 |
| Tweets kept | 2864 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3gkanovn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ngrossman81's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18u9hhz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18u9hhz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ngrossman81')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
caioamb/bert-base-uncased-finetuned-md
|
caioamb
| 2021-12-28T01:22:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-md
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-md
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
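For readers reproducing this run, a hedged sketch of how the values above map onto `transformers.TrainingArguments`; the output directory is illustrative:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-md",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```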
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2415 | 1.0 | 1044 | 0.2084 |
| 0.1244 | 2.0 | 2088 | 0.2903 |
| 0.0427 | 3.0 | 3132 | 0.3329 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
flexudy/cheapity3
|
flexudy
| 2021-12-27T13:06:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
# Cheapity3 🐷
GPT-like T5 model trained to generate text in multiple languages.
## Motivation
- GPT models are expensive to run.
- GPT models are monolingual.
## Solution
- Maybe, Small Models aren't Terrible (*SMarT*)
- Plus, they are cheaper to run.
I fine-tuned T5 on multiple languages (🇬🇧 English, 🇩🇪 German, 🇫🇷 French) and on academic text snippets from
various domains such as tech, law, finance and science, to generate text just like GPT models do.
## Usage - [NLPlayStore](https://github.com/flexudy/NLPlayStore) 👈
```python
from store.service_management import ServiceManager
service = ServiceManager().get_service("cheapity3")
service.install()
service = service.launch()
input_text = "The mechanical engineering field requires ... "
generated_texts = service.play(input_text, 15)  # a list of generated texts
```
## Usage - Hugging Face Transformers 🤗
- Provide some text e.g `"Italy, officially the Italian Republic is a country consisting of"`
- Tell Cheapity3 how many words you want to generate e.g `15` -- 😃 Yes, you can control the length.
- Cheapity3 reads your text and generates a continuation containing approximately 15 words.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("flexudy/cheapity3")
model = AutoModelWithLMHead.from_pretrained("flexudy/cheapity3")
input_text = """The mechanical engineering field requires an understanding of core areas including mechanics, dynamics,
thermodynamics, materials science, structural analysis, and
electricity. { _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ }""" # 15 words
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
outputs = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
max_length=128,
do_sample=True,
early_stopping=True,
num_return_sequences=4,
repetition_penalty=2.5
)
for i in range(4):
print(tokenizer.decode(outputs[i], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
**INPUT: The mechanical engineering field requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, structural analysis, and electricity.**
```
> Cheapity3 continues with beam search:
... The field of mechanical engineering is a broad field that includes many core areas of engineering.
> Cheapity3 continues with sampling and top_k=50:
... Developing the knowledge base for these core areas will enable engineers to build their capabilities rapidly and efficiently. ...
... The field of mechanics offers a variety and broad range for applications throughout the engineering/technological fields. ...
... Mechanics generally is not understood by students. While they can be employed in the field, mechanical engineering ...
... Introduction to mechanical engineering and core fields including chemical products, materials science, structural analysis, and geomatics ...
```
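The `{ _ _ ... }` block is how the word budget is passed to the model: roughly one underscore per word to generate. A tiny helper (the function name is mine, not part of the model's API) makes that explicit:
```python
def make_prompt(text: str, n_words: int) -> str:
    # Cheapity3 reads the trailing "{ _ _ ... }" block as a length hint,
    # roughly one underscore per requested word.
    return text + " { " + "_ " * n_words + "}"

print(make_prompt("Italy, officially the Italian Republic is a country consisting of", 15))
```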
## Pretty decent, right?
Hence, whenever you feel like GPT3 is too expensive, Cheapity3 comes to the rescue 🤗.
## Model Training FYI
- T5-base model
- Trained on ONLY 1M sentences from English, French and German text
- Mostly text from Wikipedia, arxiv and QA datasets
- Learning rate: 0.00003
- 2 epochs
- Max input: 512 tokens
- Max output: 128 tokens
|
tiennvcs/layoutlmv2-large-uncased-finetuned-vi-infovqa
|
tiennvcs
| 2021-12-27T11:54:10Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-large-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-large-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-large-uncased](https://huggingface.co/microsoft/layoutlmv2-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.17 | 100 | 4.6181 |
| No log | 0.33 | 200 | 4.3357 |
| No log | 0.5 | 300 | 4.3897 |
| No log | 0.66 | 400 | 4.8238 |
| 4.4277 | 0.83 | 500 | 3.9088 |
| 4.4277 | 0.99 | 600 | 3.6063 |
| 4.4277 | 1.16 | 700 | 3.4278 |
| 4.4277 | 1.32 | 800 | 3.5428 |
| 4.4277 | 1.49 | 900 | 3.4331 |
| 3.0413 | 1.65 | 1000 | 3.3699 |
| 3.0413 | 1.82 | 1100 | 3.3622 |
| 3.0413 | 1.98 | 1200 | 3.5294 |
| 3.0413 | 2.15 | 1300 | 3.7918 |
| 3.0413 | 2.31 | 1400 | 3.4007 |
| 2.0843 | 2.48 | 1500 | 4.0296 |
| 2.0843 | 2.64 | 1600 | 4.1852 |
| 2.0843 | 2.81 | 1700 | 3.6690 |
| 2.0843 | 2.97 | 1800 | 3.6089 |
| 2.0843 | 3.14 | 1900 | 5.5534 |
| 1.7527 | 3.3 | 2000 | 4.7498 |
| 1.7527 | 3.47 | 2100 | 5.2691 |
| 1.7527 | 3.63 | 2200 | 5.1324 |
| 1.7527 | 3.8 | 2300 | 4.5912 |
| 1.7527 | 3.96 | 2400 | 4.1727 |
| 1.2037 | 4.13 | 2500 | 6.1174 |
| 1.2037 | 4.29 | 2600 | 5.7172 |
| 1.2037 | 4.46 | 2700 | 5.8843 |
| 1.2037 | 4.62 | 2800 | 6.4232 |
| 1.2037 | 4.79 | 2900 | 7.4486 |
| 0.8386 | 4.95 | 3000 | 7.1946 |
| 0.8386 | 5.12 | 3100 | 7.9869 |
| 0.8386 | 5.28 | 3200 | 8.0310 |
| 0.8386 | 5.45 | 3300 | 8.2954 |
| 0.8386 | 5.61 | 3400 | 8.5361 |
| 0.4389 | 5.78 | 3500 | 8.6040 |
| 0.4389 | 5.94 | 3600 | 8.5806 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu101
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tiennvcs/bert-large-uncased-finetuned-vi-infovqa
|
tiennvcs
| 2021-12-27T08:30:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-vi-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-vi-infovqa
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.4878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.11 | 100 | 4.6256 |
| No log | 0.21 | 200 | 4.4042 |
| No log | 0.32 | 300 | 5.0021 |
| No log | 0.43 | 400 | 4.2825 |
| 4.6758 | 0.53 | 500 | 4.3886 |
| 4.6758 | 0.64 | 600 | 4.2519 |
| 4.6758 | 0.75 | 700 | 4.2977 |
| 4.6758 | 0.85 | 800 | 3.9916 |
| 4.6758 | 0.96 | 900 | 4.1650 |
| 4.1715 | 1.07 | 1000 | 4.5001 |
| 4.1715 | 1.17 | 1100 | 4.0898 |
| 4.1715 | 1.28 | 1200 | 4.1623 |
| 4.1715 | 1.39 | 1300 | 4.3271 |
| 4.1715 | 1.49 | 1400 | 3.9661 |
| 3.7926 | 1.6 | 1500 | 3.8727 |
| 3.7926 | 1.71 | 1600 | 3.8934 |
| 3.7926 | 1.81 | 1700 | 3.7262 |
| 3.7926 | 1.92 | 1800 | 3.7701 |
| 3.7926 | 2.03 | 1900 | 3.7653 |
| 3.5041 | 2.13 | 2000 | 3.9261 |
| 3.5041 | 2.24 | 2100 | 4.0915 |
| 3.5041 | 2.35 | 2200 | 4.0348 |
| 3.5041 | 2.45 | 2300 | 4.0212 |
| 3.5041 | 2.56 | 2400 | 4.4653 |
| 2.8475 | 2.67 | 2500 | 4.2959 |
| 2.8475 | 2.77 | 2600 | 4.1039 |
| 2.8475 | 2.88 | 2700 | 3.8037 |
| 2.8475 | 2.99 | 2800 | 3.7552 |
| 2.8475 | 3.09 | 2900 | 4.2476 |
| 2.5488 | 3.2 | 3000 | 4.6716 |
| 2.5488 | 3.3 | 3100 | 4.7058 |
| 2.5488 | 3.41 | 3200 | 4.6266 |
| 2.5488 | 3.52 | 3300 | 4.5697 |
| 2.5488 | 3.62 | 3400 | 5.1017 |
| 2.0347 | 3.73 | 3500 | 4.6254 |
| 2.0347 | 3.84 | 3600 | 4.4822 |
| 2.0347 | 3.94 | 3700 | 4.9413 |
| 2.0347 | 4.05 | 3800 | 5.3600 |
| 2.0347 | 4.16 | 3900 | 5.7323 |
| 1.6566 | 4.26 | 4000 | 5.8822 |
| 1.6566 | 4.37 | 4100 | 6.0173 |
| 1.6566 | 4.48 | 4200 | 5.6688 |
| 1.6566 | 4.58 | 4300 | 6.0617 |
| 1.6566 | 4.69 | 4400 | 6.6631 |
| 1.3348 | 4.8 | 4500 | 6.0290 |
| 1.3348 | 4.9 | 4600 | 6.2455 |
| 1.3348 | 5.01 | 4700 | 6.0963 |
| 1.3348 | 5.12 | 4800 | 7.0983 |
| 1.3348 | 5.22 | 4900 | 7.5483 |
| 1.0701 | 5.33 | 5000 | 7.7187 |
| 1.0701 | 5.44 | 5100 | 7.4630 |
| 1.0701 | 5.54 | 5200 | 7.1394 |
| 1.0701 | 5.65 | 5300 | 7.0703 |
| 1.0701 | 5.76 | 5400 | 7.5611 |
| 0.9414 | 5.86 | 5500 | 7.6038 |
| 0.9414 | 5.97 | 5600 | 7.4878 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
SEISHIN/distilbert-base-uncased-finetuned-ner
|
SEISHIN
| 2021-12-27T07:53:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9289272666888077
- name: Recall
type: recall
value: 0.9386956035350711
- name: F1
type: f1
value: 0.933785889160917
- name: Accuracy
type: accuracy
value: 0.9842565968195466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0605
- Precision: 0.9289
- Recall: 0.9387
- F1: 0.9338
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2388 | 1.0 | 878 | 0.0671 | 0.9162 | 0.9211 | 0.9187 | 0.9813 |
| 0.0504 | 2.0 | 1756 | 0.0602 | 0.9225 | 0.9366 | 0.9295 | 0.9834 |
| 0.0299 | 3.0 | 2634 | 0.0605 | 0.9289 | 0.9387 | 0.9338 | 0.9843 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
xkang/distilbert-base-uncased-finetuned-imdb
|
xkang
| 2021-12-27T07:30:09Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7096 | 1.0 | 157 | 2.4920 |
| 2.5741 | 2.0 | 314 | 2.4237 |
| 2.5386 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
SEISHIN/distilbert-base-uncased-finetuned-squad
|
SEISHIN
| 2021-12-27T05:27:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2172 | 1.0 | 5533 | 1.1532 |
| 0.9446 | 2.0 | 11066 | 1.1184 |
| 0.7671 | 3.0 | 16599 | 1.1605 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ayham/roberta_gpt2_new_max64_summarization_cnndm
|
Ayham
| 2021-12-27T00:19:01Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: roberta_gpt2_new_max64_summarization_cnndm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_gpt2_new_max64_summarization_cnndm
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
lakahaga/novel_reading_tts
|
lakahaga
| 2021-12-26T17:45:00Z | 0 | 4 |
espnet
|
[
"espnet",
"audio",
"text-to-speech",
"ko",
"dataset:novelspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
text-to-speech
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- text-to-speech
language: ko
datasets:
- novelspeech
license: cc-by-4.0
---
## ESPnet2 TTS model
### `lakahaga/novel_reading_tts`
This model was trained by lakahaga using the novelspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 9827dfe37f69e8e55f902dc4e340de5108596311
pip install -e .
cd egs2/novelspeech/tts1
./run.sh --skip_data_prep false --skip_train true --download_model lakahaga/novel_reading_tts
```
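A hedged Python alternative via the `espnet_model_zoo` package (a sketch, not the official usage; given the speaker-ID inputs in the config below, inference may additionally require a `sids` argument):
```python
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.tts_inference import Text2Speech

# Download the packed model and build a Text2Speech instance from it.
tts = Text2Speech(**ModelDownloader().download_and_unpack("lakahaga/novel_reading_tts"))
output = tts("안녕하세요.")  # Korean input text
sf.write("out.wav", output["wav"].numpy(), tts.fs, "PCM_16")
```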
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_tacotron_none
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 34177
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 25600000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/speech_shape
valid_shape_file:
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/text_shape.phn
- exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/collect_feats/energy.scp
- energy
- npy
- - dump/raw/tr_no_dev/utt2sid
- sids
- text_int
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//valid/collect_feats/energy.scp
- energy
- npy
- - dump/raw/dev/utt2sid
- sids
- text_int
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- '='
- _
- A
- Y
- N
- O
- E
- U
- L
- G
- S
- D
- M
- J
- H
- B
- ZERO
- TWO
- C
- .
- Q
- ','
- P
- T
- SEVEN
- X
- W
- THREE
- ONE
- NINE
- K
- EIGHT
- '@'
- '!'
- Z
- '?'
- F
- SIX
- FOUR
- '#'
- $
- +
- '%'
- FIVE
- '~'
- AND
- '*'
- '...'
- ''
- ^
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: null
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/feats_stats.npz
tts: fastspeech2
tts_conf:
adim: 384
aheads: 2
elayers: 4
eunits: 1536
dlayers: 4
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
encoder_type: conformer
decoder_type: conformer
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
reduction_factor: 1
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_train_raw_phn_tacotron_none/decode_use_teacher_forcingtrue_train.loss.best/stats//train/energy_stats.npz
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wilsontam/gpt2-dstc9
|
wilsontam
| 2021-12-26T14:02:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"dstc9",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: "en"
tags:
- dstc9
widget:
- text: "Yes, I'm going to be in Chinatown, San Francisco and am looking"
- text: "Can you find me one that is in the"
---
This GPT-2 model was trained on DSTC9 data for dialogue modeling purposes; a minimal generation sketch is shown below.
Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset
Credit: Jia-Chen Jason Gu, Wilson Tam
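A minimal generation sketch, assuming standard `transformers` text-generation usage:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="wilsontam/gpt2-dstc9")
print(generator("Can you find me one that is in the", max_length=40)[0]["generated_text"])
```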
|
airKlizz/mt5-base-wikinewssum-spanish
|
airKlizz
| 2021-12-25T23:19:15Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-wikinewssum-spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2394
- Rouge1: 7.9732
- Rouge2: 3.5041
- Rougel: 6.6713
- Rougelsum: 7.5229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 528 | 2.3707 | 6.687 | 2.9169 | 5.6793 | 6.2978 |
| No log | 2.0 | 1056 | 2.3140 | 7.9518 | 3.4529 | 6.7265 | 7.4984 |
| No log | 3.0 | 1584 | 2.2848 | 7.9708 | 3.5344 | 6.7272 | 7.534 |
| No log | 4.0 | 2112 | 2.2668 | 8.0252 | 3.5323 | 6.7319 | 7.5819 |
| 3.2944 | 5.0 | 2640 | 2.2532 | 8.0143 | 3.534 | 6.7155 | 7.582 |
| 3.2944 | 6.0 | 3168 | 2.2399 | 7.9525 | 3.4849 | 6.6716 | 7.5155 |
| 3.2944 | 7.0 | 3696 | 2.2376 | 7.9405 | 3.4661 | 6.6559 | 7.5043 |
| 3.2944 | 8.0 | 4224 | 2.2394 | 7.9732 | 3.5041 | 6.6713 | 7.5229 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Andry/1111
|
Andry
| 2021-12-25T20:04:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:04Z |
C:\Users\andry\Desktop\Выжигание 24-12-2021.jpg
|
s3h/finetuned-mt5-gec
|
s3h
| 2021-12-25T18:38:46Z | 61 | 1 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: s3h/finetuned-mt5-gec
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s3h/finetuned-mt5-gec
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 23.1236
- Validation Loss: 26.8482
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 3, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
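The optimizer dict above corresponds to `transformers.create_optimizer` with the default power-1.0 (i.e. linear) polynomial decay; a hedged sketch:
```python
from transformers import create_optimizer

# Mirrors the AdamWeightDecay/PolynomialDecay settings recorded above;
# num_train_steps=3 matches the (very short) decay schedule in the config.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=3,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```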
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 23.1236 | 26.8482 | 0 |
### Framework versions
- Transformers 4.14.1
- TensorFlow 2.6.2
- Datasets 1.17.0
- Tokenizers 0.10.3
|