modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
arnolfokam/bert-base-uncased-kin | arnolfokam | 2021-11-24T11:07:08Z | 15 | 0 | transformers | ["transformers", "pytorch", "bert", "token-classification", "NER", "kin", "dataset:masakhaner", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
language:
- kin
tags:
- NER
datasets:
- masakhaner
metrics:
- f1
- precision
- recall
license: apache-2.0
widget:
- text: "Ambasaderi Bellomo yavuze ko bishimira ubufatanye burambye hagati ya EU n’u Rwanda, bushingiye nanone ku bufatanye hagati y’imigabane ya Afurika n’u Burayi."
---
# Model description
**bert-base-uncased-kin** is a fine-tuned BERT base (uncased) model. It has been trained to recognize four types of entities:
- Dates & times (DATE)
- Locations (LOC)
- Organizations (ORG)
- Persons (PER)
# Intended Use
- Intended for research purposes concerning Named Entity Recognition for African languages.
- Not intended for practical, production use.
# Training Data
This model was fine-tuned on the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset. However, we capped the number of entity groups per sentence in this dataset at 10.
# Training procedure
This model was trained on a single NVIDIA P5000 from [Paperspace](https://www.paperspace.com)
#### Hyperparameters
- **Learning Rate:** 5e-5
- **Batch Size:** 32
- **Maximum Sequence Length:** 164
- **Epochs:** 30
# Evaluation Data
We evaluated this model on the test split of the Kinyarwanda corpus **(kin)** of the [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, with no thresholding.
# Metrics
- Precision
- Recall
- F1-score
# Limitations
- The size of the pre-trained language model prevents its usage in anything other than research.
- The lack of analysis concerning bias and fairness in these models may make them dangerous if deployed into production systems.
- The training data is a less populated version of the original dataset in terms of entity groups per sentence, which can negatively impact performance.
# Caveats and Recommendations
- The topics in the dataset corpus are centered around **News**. Future training could be done with a more diverse corpus.
# Results
| Model Name | Precision | Recall | F1-score |
|---|---|---|---|
| **bert-base-uncased-kin** | 75.00 | 80.09 | 77.47 |
# Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("arnolfokam/bert-base-uncased-kin")
model = AutoModelForTokenClassification.from_pretrained("arnolfokam/bert-base-uncased-kin")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Rayon Sports yasinyishije rutahizamu w’Umurundi"
ner_results = nlp(example)
print(ner_results)
```
|
Peterard/distilbert_bug_classifier | Peterard | 2021-11-24T04:01:55Z | 4 | 2 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- text-classification
widget:
- text: "The app crashed when I opened it this morning. Can you fix this please?"
example_title: "Likely bug report"
- text: "Please add a like button!"
example_title: "Unlikely bug report"
---
How to use this classifier:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="Peterard/distilbert_bug_classifier")
pipe("The app crashed when I opened it this morning. Can you fix this please?")
# [{'label': 'bug', 'score': 0.9042391180992126}]
pipe("Please add a like button!")
# [{'label': 'no_bug', 'score': 0.9977496266365051}]
```
N.B. The returned label will be whichever class is likelier.
|
jb2k/bert-base-multilingual-cased-language-detection | jb2k | 2021-11-24T01:36:01Z | 4,142 | 14 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
# bert-base-multilingual-cased-language-detection
A model for language detection with support for 45 languages
## Model description
This model was created by fine-tuning
[bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset.
This dataset has support for 45 languages, which are listed below:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
## Evaluation
This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics:
* Accuracy: 97.8%
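The card does not include a usage snippet; the following is a minimal sketch (not part of the original card), assuming the checkpoint loads as a standard `transformers` text-classification model and that its config maps label ids to the language names above:
```python
from transformers import pipeline

# Hedged sketch: classify the language of a sentence with the fine-tuned checkpoint.
detector = pipeline("text-classification", model="jb2k/bert-base-multilingual-cased-language-detection")
print(detector("Bonjour, comment allez-vous ?"))
# Expected output shape: [{'label': <language>, 'score': <confidence>}]
```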
|
vitusya/distilbert-base-uncased-finetuned-squad | vitusya | 2021-11-23T21:15:03Z | 8 | 0 | transformers | ["transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1610
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2137 | 1.0 | 5533 | 1.1625 |
| 0.9496 | 2.0 | 11066 | 1.1263 |
| 0.7591 | 3.0 | 16599 | 1.1610 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
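The card above omits a usage example; here is a minimal sketch (not from the original card), assuming the standard `transformers` question-answering pipeline:
```python
from transformers import pipeline

# Hedged sketch: extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="vitusya/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```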
|
AryanLala/autonlp-Scientific_Title_Generator-34558227 | AryanLala | 2021-11-23T16:51:34Z | 8 | 19 | transformers | ["transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:AryanLala/autonlp-data-Scientific_Title_Generator", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:04Z |
---
tags: autonlp
language: en
widget:
- text: "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets."
datasets:
- AryanLala/autonlp-data-Scientific_Title_Generator
co2_eq_emissions: 137.60574081887984
---
# Model Trained Using AutoNLP
- Model: Google's Pegasus (https://huggingface.co/google/pegasus-xsum)
- Problem type: Summarization
- Model ID: 34558227
- CO2 Emissions (in grams): 137.60574081887984
- Spaces: https://huggingface.co/spaces/TitleGenerators/ArxivTitleGenerator
- Dataset: arXiv Dataset (https://www.kaggle.com/Cornell-University/arxiv)
- Data subset used: https://huggingface.co/datasets/AryanLala/autonlp-data-Scientific_Title_Generator
## Validation Metrics
- Loss: 2.578599214553833
- Rouge1: 44.8482
- Rouge2: 24.4052
- RougeL: 40.1716
- RougeLsum: 40.1396
- Gen Len: 11.4675
## Social
- LinkedIn: https://www.linkedin.com/in/aryanlala/
- Twitter: https://twitter.com/AryanLala20
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/AryanLala/autonlp-Scientific_Title_Generator-34558227
```
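As an alternative to the Inference API, a hedged Python sketch (not part of the original card) using the `transformers` summarization pipeline, which Pegasus checkpoints support; the abstract and `max_length` are illustrative:
```python
from transformers import pipeline

# Sketch: generate a title for a paper abstract with the fine-tuned Pegasus model.
title_generator = pipeline("summarization", model="AryanLala/autonlp-Scientific_Title_Generator-34558227")
abstract = "Datasets is a community library for contemporary NLP designed to support this ecosystem."
print(title_generator(abstract, max_length=32))
```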
|
DeepPavlov/rubert-base-cased | DeepPavlov | 2021-11-23T08:03:04Z | 205,575 | 95 | transformers | ["transformers", "pytorch", "jax", "bert", "feature-extraction", "ru", "arxiv:1905.07213", "endpoints_compatible", "region:us"] | feature-extraction | 2022-03-02T23:29:04Z |
---
language:
- ru
---
# rubert-base-cased
RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) was trained on the Russian part of Wikipedia and news data. We used this training data to build a vocabulary of Russian subtokens and took a multilingual version of BERT-base as an initialization for RuBERT [1].
08.11.2021: uploaded the model with MLM and NSP heads.
[1]: Kuratov, Y., Arkhipov, M. (2019). Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. arXiv preprint [arXiv:1905.07213](https://arxiv.org/abs/1905.07213).
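A minimal sketch for extracting contextual embeddings (this snippet is not part of the original card; it assumes the standard `transformers` API):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = AutoModel.from_pretrained("DeepPavlov/rubert-base-cased")

# Encode a Russian sentence and inspect the token-level hidden states.
inputs = tokenizer("Привет, мир!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```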
|
gayanin/t5-small-mlm-pubmed-45 | gayanin | 2021-11-22T23:47:01Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-mlm-pubmed-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mlm-pubmed-45
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6395
- Rouge2 Precision: 0.3383
- Rouge2 Recall: 0.2424
- Rouge2 Fmeasure: 0.2753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 2.519 | 0.75 | 500 | 1.9659 | 0.3178 | 0.1888 | 0.2299 |
| 2.169 | 1.51 | 1000 | 1.8450 | 0.3256 | 0.2138 | 0.25 |
| 2.0796 | 2.26 | 1500 | 1.7900 | 0.3368 | 0.2265 | 0.2636 |
| 1.9978 | 3.02 | 2000 | 1.7553 | 0.3427 | 0.234 | 0.2709 |
| 1.9686 | 3.77 | 2500 | 1.7172 | 0.3356 | 0.2347 | 0.2692 |
| 1.9142 | 4.52 | 3000 | 1.6986 | 0.3358 | 0.238 | 0.2715 |
| 1.921 | 5.28 | 3500 | 1.6770 | 0.3349 | 0.2379 | 0.2709 |
| 1.8848 | 6.03 | 4000 | 1.6683 | 0.3346 | 0.2379 | 0.2708 |
| 1.8674 | 6.79 | 4500 | 1.6606 | 0.3388 | 0.2419 | 0.2752 |
| 1.8606 | 7.54 | 5000 | 1.6514 | 0.3379 | 0.2409 | 0.274 |
| 1.8515 | 8.3 | 5500 | 1.6438 | 0.3356 | 0.2407 | 0.2731 |
| 1.8403 | 9.05 | 6000 | 1.6401 | 0.3367 | 0.2421 | 0.2744 |
| 1.8411 | 9.8 | 6500 | 1.6395 | 0.3383 | 0.2424 | 0.2753 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huggingtweets/dril-horse_ebooks-pukicho | huggingtweets | 2021-11-22T22:54:49Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/dril-horse_ebooks-pukicho/1637621684272/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/866045441942487041/xRAnnstd_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1096005346/1_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Pukicho & Horse ebooks</div>
<div style="text-align: center; font-size: 14px;">@dril-horse_ebooks-pukicho</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Pukicho & Horse ebooks.
| Data | wint | Pukicho | Horse ebooks |
| --- | --- | --- | --- |
| Tweets downloaded | 3226 | 2989 | 3200 |
| Retweets | 466 | 90 | 0 |
| Short tweets | 308 | 292 | 421 |
| Tweets kept | 2452 | 2607 | 2779 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29iqmln0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-horse_ebooks-pukicho's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29cfj39j) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29cfj39j/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-horse_ebooks-pukicho')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gayanin/bart-mlm-pubmed-35 | gayanin | 2021-11-22T21:16:10Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-35
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9359
- Rouge2 Precision: 0.5451
- Rouge2 Recall: 0.4232
- Rouge2 Fmeasure: 0.4666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.4156 | 1.0 | 663 | 1.0366 | 0.5165 | 0.3967 | 0.4394 |
| 1.1773 | 2.0 | 1326 | 0.9841 | 0.5354 | 0.4168 | 0.4589 |
| 1.0894 | 3.0 | 1989 | 0.9554 | 0.5346 | 0.4133 | 0.4563 |
| 0.9359 | 4.0 | 2652 | 0.9440 | 0.5357 | 0.4163 | 0.4587 |
| 0.8758 | 5.0 | 3315 | 0.9340 | 0.5428 | 0.4226 | 0.465 |
| 0.8549 | 6.0 | 3978 | 0.9337 | 0.5385 | 0.422 | 0.4634 |
| 0.7743 | 7.0 | 4641 | 0.9330 | 0.542 | 0.422 | 0.4647 |
| 0.7465 | 8.0 | 5304 | 0.9315 | 0.5428 | 0.4231 | 0.4654 |
| 0.7348 | 9.0 | 5967 | 0.9344 | 0.5462 | 0.4244 | 0.4674 |
| 0.7062 | 10.0 | 6630 | 0.9359 | 0.5451 | 0.4232 | 0.4666 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
renBaikau/alphaDelay | renBaikau | 2021-11-22T12:21:47Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: alphaDelay
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alphaDelay
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6648
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 82.3335 | 5.0 | 25 | 14.0648 | 1.0 |
| 6.1049 | 10.0 | 50 | 3.7145 | 1.0 |
| 3.9873 | 15.0 | 75 | 3.6648 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
malteos/aspect-cord19-scibert-scivocab-uncased | malteos | 2021-11-22T10:13:31Z | 7 | 1 | transformers | ["transformers", "pytorch", "bert", "classification", "similarity", "sci", "en", "dataset:cord19", "arxiv:2010.06395", "license:mit", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language:
- sci
- en
tags:
- classification
- similarity
license: mit
datasets:
- cord19
---
# Aspect-based Document Similarity for Research Papers
A `scibert-scivocab-uncased` model fine-tuned on the CORD-19 corpus as in [Aspect-based Document Similarity for Research Papers](https://arxiv.org/abs/2010.06395).
<img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/docrel.png">
See GitHub for more details: https://github.com/malteos/aspect-document-similarity
## Demo
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Google Colab"></a>
You can try our trained models directly on Google Colab on all papers available on Semantic Scholar (via DOI, ArXiv ID, ACL ID, PubMed ID):
<a href="https://colab.research.google.com/github/malteos/aspect-document-similarity/blob/master/demo.ipynb"><img src="https://raw.githubusercontent.com/malteos/aspect-document-similarity/master/demo.gif" alt="Click here for demo"></a>
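As a rough, hedged illustration only (not from the original card): the checkpoint can be loaded with the standard `transformers` classes, but consult the linked GitHub repository for how paper pairs are concatenated and how the aspect labels are defined:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hedged sketch: loading only; input formatting and label interpretation follow the repository.
tokenizer = AutoTokenizer.from_pretrained("malteos/aspect-cord19-scibert-scivocab-uncased")
model = AutoModelForSequenceClassification.from_pretrained("malteos/aspect-cord19-scibert-scivocab-uncased")
```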
|
ThomasSimonini/mlagents-snowballfight-1vs1-ppo | ThomasSimonini | 2021-11-22T09:54:35Z | 0 | 0 | null | ["deep-reinforcement-learning", "reinforcement-learning", "mlagents", "license:apache-2.0", "region:us"] | reinforcement-learning | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- deep-reinforcement-learning
- reinforcement-learning
- mlagents
environment:
- MLAgents: Snowballfight-1vs1-ppo
model-index:
- name: mlagents-snowballfight-1vs1-ppo
---
# mlagents-snowballfight-1vs1-ppo ☃️
This is a saved model of a PPO 1vs1 agent playing Snowball Fight.
|
khalidalt/DeBERTa-v3-large-mnli | khalidalt | 2021-11-22T08:38:23Z | 54 | 5 | transformers | ["transformers", "pytorch", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2006.03654", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."
---
# DeBERTa-v3-large-mnli
## Model description
This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information.
The base model is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-v3-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654).
## Intended uses & limitations
#### How to use the model
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("khalidalt/DeBERTa-v3-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("khalidalt/DeBERTa-v3-large-mnli").to(device)

premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."

# Encode the premise/hypothesis pair and predict the NLI label.
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1)
label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```
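Because the checkpoint is an NLI model, it can also back the zero-shot-classification pipeline; a brief hedged sketch (not part of the original card, and it assumes the model config exposes an entailment label):
```python
from transformers import pipeline

# Sketch: zero-shot classification on top of the MNLI-trained entailment head.
classifier = pipeline("zero-shot-classification", model="khalidalt/DeBERTa-v3-large-mnli")
print(classifier("I liked the movie.", candidate_labels=["positive", "negative"]))
```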
### Training data
This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs annotated with textual entailment.
### Training procedure
DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```python
train_args = TrainingArguments(
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
warmup_ratio=0.06,
weight_decay=0.1,
fp16=True,
seed=42,
)
```
### BibTeX entry and citation info
Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and the [MultiNLI dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model, and link back to this Hugging Face Hub repository.
|
wukevin/tcr-bert-mlm-only | wukevin | 2021-11-22T08:32:41Z | 4,032 | 4 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
Pretrained on:
* Masked amino acid modeling
Please see our [main model](https://huggingface.co/wukevin/tcr-bert) for additional details.
|
snunlp/KR-Medium | snunlp | 2021-11-22T06:19:42Z | 173 | 7 | transformers | ["transformers", "pytorch", "jax", "bert", "ko", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
language:
- ko
---
# KR-BERT-MEDIUM
A pretrained Korean-specific BERT model developed by Computational Linguistics Lab at Seoul National University.
It is based on our character-level [KR-BERT](https://github.com/snunlp/KR-BERT) model, which uses a WordPiece tokenizer.
The model name carries the suffix 'MEDIUM' because its training data grew beyond KR-BERT's original dataset; we also have an additional model, KR-BERT-EXPANDED, whose training data is expanded even further than that of KR-BERT-MEDIUM.
<br>
### Vocab, Parameters and Data
| | Multilingual BERT<br>(Google) | KorBERT<br>(ETRI) | KoBERT<br>(SKT) | KR-BERT character | KR-BERT-MEDIUM |
| -------------: | ---------------------------------------------: | ---------------------: | ----------------------------------: | -------------------------------------: | -------------------------------------: |
| vocab size | 119,547 | 30,797 | 8,002 | 16,424 | 20,000 |
| parameter size | 167,356,416 | 109,973,391 | 92,186,880 | 99,265,066 | 102,015,010 |
| data size | -<br>(The Wikipedia data<br>for 104 languages) | 23GB<br>4.7B morphemes | -<br>(25M sentences,<br>233M words) | 2.47GB<br>20M sentences,<br>233M words | 12.37GB<br>91M sentences,<br>1.17B words |
<br>
The training data for this model expands on that of KR-BERT (texts from Korean Wikipedia and news articles) by adding legal texts crawled from the National Law Information Center and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). This expansion collects texts from more varied domains than those of KR-BERT. The total data size is about 12.37GB, consisting of 91M sentences and 1.17B words.
The user-generated comment dataset is expected to have stylistic properties similar to the task datasets of NSMC and HSD. Such text includes abbreviations, coinages, emoticons, spacing errors, and typos. We therefore added this dataset, with its online characteristics, to our existing formal data (news articles and Wikipedia texts) to compose the training data for KR-BERT-MEDIUM. Accordingly, KR-BERT-MEDIUM reported better results in sentiment analysis than other models, and performance improved as the training data became larger and more varied.
This model’s vocabulary size is 20,000, whose tokens are trained based on the expanded training data using the WordPiece tokenizer.
KR-BERT-MEDIUM was trained for 2M steps with a maximum sequence length of 128, a training batch size of 64, and a learning rate of 1e-4, taking 22 hours on a Google Cloud TPU v3-8.
### Models
#### TensorFlow
* BERT tokenizer, character-based model ([download](https://drive.google.com/file/d/1OWXGqr2Z2PWD6ST3MsFmcjM8c2mr8PkE/view?usp=sharing))
#### PyTorch
* You can import it from Transformers!
```sh
# pytorch, transformers
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("snunlp/KR-Medium", do_lower_case=False)
model = AutoModel.from_pretrained("snunlp/KR-Medium")
```
### Requirements
- transformers == 4.0.0
- tensorflow < 2.0
## Downstream tasks
* Movie Review Classification on Naver Sentiment Movie Corpus [(NSMC)](https://github.com/e9t/nsmc)
* Hate Speech Detection [(Moon et al., 2020)](https://github.com/kocohub/korean-hate-speech)
#### tensorflow
* After downloading our pre-trained models, put them in a `models` directory.
* Set the output directory (for fine-tuning)
* Select task name: `NSMC` for Movie Review Classification, and `HATE` for Hate Speech Detection
```sh
# tensorflow
python3 run_classifier.py \
--task_name={NSMC, HATE} \
--do_train=true \
--do_eval=true \
--do_predict=true \
--do_lower_case=False\
--max_seq_length=128 \
--train_batch_size=128 \
--learning_rate=5e-05 \
--num_train_epochs=5.0 \
--output_dir={output_dir}
```
<br>
### Performances
TensorFlow, test set performances
| | multilingual BERT | KorBERT<br>character | KR-BERT<br>character<br>WordPiece | KR-BERT-MEDIUM |
|:-----:|-------------------:|----------------:|----------------------------:|-----------------------------------------:|
| NSMC (Acc) | 86.82 | 89.81 | 89.74 | 90.29 |
| Hate Speech (F1) | 52.03 | 54.33 | 54.53 | 57.91 |
<br>
## Contacts
nlp.snu@gmail.com
|
kaggleodin/distilbert-base-uncased-finetuned-squad | kaggleodin | 2021-11-22T04:08:36Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2291 | 1.0 | 5533 | 1.1581 |
| 0.9553 | 2.0 | 11066 | 1.1249 |
| 0.7767 | 3.0 | 16599 | 1.1639 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
yuekai/espnet-slu-snips | yuekai | 2021-11-22T02:04:42Z | 0 | 0 | null | ["region:us"] | null | 2022-03-02T23:29:05Z |
---
language:
- en
recipe: "https://github.com/espnet/espnet/tree/master/egs2/snips/asr1"
datasets:
- snips: smart-lights-en-close-field
metrics:
- F1 score: 91.7
---
Fine-tuned on the SNIPS dataset for the SLU task, using a pretrained ASR model with HuBERT features.
|
Ulto/pythonCoPilot3 | Ulto | 2021-11-22T01:24:16Z | 6 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
teven/roberta_kelm_tekgen | teven | 2021-11-22T01:04:55Z | 3 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/roberta_kelm_tekgen
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/roberta_kelm_tekgen')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/roberta_kelm_tekgen')
model = AutoModel.from_pretrained('teven/roberta_kelm_tekgen')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/roberta_kelm_tekgen)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 976035 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 394379 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
[
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
]
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Ulto/pythonCoPilot2 | Ulto | 2021-11-22T00:24:53Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot2
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0479
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 427 | 4.3782 |
| 4.6698 | 2.0 | 854 | 4.0718 |
| 3.3953 | 3.0 | 1281 | 4.0479 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Ulto/pythonCoPilot | Ulto | 2021-11-21T23:49:37Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
KrishParikh/gpt2_imdb_movie_plots | KrishParikh | 2021-11-21T20:11:06Z | 5 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-plot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-plot
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.15.1
- Tokenizers 0.10.3
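The card omits a usage example; here is a hedged sketch (not from the original card) with the standard text-generation pipeline, where the prompt and generation settings are illustrative:
```python
from transformers import pipeline

# Sketch: generate a movie-plot continuation from a short prompt.
generator = pipeline("text-generation", model="KrishParikh/gpt2_imdb_movie_plots")
print(generator("A retired detective returns to the city when", max_length=60, num_return_sequences=1))
```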
|
rafanegrette/t5_spa_gua | rafanegrette | 2021-11-21T17:53:33Z | 5 | 1 | transformers | ["transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-03-02T23:29:05Z |
## Translator of Spanish/Wayuunaiki with a T5 model
This is a fine-tuned model based on T5, trained on a Spanish-Wayuunaiki corpus.
Wayuunaiki is the native language of the Wayuu, the major indigenous people in the north of Colombia.
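A hedged usage sketch (not part of the original card; the expected input format, including any task prefix, is not documented here):
```python
from transformers import pipeline

# Sketch: Spanish-to-Wayuunaiki translation via the text2text-generation pipeline.
translator = pipeline("text2text-generation", model="rafanegrette/t5_spa_gua")
print(translator("Buenos días"))
```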
|
Abirate/bert_fine_tuned_cola | Abirate | 2021-11-21T16:41:00Z | 10 | 1 | transformers | ["transformers", "tf", "bert", "text-classification", "arxiv:1810.04805", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
## Pretrained Model: BERT base model (cased)
BERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between english and English.
## Pretrained Model Description
BERT is an auto-encoder transformer model pretrained on a large corpus of English data (English Wikipedia + Books Corpus) in a self-supervised fashion. This means the targets are computed from the inputs themselves, and humans are not needed to label the data. It was pretrained with two objectives:
- Masked language modeling (MLM)
- Next sentence prediction (NSP)
## Fine-tuned Model Description: BERT fine-tuned Cola
The pretrained model can be fine-tuned on other NLP tasks. Here, BERT has been fine-tuned on the CoLA dataset from the GLUE benchmark, an academic benchmark that aims to measure the performance of ML models. CoLA is one of the 11 datasets in GLUE.
By fine-tuning BERT on the CoLA dataset, the model is now able to classify a given sentence as grammatically and semantically acceptable or not acceptable.
## How to use?
###### Directly with a pipeline for a text-classification NLP task
```python
from transformers import pipeline
cola = pipeline('text-classification', model='Abirate/bert_fine_tuned_cola')
cola("Tunisia is a beautiful country")
[{'label': 'acceptable', 'score': 0.989352285861969}]
```
###### Breaking down all the steps (Tokenization, Modeling, Postprocessing)
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
import tensorflow as tf
import numpy as np
tokenizer = AutoTokenizer.from_pretrained('Abirate/bert_fine_tuned_cola')
model = TFAutoModelForSequenceClassification.from_pretrained("Abirate/bert_fine_tuned_cola")
text = "Tunisia is a beautiful country."
encoded_input = tokenizer(text, return_tensors='tf')
#The logits
output = model(encoded_input)
#Postprocessing
probas_output = tf.math.softmax(tf.squeeze(output['logits']), axis = -1)
class_preds = np.argmax(probas_output, axis = -1)
#Predicting the class acceptable or not acceptable
model.config.id2label[class_preds]
#Result
'acceptable'
```
|
huggingtweets/mo_turse | huggingtweets | 2021-11-21T11:39:55Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/mo_turse/1637494790715/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1458151390505734144/QnD5NomB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">⬅️To_Murse💉</div>
<div style="text-align: center; font-size: 14px;">@mo_turse</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ⬅️To_Murse💉.
| Data | ⬅️To_Murse💉 |
| --- | --- |
| Tweets downloaded | 3199 |
| Retweets | 1128 |
| Short tweets | 198 |
| Tweets kept | 1873 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18gmbfdi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mo_turse's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/72halqv5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/72halqv5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mo_turse')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/prathkum | huggingtweets | 2021-11-21T09:58:13Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/prathkum/1637488688526/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1418652395119153153/dvMUbHmM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratham</div>
<div style="text-align: center; font-size: 14px;">@prathkum</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratham.
| Data | Pratham |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 455 |
| Short tweets | 318 |
| Tweets kept | 2473 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lnm0sab/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @prathkum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2w7zt05t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2w7zt05t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/prathkum')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
emeraldgoose/bert-base-v1-sports | emeraldgoose | 2021-11-21T05:45:05Z | 13 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "ko", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language: ko
mask_token: "[MASK]"
widget:
- text: 산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다.
---
## Data-annotation-nlp-10 (BoostCamp AI)
BERT pretraining was performed on sentences obtained while constructing a Wikipedia (sports) dataset.
## How to use
```python
from transformers import AutoTokenizer, BertForMaskedLM
model = BertForMaskedLM.from_pretrained("emeraldgoose/bert-base-v1-sports")
tokenizer = AutoTokenizer.from_pretrained("emeraldgoose/bert-base-v1-sports")
text = "산악 자전거 경기는 상대적으로 새로운 [MASK] 1990년대에 활성화 되었다."
inputs = tokenizer.encode(text, return_tensors='pt')
model.eval()
outputs = model(inputs)['logits']
predict = outputs.argmax(-1)[0]
print(tokenizer.decode(predict))
```
|
Leisa/marian-finetuned-kde4-en-to-fr | Leisa | 2021-11-21T05:25:45Z | 4 | 0 | transformers | ["transformers", "pytorch", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | translation | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.94538305859332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
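A minimal usage sketch (not from the original card), assuming the standard `transformers` translation pipeline for Marian checkpoints:
```python
from transformers import pipeline

# Sketch: English-to-French translation with the fine-tuned Marian model.
translator = pipeline("translation", model="Leisa/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```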
|
xiongjie/lightweight-real-ESRGAN-anime | xiongjie | 2021-11-21T04:36:38Z | 0 | 1 | null | ["onnx", "region:us"] | null | 2022-03-02T23:29:05Z |
This is a super-resolution model for anime-like illustrations that can upscale an image 4x.
It can upscale a 256x256 image to 1024x1024 in around 30 ms on GPU and around 300 ms on CPU.
Example is [here](https://github.com/xiong-jie-y/ml-examples/tree/master/lightweight_real_esrgan_anime).
License: MIT License
|
mgreenbe/bertlet-base-uncased-for-sequence-classification | mgreenbe | 2021-11-20T17:23:02Z | 4 | 2 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bertlet-base-uncased-for-sequence-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertlet-base-uncased-for-sequence-classification
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Sindhu/muril-large-squad2 | Sindhu | 2021-11-20T09:43:56Z | 23 | 0 | transformers | ["transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
# Muril Large Squad2
This model is fine-tuned for the QA task on SQuAD 2.0, starting from the [MuRIL Large checkpoint](https://huggingface.co/google/muril-large-cased).
## Hyperparameters
```
Batch Size: 4
Grad Accumulation Steps = 8
Total epochs = 3
MLM Checkpoint = google/muril-large-cased
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_ratio = 0.1
doc_stride = 128
```
## Squad 2 Evaluation stats:
Generated from [the official Squad2 evaluation script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
```json
{
"exact": 82.0180240882675,
"f1": 85.10110304685352,
"total": 11873,
"HasAns_exact": 81.6970310391363,
"HasAns_f1": 87.87203044454981,
"HasAns_total": 5928,
"NoAns_exact": 82.3380992430614,
"NoAns_f1": 82.3380992430614,
"NoAns_total": 5945
}
```
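For inference, a minimal question-answering pipeline sketch (standard Transformers usage, with a made-up question/context pair) would be:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Sindhu/muril-large-squad2")
result = qa(
    question="Where is the Taj Mahal located?",
    context="The Taj Mahal is an ivory-white marble mausoleum on the bank of the Yamuna river in Agra.",
)
print(result["answer"], result["score"])
```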
## Limitations
MuRIL is specifically trained to work on 18 Indic languages and English. This model is not expected to perform well in other languages. See the MuRIL checkpoint for further details.
For any questions, you can reach out to me [on Twitter](https://twitter.com/batw0man)
|
overfit/twiner-bert-base-mtl
|
overfit
| 2021-11-20T06:52:55Z | 6 | 2 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
## ParsTwiNER: Transformer-based Model for Named Entity Recognition at Informal Persian
An open, broad-coverage corpus and model for informal Persian named entity recognition collected from Twitter.
Paper presenting ParsTwiNER: [2021.wnut-1.16](https://aclanthology.org/2021.wnut-1.16/)
---
## Results
The following table summarizes the F1 score on our corpus obtained by ParsTwiNER as compared to ParsBERT as a SoTA for Persian NER.
### Named Entity Recognition on Our Corpus
| Entity Type | ParsTwiNER F1 | ParsBert F1 |
|:-----------:|:-------------:|:--------------:|
| PER | 91 | 80 |
| LOC | 82 | 68 |
| ORG | 69 | 55 |
| EVE | 41 | 12 |
| POG | 85 | - |
| NAT | 82.3 | - |
| Total | 81.5 | 69.5 |
## How to use
### TensorFlow 2.0
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl")
model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl")
twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[])
```
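Continuing from the snippet above, the pipeline can then be called directly on raw (informal) Persian text; the sentence below is only an illustrative input:
```python
# Each returned item carries the token, its predicted entity tag and a confidence score.
entities = twiner_mtl("تهران پایتخت ایران است")
for ent in entities:
    print(ent["word"], ent["entity"], round(float(ent["score"]), 3))
```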
## Cite
Please cite the following paper in your publication if you are using [ParsTwiNER](https://aclanthology.org/2021.wnut-1.16/) in your research:
```markdown
@inproceedings{aghajani-etal-2021-parstwiner,
title = "{P}ars{T}wi{NER}: A Corpus for Named Entity Recognition at Informal {P}ersian",
author = "Aghajani, MohammadMahdi and
Badri, AliAkbar and
Beigy, Hamid",
booktitle = "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.wnut-1.16",
pages = "131--136",
abstract = "As a result of unstructured sentences and some misspellings and errors, finding named entities in a noisy environment such as social media takes much more effort. ParsTwiNER contains about 250k tokens, based on standard instructions like MUC-6 or CoNLL 2003, gathered from Persian Twitter. Using Cohen{'}s Kappa coefficient, the consistency of annotators is 0.95, a high score. In this study, we demonstrate that some state-of-the-art models degrade on these corpora, and trained a new model using parallel transfer learning based on the BERT architecture. Experimental results show that the model works well in informal Persian as well as in formal Persian.",
}
```
## Acknowledgments
The authors would like to thank Dr. Momtazi for her support. Furthermore, we would like to acknowledge the accompaniment provided by Mohammad Mahdi Samiei and Abbas Maazallahi.
## Contributors
- Mohammad Mahdi Aghajani: [Linkedin](https://www.linkedin.com/in/mohammadmahdi-aghajani-821843147/), [Github](https://github.com/mmaghajani)
- Ali Akbar Badri: [Linkedin](https://www.linkedin.com/in/aliakbarbadri/), [Github](https://github.com/AliAkbarBadri)
- Dr. Hamid Beigy: [Linkedin](https://www.linkedin.com/in/hamid-beigy-8982604b/)
- Overfit Team: [Github](https://github.com/overfit-ir), [Telegram](https://t.me/nlp_stuff)
## Releases
### Release v1.0.0 (Aug 01, 2021)
This is the first version of our ParsTwiNER.
|
huggingtweets/temeton_blue-temeton_pink
|
huggingtweets
| 2021-11-19T22:17:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1461728895623995394/17gDcblW_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421180251812638720/erd-JZoZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌜 Normiemon (Sonic's Creed) 🌛 & 🌛 ℕormiemon's 𝔼xtra 𝕍iolent 𝔸lt 🌜</div>
<div style="text-align: center; font-size: 14px;">@temeton_blue-temeton_pink</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🌜 Normiemon (Sonic's Creed) 🌛 & 🌛 ℕormiemon's 𝔼xtra 𝕍iolent 𝔸lt 🌜.
| Data | 🌜 Normiemon (Sonic's Creed) 🌛 | 🌛 ℕormiemon's 𝔼xtra 𝕍iolent 𝔸lt 🌜 |
| --- | --- | --- |
| Tweets downloaded | 3241 | 685 |
| Retweets | 827 | 65 |
| Short tweets | 385 | 78 |
| Tweets kept | 2029 | 542 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2rvfxw6c/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @temeton_blue-temeton_pink's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19opzvs5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19opzvs5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/temeton_blue-temeton_pink')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a
|
JazibEijaz
| 2021-11-19T20:43:53Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-semeval2020-task4a-e2-b32-l5e5
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-semeval2020-task4a
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ComVE dataset which was part of SemEval 2020 Task 4.
It achieves the following results on the test set:
- Loss: 0.2782
- Accuracy: 0.9040
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 344 | 0.2700 | 0.8940 |
| 0.349 | 2.0 | 688 | 0.2782 | 0.9040 |
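For reference, a minimal inference sketch with the multiple-choice head is given below; pairing the two raw candidate statements as choices mirrors the usual ComVE subtask A setup, but the exact preprocessing and label convention used during fine-tuning are assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "JazibEijaz/bert-base-uncased-finetuned-semeval2020-task4a"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

choices = ["He put a turkey into the fridge.", "He put an elephant into the fridge."]
enc = tokenizer(choices, return_tensors="pt", padding=True)
# The multiple-choice head expects tensors of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits
# Whether the argmax index points to the sensible or the nonsensical statement
# depends on the label encoding used during fine-tuning, which the card does not state.
print("Predicted index:", logits.argmax(-1).item())
```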
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
kensho/dummy_full_language_model
|
kensho
| 2021-11-19T16:06:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
This is an example of how a KenLM model can be downloaded with [PyCTCDecode](https://github.com/kensho-technologies/pyctcdecode).
Simply run the following code:
```python
from pyctcdecode import LanguageModel
language_model = LanguageModel.load_from_hf_hub("kensho/dummy_full_language_model")
```
The model was created by [Patrick von Platen](https://huggingface.co/patrickvonplaten) for demonstration purposes.
|
alvp/alberti-stanzas
|
alvp
| 2021-11-19T13:41:53Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:alvp/autonlp-data-alberti-stanza-names",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- alvp/autonlp-data-alberti-stanza-names
co2_eq_emissions: 8.612473981829835
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 34318169
- CO2 Emissions (in grams): 8.612473981829835
## Validation Metrics
- Loss: 1.3520570993423462
- Accuracy: 0.6083916083916084
- Macro F1: 0.5420169617715481
- Micro F1: 0.6083916083916084
- Weighted F1: 0.5963328136975058
- Macro Precision: 0.5864033493660455
- Micro Precision: 0.6083916083916084
- Weighted Precision: 0.6364793882921277
- Macro Recall: 0.5545405576555766
- Micro Recall: 0.6083916083916084
- Weighted Recall: 0.6083916083916084
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alvp/autonlp-alberti-stanza-names-34318169
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
shivkumarganesh/vision-transformer-fmri-classification-ft
|
shivkumarganesh
| 2021-11-19T13:21:37Z | 69 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vision-transformer-fmri-classification-ft
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7955589294433594
---
# vision-transformer-fmri-classification-ft
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
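For inference, the standard image-classification pipeline should work out of the box (the image path below is just a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="shivkumarganesh/vision-transformer-fmri-classification-ft",
)
# "scan.png" is a placeholder; a local path, URL or PIL image all work.
print(classifier("scan.png")[:3])
```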
## Example Images
|
readerbench/jurBERT-large
|
readerbench
| 2021-11-19T11:55:47Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
Model card for jurBERT-large
---
language:
- ro
---
# jurBERT-large
## Pretrained juridical BERT model for Romanian
Romanian juridical BERT model trained using masked language modeling (MLM) and next sentence prediction (NSP) objectives.
It was introduced in this [paper](https://aclanthology.org/2021.nllp-1.8/). Two BERT models were released: **jurBERT-base** and **jurBERT-large**, both uncased.
| Model | Weights | L | H | A | MLM accuracy | NSP accuracy |
|----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:|
| jurBERT-base | 111M | 12 | 768 | 12 | 0.8936 | 0.9923 |
| *jurBERT-large* | *337M* | *24* | *1024* | *24* | *0.9005* | *0.9929* |
All models are available:
* [jurBERT-base](https://huggingface.co/readerbench/jurBERT-base)
* [jurBERT-large](https://huggingface.co/readerbench/jurBERT-large)
#### How to use
```python
# tensorflow
from transformers import AutoModel, AutoTokenizer, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-large")
model = TFAutoModel.from_pretrained("readerbench/jurBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="tf")
outputs = model(inputs)
# pytorch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-large")
model = AutoModel.from_pretrained("readerbench/jurBERT-large")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
```
## Datasets
The model is trained on a private corpus (which can nevertheless be rented for a fee) comprising all final rulings, covering both civil and criminal cases, published by any Romanian civil court between 2010 and 2018. Validation is performed on the RoBanking dataset. To build RoBanking, we extracted from RoJur common types of cases pertinent to the banking domain (e.g. administration fee litigations, enforcement appeals) and kept only the summary of the arguments provided by both the plaintiffs and the defendants, together with the final verdict (in the form of a boolean value).
| Corpus | Scope |Entries | Size (GB)|
|-----------|:------------:|:---------:|:---------:|
| RoJur | pre-training | 11M | 160 |
| RoBanking | downstream | 108k | - |
## Downstream performance
We report Mean AUC and Std AUC on the task of predicting the outcome of a case.
### Results on RoBanking using only the plea of the plaintiff.
| Model | Mean AUC | Std AUC |
|--------------------|:--------:|:--------:|
| CNN | 79.60 | - |
| BI-LSTM | 80.99 | 0.26 |
| RoBERT-small | 70.54 | 0.28 |
| RoBERT-base | 79.74 | 0.21 |
| RoBERT-base + hf | 79.82 | 0.11 |
| RoBERT-large | 76.53 | 5.43 |
| jurBERT-base | **81.47**| **0.18** |
| jurBERT-base + hf | 81.40 | 0.18 |
| *jurBERT-large* | *78.38* | *1.77* |
### Results on RoBanking using pleas from both the plaintiff and defendant.
| Model | Mean AUC | Std AUC |
|---------------------|:--------:|:--------:|
| BI-LSTM | 84.60 | 0.59 |
| RoBERT-base | 84.40 | 0.26 |
| RoBERT-base + hf | 84.43 | 0.15 |
| jurBERT-base | 86.63 | 0.18 |
| jurBERT-base + hf | **86.73**| **0.22** |
| *jurBERT-large* | *82.04* | *0.64* |
For complete results and discussion please refer to the [paper](https://aclanthology.org/2021.nllp-1.8/).
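As a generic illustration of that downstream setup (not the exact training code from the paper), a binary verdict-prediction head can be attached to the released checkpoint and then fine-tuned on RoBanking-style data; the plea text below is a made-up placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("readerbench/jurBERT-large")
# num_labels=2 adds a freshly initialised verdict head that still requires fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained("readerbench/jurBERT-large", num_labels=2)

inputs = tokenizer("rezumatul argumentelor reclamantului", return_tensors="pt", truncation=True)
print(model(**inputs).logits.softmax(-1))
```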
### BibTeX entry and citation info
```bibtex
@inproceedings{masala2021jurbert,
title={jurBERT: A Romanian BERT Model for Legal Judgement Prediction},
author={Masala, Mihai and Iacob, Radu Cristian Alexandru and Uban, Ana Sabina and Cidota, Marina and Velicu, Horia and Rebedea, Traian and Popescu, Marius},
booktitle={Proceedings of the Natural Legal Language Processing Workshop 2021},
pages={86--94},
year={2021}
}
```
|
momo/gpt2-kiosk
|
momo
| 2021-11-19T07:42:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
# kiosk_bot
A simple kiosk chatbot built with KoGPT2.
The data comes from AiHub's [Korean dialogue dataset](https://aihub.or.kr/aidata/85).
The data was used only for training and is not released.
## Architecture
Implemented by following the Hugging Face examples.
<img width="549" alt="gpt" src="https://user-images.githubusercontent.com/60643542/142431681-85db3d74-172d-45f0-9433-de43a8aeae17.png">
The ```input``` is formatted as ```User + <BOS> + System + <EOS>```.
The model is trained so that the ```output``` maximizes ```P(system|user)```.
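A rough sketch of that formatting with the base KoGPT2 checkpoint is shown below; the special-token choices mirror the public skt/kogpt2-base-v2 conventions and are assumptions here, so see GPT-2_fine_tune.py for the actual preprocessing:
```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

# Base KoGPT2; swap in the fine-tuned weights produced by GPT-2_fine_tune.py if available.
tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "skt/kogpt2-base-v2", bos_token="</s>", eos_token="</s>", pad_token="<pad>"
)
model = GPT2LMHeadModel.from_pretrained("skt/kogpt2-base-v2")

# "User + <BOS> + System + <EOS>": prompt with the user turn plus <BOS>,
# then generate the system turn until <EOS>.
user_utterance = "아메리카노 주세요"
input_ids = tokenizer(user_utterance + tokenizer.bos_token, return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=64, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```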
## Install
```
git clone https://github.com/momozzing/kiosk_bot.git
pip install requirements.txt
```
## How to train?
If you just want to chat with the model already trained on the existing data, you can skip ahead to ```How to chat``` after ```Install```.
```
cd kiosk_bot/data ## put the data you want to train on in this directory
kiosk_bot/GPT-2_fine_tune.py ## change the file names used by the data loaders in this script
deepspeed --num_gpus=1 GPT-2_fine_tune.py ## after the changes, run training with this command
Then use the generated skt-kogpt2-base-v2.pt.
```
## How to chat?
```
python interactive.py ## chat with the model trained on the original data
python new_data_interactive.py ## load the .pt file fine-tuned on new data and chat with it
-- chatting with the original data --
User: 아메리카노 주세요
System: 따뜻한 거 드릴까요?
User: 아이스로 주세요
System: 사이즈는 톨 괜찮으세요?
User: 그란데 사이즈로 주세요
System: 죄송한데 저희 가게에는 레귤러랑 라지 사이즈 밖에 없어요
User: 그럼 라지사이즈로 주세요
System: 네 고객님 테이크아웃 하시나요?
User: 먹고갈게요
System: 네 그럼 머그잔으로 준비해드리겠습니다
User: 감사합니다
System: 좋은 하루 보내세요
```
## Reference
[HuggingFace](https://huggingface.co/transformers/index.html)
[KoGPT2](https://github.com/SKT-AI/KoGPT2)
[AIHUB](https://aihub.or.kr/)
|
MrBananaHuman/kogpt_6b_fp16
|
MrBananaHuman
| 2021-11-19T06:23:58Z | 55 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
This is the KoGPT 6B model released by Kakao Brain ('kakaobrain/kogpt'), saved in fp16.
### How to load the Kakao Brain model in fp16
```python
import torch
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained('kakaobrain/kogpt', cache_dir='./my_dir', revision='KoGPT6B-ryan1.5b', torch_dtype=torch.float16)
```
### Generating sentences after loading the fp16 model
[](https://colab.research.google.com/drive/1_rLDzhGohJPbOD5I_eTIOdx4aOTp43uK?usp=sharing)
```python
import torch
from transformers import GPTJForCausalLM, AutoTokenizer
model = GPTJForCausalLM.from_pretrained('MrBananaHuman/kogpt_6b_fp16', low_cpu_mem_usage=True)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained('MrBananaHuman/kogpt_6b_fp16')
input_text = '이순신은'
input_ids = tokenizer(input_text, return_tensors='pt').input_ids.to('cuda')
output = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output[0]))
>>> 이순신은 우리에게 무엇인가? 1. 머리말 이글은 임진왜란 당시 이순인이 보여준
```
### Reference link
https://github.com/kakaobrain/kogpt/issues/6?fbclid=IwAR1KpWhuHnevQvEWV18o16k2z9TLgrXkbWTkKqzL-NDXHfDnWcIq7I4SJXM
|
huyue012/wav2vec2-base-cynthia-tedlium-2500-v2
|
huyue012
| 2021-11-19T04:09:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-tedlium-2500-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-tedlium-2500-v2
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6425
- Wer: 0.2033
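A minimal inference sketch with the speech-recognition pipeline (audio decoding relies on ffmpeg being installed; the file name is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="huyue012/wav2vec2-base-cynthia-tedlium-2500-v2",
)
print(asr("sample.wav")["text"])
```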
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1196 | 6.58 | 500 | 0.6498 | 0.2103 |
| 0.1176 | 13.16 | 1000 | 0.6490 | 0.2169 |
| 0.1227 | 19.73 | 1500 | 0.6241 | 0.2127 |
| 0.1078 | 26.31 | 2000 | 0.6359 | 0.2118 |
| 0.0956 | 32.89 | 2500 | 0.6330 | 0.2073 |
| 0.1008 | 39.47 | 3000 | 0.6816 | 0.2036 |
| 0.09 | 46.05 | 3500 | 0.6425 | 0.2033 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
huggingtweets/stockstotrade
|
huggingtweets
| 2021-11-19T03:41:39Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/stockstotrade/1637293295111/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/469936583416610816/EZt8Vl04_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">StocksToTrade</div>
<div style="text-align: center; font-size: 14px;">@stockstotrade</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from StocksToTrade.
| Data | StocksToTrade |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 663 |
| Short tweets | 360 |
| Tweets kept | 2215 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c33zwruj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stockstotrade's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1upgfq9z) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1upgfq9z/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stockstotrade')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dhairya2303/bert-base-uncased-emotion-AD
|
dhairya2303
| 2021-11-19T03:31:29Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
This is the repo for the final project.
|
aozorahime/my-new-model
|
aozorahime
| 2021-11-19T03:15:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: my-new-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-new-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
kevinzyz/chinese-bert-wwm-ext-finetuned-cola
|
kevinzyz
| 2021-11-19T03:13:39Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: chinese-bert-wwm-ext-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-ext-finetuned-cola
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5747
- Matthews Correlation: 0.4085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.5824 | 1.0 | 66375 | 0.5746 | 0.4083 |
| 0.5824 | 2.0 | 66376 | 0.5747 | 0.4085 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.7.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xlsr-53-300m-mls-german-ft
|
patrickvonplaten
| 2021-11-18T22:30:46Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"multilingual_librispeech",
"generated_from_trainer",
"dataset:multilingual_librispeech",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- multilingual_librispeech
- generated_from_trainer
datasets:
- multilingual_librispeech
model-index:
- name: wav2vec2-xlsr-53-300m-mls-german-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-300m-mls-german-ft
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MULTILINGUAL_LIBRISPEECH - GERMAN 10h dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Wer: 0.1288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 2.9888 | 7.25 | 500 | 2.9192 | 1.0 |
| 2.9313 | 14.49 | 1000 | 2.8698 | 1.0 |
| 1.068 | 21.74 | 1500 | 0.2647 | 0.2565 |
| 0.8151 | 28.99 | 2000 | 0.2067 | 0.1719 |
| 0.764 | 36.23 | 2500 | 0.1975 | 0.1568 |
| 0.7332 | 43.48 | 3000 | 0.1812 | 0.1463 |
| 0.5952 | 50.72 | 3500 | 0.1923 | 0.1428 |
| 0.6655 | 57.97 | 4000 | 0.1900 | 0.1404 |
| 0.574 | 65.22 | 4500 | 0.1822 | 0.1370 |
| 0.6211 | 72.46 | 5000 | 0.1937 | 0.1355 |
| 0.5883 | 79.71 | 5500 | 0.1872 | 0.1335 |
| 0.5666 | 86.96 | 6000 | 0.1874 | 0.1324 |
| 0.5526 | 94.2 | 6500 | 0.1998 | 0.1368 |
| 0.5671 | 101.45 | 7000 | 0.2054 | 0.1365 |
| 0.5514 | 108.7 | 7500 | 0.1987 | 0.1340 |
| 0.5382 | 115.94 | 8000 | 0.2104 | 0.1344 |
| 0.5819 | 123.19 | 8500 | 0.2125 | 0.1334 |
| 0.5277 | 130.43 | 9000 | 0.2063 | 0.1330 |
| 0.4626 | 137.68 | 9500 | 0.2105 | 0.1310 |
| 0.5842 | 144.93 | 10000 | 0.2087 | 0.1307 |
| 0.535 | 152.17 | 10500 | 0.2137 | 0.1309 |
| 0.5081 | 159.42 | 11000 | 0.2215 | 0.1302 |
| 0.6033 | 166.67 | 11500 | 0.2162 | 0.1302 |
| 0.5549 | 173.91 | 12000 | 0.2198 | 0.1286 |
| 0.5389 | 181.16 | 12500 | 0.2241 | 0.1293 |
| 0.4912 | 188.41 | 13000 | 0.2190 | 0.1290 |
| 0.4671 | 195.65 | 13500 | 0.2218 | 0.1290 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
anindabitm/sagemaker-BioclinicalBERT-ADR
|
anindabitm
| 2021-11-18T19:24:42Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:ade_corpus_v2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- ade_corpus_v2
model-index:
- name: sagemaker-BioclinicalBERT-ADR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-BioclinicalBERT-ADR
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the ade_corpus_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 171 | 0.9441 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
sszyr/finetuned-bert-bounti
|
sszyr
| 2021-11-18T18:44:50Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned-bert-bounti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-bounti
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-uncased](https://huggingface.co/dbmdz/bert-base-turkish-128k-uncased) on the BounTi Turkish Twitter sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1188
- Accuracy: 0.7246
- F1: 0.6845
- Precision: 0.6892
- Recall: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0974 | 0.02 | 5 | 1.0790 | 0.3756 | 0.3064 | 0.3255 | 0.3232 |
| 1.1345 | 0.04 | 10 | 1.0784 | 0.3725 | 0.3037 | 0.3219 | 0.3197 |
| 1.1441 | 0.06 | 15 | 1.0776 | 0.3772 | 0.3072 | 0.3250 | 0.3234 |
| 1.122 | 0.08 | 20 | 1.0774 | 0.3787 | 0.3077 | 0.3244 | 0.3228 |
| 1.1201 | 0.1 | 25 | 1.0776 | 0.3787 | 0.3047 | 0.3193 | 0.3216 |
| 1.1489 | 0.13 | 30 | 1.0783 | 0.3787 | 0.3012 | 0.3120 | 0.3189 |
| 1.0716 | 0.15 | 35 | 1.0783 | 0.3897 | 0.3093 | 0.3212 | 0.3282 |
| 1.082 | 0.17 | 40 | 1.0767 | 0.3865 | 0.3060 | 0.3203 | 0.3238 |
| 1.1113 | 0.19 | 45 | 1.0738 | 0.3897 | 0.3058 | 0.3219 | 0.3211 |
| 1.0892 | 0.21 | 50 | 1.0715 | 0.4069 | 0.3290 | 0.3475 | 0.3374 |
| 1.0913 | 0.23 | 55 | 1.0719 | 0.4178 | 0.3283 | 0.3398 | 0.3361 |
| 1.1114 | 0.25 | 60 | 1.0694 | 0.4397 | 0.3479 | 0.3605 | 0.3538 |
| 1.1129 | 0.27 | 65 | 1.0682 | 0.4491 | 0.3593 | 0.3731 | 0.3648 |
| 1.1283 | 0.29 | 70 | 1.0671 | 0.4664 | 0.3719 | 0.3775 | 0.3780 |
| 1.1267 | 0.31 | 75 | 1.0714 | 0.4507 | 0.3826 | 0.3834 | 0.3835 |
| 1.1325 | 0.33 | 80 | 1.0762 | 0.4335 | 0.3909 | 0.3918 | 0.3954 |
| 1.0919 | 0.36 | 85 | 1.0723 | 0.4335 | 0.3930 | 0.3937 | 0.3982 |
| 1.0545 | 0.38 | 90 | 1.0694 | 0.4507 | 0.4161 | 0.4180 | 0.4279 |
| 1.1121 | 0.4 | 95 | 1.0698 | 0.4491 | 0.4151 | 0.4280 | 0.4324 |
| 1.0675 | 0.42 | 100 | 1.0711 | 0.4382 | 0.4005 | 0.4349 | 0.4494 |
| 1.0954 | 0.44 | 105 | 1.0720 | 0.4085 | 0.3690 | 0.4233 | 0.4326 |
| 1.1087 | 0.46 | 110 | 1.0562 | 0.4820 | 0.4463 | 0.4762 | 0.4841 |
| 1.0669 | 0.48 | 115 | 1.0459 | 0.5086 | 0.4746 | 0.4844 | 0.4997 |
| 1.0529 | 0.5 | 120 | 1.0364 | 0.5243 | 0.4935 | 0.4946 | 0.5119 |
| 1.0348 | 0.52 | 125 | 1.0248 | 0.5321 | 0.4953 | 0.4977 | 0.5067 |
| 1.0454 | 0.54 | 130 | 1.0169 | 0.5415 | 0.5089 | 0.5084 | 0.5232 |
| 1.0366 | 0.56 | 135 | 1.0071 | 0.5493 | 0.5176 | 0.5156 | 0.5344 |
| 1.0197 | 0.59 | 140 | 1.0010 | 0.5446 | 0.5132 | 0.5150 | 0.5350 |
| 1.0459 | 0.61 | 145 | 0.9966 | 0.5399 | 0.5094 | 0.5184 | 0.5383 |
| 1.0059 | 0.63 | 150 | 1.0011 | 0.5477 | 0.5222 | 0.5394 | 0.5617 |
| 0.9455 | 0.65 | 155 | 0.9898 | 0.5399 | 0.5173 | 0.5390 | 0.5583 |
| 0.9732 | 0.67 | 160 | 0.9750 | 0.5477 | 0.5207 | 0.5406 | 0.5601 |
| 1.0215 | 0.69 | 165 | 0.9494 | 0.5790 | 0.5495 | 0.5511 | 0.5759 |
| 0.99 | 0.71 | 170 | 0.9331 | 0.5696 | 0.5355 | 0.5372 | 0.5500 |
| 1.0102 | 0.73 | 175 | 0.9284 | 0.5759 | 0.5425 | 0.5488 | 0.5567 |
| 0.9633 | 0.75 | 180 | 0.9313 | 0.5837 | 0.5571 | 0.5726 | 0.5758 |
| 0.9388 | 0.77 | 185 | 0.9262 | 0.5869 | 0.5625 | 0.5830 | 0.5817 |
| 0.9606 | 0.79 | 190 | 0.9140 | 0.5915 | 0.5638 | 0.5728 | 0.5835 |
| 0.969 | 0.82 | 195 | 0.9170 | 0.5978 | 0.5712 | 0.5769 | 0.5964 |
| 0.8779 | 0.84 | 200 | 0.9089 | 0.5947 | 0.5696 | 0.5790 | 0.5925 |
| 0.9041 | 0.86 | 205 | 0.9013 | 0.6166 | 0.5874 | 0.5894 | 0.6083 |
| 0.8643 | 0.88 | 210 | 0.8783 | 0.6275 | 0.5961 | 0.5972 | 0.6140 |
| 0.8864 | 0.9 | 215 | 0.8651 | 0.6307 | 0.5984 | 0.6060 | 0.6152 |
| 0.9075 | 0.92 | 220 | 0.8562 | 0.6401 | 0.6107 | 0.6096 | 0.6313 |
| 0.8659 | 0.94 | 225 | 0.8407 | 0.6244 | 0.5896 | 0.5864 | 0.6085 |
| 0.8921 | 0.96 | 230 | 0.8171 | 0.6385 | 0.6014 | 0.5955 | 0.6138 |
| 0.9176 | 0.98 | 235 | 0.8120 | 0.6432 | 0.6052 | 0.6001 | 0.6183 |
| 0.8124 | 1.0 | 240 | 0.8084 | 0.6479 | 0.6087 | 0.6058 | 0.6229 |
| 0.7606 | 1.03 | 245 | 0.7978 | 0.6588 | 0.6198 | 0.6166 | 0.6258 |
| 0.7879 | 1.05 | 250 | 0.8361 | 0.6322 | 0.6002 | 0.6090 | 0.6310 |
| 0.8515 | 1.07 | 255 | 0.8527 | 0.6307 | 0.6063 | 0.6070 | 0.6368 |
| 0.7861 | 1.09 | 260 | 0.8300 | 0.6510 | 0.6229 | 0.6172 | 0.6449 |
| 0.8782 | 1.11 | 265 | 0.8068 | 0.6588 | 0.6262 | 0.6195 | 0.6412 |
| 0.6993 | 1.13 | 270 | 0.8127 | 0.6573 | 0.6245 | 0.6186 | 0.6414 |
| 0.7961 | 1.15 | 275 | 0.8302 | 0.6448 | 0.6129 | 0.6142 | 0.6382 |
| 0.829 | 1.17 | 280 | 0.8130 | 0.6416 | 0.6068 | 0.6047 | 0.6264 |
| 0.7315 | 1.19 | 285 | 0.8127 | 0.6714 | 0.6414 | 0.6348 | 0.6609 |
| 0.7115 | 1.21 | 290 | 0.8074 | 0.6651 | 0.6367 | 0.6297 | 0.6577 |
| 0.7937 | 1.23 | 295 | 0.8018 | 0.6667 | 0.6405 | 0.6338 | 0.6595 |
| 0.8213 | 1.26 | 300 | 0.7846 | 0.6651 | 0.6317 | 0.6313 | 0.6424 |
| 0.9309 | 1.28 | 305 | 0.7801 | 0.6651 | 0.6267 | 0.6314 | 0.6357 |
| 0.7616 | 1.3 | 310 | 0.8000 | 0.6635 | 0.6403 | 0.6352 | 0.6657 |
| 0.7075 | 1.32 | 315 | 0.8006 | 0.6635 | 0.6395 | 0.6354 | 0.6642 |
| 0.8925 | 1.34 | 320 | 0.8418 | 0.6385 | 0.6185 | 0.6205 | 0.6531 |
| 0.7579 | 1.36 | 325 | 0.8114 | 0.6541 | 0.6308 | 0.6281 | 0.6602 |
| 0.6983 | 1.38 | 330 | 0.7589 | 0.6745 | 0.6424 | 0.6356 | 0.6538 |
| 0.756 | 1.4 | 335 | 0.7540 | 0.6870 | 0.6423 | 0.6454 | 0.6436 |
| 0.8183 | 1.42 | 340 | 0.7762 | 0.6651 | 0.6304 | 0.6248 | 0.6486 |
| 0.7386 | 1.44 | 345 | 0.8212 | 0.6510 | 0.6244 | 0.6229 | 0.6535 |
| 0.7175 | 1.46 | 350 | 0.8002 | 0.6573 | 0.6269 | 0.6229 | 0.6512 |
| 0.7076 | 1.49 | 355 | 0.7799 | 0.6682 | 0.6310 | 0.6281 | 0.6506 |
| 0.7115 | 1.51 | 360 | 0.7525 | 0.6886 | 0.6576 | 0.6510 | 0.6697 |
| 0.7092 | 1.53 | 365 | 0.7882 | 0.6714 | 0.6272 | 0.6513 | 0.6330 |
| 0.6852 | 1.55 | 370 | 0.7909 | 0.6698 | 0.6287 | 0.6548 | 0.6363 |
| 0.673 | 1.57 | 375 | 0.7396 | 0.6901 | 0.6523 | 0.6536 | 0.6542 |
| 0.7115 | 1.59 | 380 | 0.7270 | 0.6933 | 0.6539 | 0.6532 | 0.6546 |
| 0.6391 | 1.61 | 385 | 0.7389 | 0.6964 | 0.6654 | 0.6576 | 0.6790 |
| 0.6018 | 1.63 | 390 | 0.7619 | 0.6886 | 0.6628 | 0.6571 | 0.6835 |
| 0.743 | 1.65 | 395 | 0.7635 | 0.6854 | 0.6579 | 0.6546 | 0.6780 |
| 0.6865 | 1.67 | 400 | 0.7457 | 0.7011 | 0.6709 | 0.6681 | 0.6855 |
| 0.6629 | 1.69 | 405 | 0.7309 | 0.7058 | 0.6752 | 0.6717 | 0.6861 |
| 0.6887 | 1.72 | 410 | 0.7389 | 0.6933 | 0.6628 | 0.6555 | 0.6809 |
| 0.6494 | 1.74 | 415 | 0.7742 | 0.6823 | 0.6565 | 0.6519 | 0.6831 |
| 0.6798 | 1.76 | 420 | 0.7751 | 0.6667 | 0.6337 | 0.6345 | 0.6614 |
| 0.6825 | 1.78 | 425 | 0.7798 | 0.6604 | 0.6269 | 0.6375 | 0.6594 |
| 0.7926 | 1.8 | 430 | 0.7085 | 0.7105 | 0.6726 | 0.6670 | 0.6804 |
| 0.6508 | 1.82 | 435 | 0.7455 | 0.6964 | 0.6439 | 0.6653 | 0.6460 |
| 0.7772 | 1.84 | 440 | 0.7669 | 0.6964 | 0.6531 | 0.6780 | 0.6594 |
| 0.7265 | 1.86 | 445 | 0.7454 | 0.7089 | 0.6722 | 0.6800 | 0.6826 |
| 0.5965 | 1.88 | 450 | 0.7700 | 0.6933 | 0.6670 | 0.6623 | 0.6931 |
| 0.6436 | 1.9 | 455 | 0.7910 | 0.6901 | 0.6654 | 0.6620 | 0.6951 |
| 0.6887 | 1.92 | 460 | 0.7752 | 0.6870 | 0.6590 | 0.6552 | 0.6872 |
| 0.7574 | 1.95 | 465 | 0.7511 | 0.6980 | 0.6686 | 0.6621 | 0.6925 |
| 0.6853 | 1.97 | 470 | 0.7446 | 0.7074 | 0.6775 | 0.6711 | 0.6981 |
| 0.7416 | 1.99 | 475 | 0.7151 | 0.7105 | 0.6783 | 0.6703 | 0.6938 |
| 0.723 | 2.01 | 480 | 0.6886 | 0.7105 | 0.6727 | 0.6691 | 0.6776 |
| 0.5993 | 2.03 | 485 | 0.6947 | 0.7152 | 0.6767 | 0.6711 | 0.6865 |
| 0.549 | 2.05 | 490 | 0.7140 | 0.7167 | 0.6833 | 0.6764 | 0.6969 |
| 0.5739 | 2.07 | 495 | 0.7372 | 0.7136 | 0.6843 | 0.6828 | 0.6961 |
| 0.6444 | 2.09 | 500 | 0.7733 | 0.7089 | 0.6796 | 0.6943 | 0.6920 |
| 0.5526 | 2.11 | 505 | 0.7368 | 0.7277 | 0.6954 | 0.6927 | 0.7074 |
| 0.5429 | 2.13 | 510 | 0.7194 | 0.7246 | 0.6886 | 0.6879 | 0.6913 |
| 0.5838 | 2.15 | 515 | 0.7465 | 0.7214 | 0.6818 | 0.6933 | 0.6866 |
| 0.6746 | 2.18 | 520 | 0.7644 | 0.7152 | 0.6865 | 0.6819 | 0.7054 |
| 0.7252 | 2.2 | 525 | 0.7564 | 0.7042 | 0.6713 | 0.6645 | 0.6918 |
| 0.5443 | 2.22 | 530 | 0.7337 | 0.7027 | 0.6636 | 0.6598 | 0.6782 |
| 0.5526 | 2.24 | 535 | 0.7324 | 0.7183 | 0.6795 | 0.6831 | 0.6865 |
| 0.692 | 2.26 | 540 | 0.7622 | 0.7121 | 0.6826 | 0.6841 | 0.6971 |
| 0.5897 | 2.28 | 545 | 0.7525 | 0.7089 | 0.6771 | 0.6708 | 0.6951 |
| 0.708 | 2.3 | 550 | 0.7366 | 0.7105 | 0.6763 | 0.6690 | 0.6938 |
| 0.6009 | 2.32 | 555 | 0.7232 | 0.7136 | 0.6741 | 0.6690 | 0.6843 |
| 0.6622 | 2.34 | 560 | 0.7104 | 0.7136 | 0.6763 | 0.6727 | 0.6816 |
| 0.8816 | 2.36 | 565 | 0.7150 | 0.7183 | 0.6830 | 0.6775 | 0.6932 |
| 0.6642 | 2.38 | 570 | 0.7545 | 0.6980 | 0.6681 | 0.6652 | 0.6961 |
| 0.5929 | 2.41 | 575 | 0.7167 | 0.7136 | 0.6778 | 0.6704 | 0.6930 |
| 0.6612 | 2.43 | 580 | 0.7078 | 0.7277 | 0.6912 | 0.6858 | 0.7023 |
| 0.4924 | 2.45 | 585 | 0.7138 | 0.7167 | 0.6809 | 0.6753 | 0.6938 |
| 0.544 | 2.47 | 590 | 0.7088 | 0.7183 | 0.6807 | 0.6749 | 0.6901 |
| 0.4047 | 2.49 | 595 | 0.7210 | 0.7199 | 0.6843 | 0.6775 | 0.6965 |
| 0.5416 | 2.51 | 600 | 0.7199 | 0.7214 | 0.6845 | 0.6777 | 0.6952 |
| 0.5407 | 2.53 | 605 | 0.7159 | 0.7293 | 0.6934 | 0.6873 | 0.7017 |
| 0.5775 | 2.55 | 610 | 0.7354 | 0.7308 | 0.6975 | 0.6902 | 0.7133 |
| 0.6107 | 2.57 | 615 | 0.7402 | 0.7261 | 0.6932 | 0.6863 | 0.7103 |
| 0.5679 | 2.59 | 620 | 0.7266 | 0.7293 | 0.6946 | 0.6869 | 0.7091 |
| 0.5599 | 2.62 | 625 | 0.7049 | 0.7136 | 0.6736 | 0.6716 | 0.6757 |
| 0.6608 | 2.64 | 630 | 0.7150 | 0.7183 | 0.6834 | 0.6761 | 0.6952 |
| 0.6886 | 2.66 | 635 | 0.7334 | 0.7230 | 0.6925 | 0.6856 | 0.7107 |
| 0.6524 | 2.68 | 640 | 0.7106 | 0.7324 | 0.6955 | 0.6907 | 0.7060 |
| 0.5027 | 2.7 | 645 | 0.7031 | 0.7261 | 0.6871 | 0.6896 | 0.6883 |
| 0.5327 | 2.72 | 650 | 0.7033 | 0.7230 | 0.6824 | 0.6863 | 0.6812 |
| 0.6561 | 2.74 | 655 | 0.7188 | 0.7183 | 0.6846 | 0.6770 | 0.6979 |
| 0.591 | 2.76 | 660 | 0.7449 | 0.7136 | 0.6844 | 0.6793 | 0.7087 |
| 0.4584 | 2.78 | 665 | 0.7220 | 0.7074 | 0.6732 | 0.6661 | 0.6855 |
| 0.501 | 2.8 | 670 | 0.7212 | 0.7199 | 0.6829 | 0.6830 | 0.6879 |
| 0.7118 | 2.82 | 675 | 0.7327 | 0.7167 | 0.6827 | 0.6775 | 0.6962 |
| 0.5037 | 2.85 | 680 | 0.7544 | 0.7121 | 0.6818 | 0.6742 | 0.7042 |
| 0.4921 | 2.87 | 685 | 0.7265 | 0.7136 | 0.6791 | 0.6714 | 0.6926 |
| 0.5255 | 2.89 | 690 | 0.7278 | 0.7074 | 0.6706 | 0.6659 | 0.6855 |
| 0.509 | 2.91 | 695 | 0.7334 | 0.7027 | 0.6654 | 0.6599 | 0.6806 |
| 0.4321 | 2.93 | 700 | 0.7358 | 0.7152 | 0.6805 | 0.6728 | 0.6944 |
| 0.6196 | 2.95 | 705 | 0.7406 | 0.7293 | 0.6971 | 0.6895 | 0.7119 |
| 0.5289 | 2.97 | 710 | 0.7363 | 0.7324 | 0.7017 | 0.6944 | 0.7162 |
| 0.6204 | 2.99 | 715 | 0.7401 | 0.7324 | 0.7024 | 0.6949 | 0.7182 |
| 0.5459 | 3.01 | 720 | 0.7360 | 0.7308 | 0.7010 | 0.6937 | 0.7152 |
| 0.4793 | 3.03 | 725 | 0.7363 | 0.7324 | 0.7007 | 0.6966 | 0.7123 |
| 0.5157 | 3.05 | 730 | 0.7330 | 0.7355 | 0.7026 | 0.6999 | 0.7107 |
| 0.4863 | 3.08 | 735 | 0.7231 | 0.7199 | 0.6842 | 0.6803 | 0.6887 |
| 0.423 | 3.1 | 740 | 0.7313 | 0.7230 | 0.6873 | 0.6816 | 0.6950 |
| 0.4879 | 3.12 | 745 | 0.7546 | 0.7199 | 0.6895 | 0.6828 | 0.7064 |
| 0.2499 | 3.14 | 750 | 0.7727 | 0.7214 | 0.6934 | 0.6913 | 0.7093 |
| 0.487 | 3.16 | 755 | 0.7621 | 0.7230 | 0.6906 | 0.6832 | 0.7052 |
| 0.3501 | 3.18 | 760 | 0.7966 | 0.7027 | 0.6689 | 0.6664 | 0.6919 |
| 0.5762 | 3.2 | 765 | 0.7694 | 0.7121 | 0.6747 | 0.6708 | 0.6896 |
| 0.4491 | 3.22 | 770 | 0.7482 | 0.7230 | 0.6873 | 0.6860 | 0.6887 |
| 0.4803 | 3.24 | 775 | 0.7584 | 0.7261 | 0.6895 | 0.6910 | 0.6934 |
| 0.3349 | 3.26 | 780 | 0.7874 | 0.7183 | 0.6870 | 0.6929 | 0.6956 |
| 0.5481 | 3.28 | 785 | 0.8124 | 0.7105 | 0.6856 | 0.6831 | 0.7075 |
| 0.3695 | 3.31 | 790 | 0.7935 | 0.7089 | 0.6798 | 0.6714 | 0.6995 |
| 0.3998 | 3.33 | 795 | 0.7702 | 0.7152 | 0.6811 | 0.6748 | 0.6912 |
| 0.5214 | 3.35 | 800 | 0.7705 | 0.7152 | 0.6765 | 0.6772 | 0.6759 |
| 0.4914 | 3.37 | 805 | 0.7796 | 0.7293 | 0.6954 | 0.6887 | 0.7048 |
| 0.4096 | 3.39 | 810 | 0.7912 | 0.7121 | 0.6818 | 0.6732 | 0.6999 |
| 0.4346 | 3.41 | 815 | 0.7758 | 0.7293 | 0.6958 | 0.6887 | 0.7060 |
| 0.4933 | 3.43 | 820 | 0.7802 | 0.7136 | 0.6795 | 0.6719 | 0.6942 |
| 0.4561 | 3.45 | 825 | 0.7670 | 0.7261 | 0.6929 | 0.6863 | 0.7020 |
| 0.5619 | 3.47 | 830 | 0.7656 | 0.7293 | 0.6916 | 0.6950 | 0.6915 |
| 0.4934 | 3.49 | 835 | 0.7875 | 0.7277 | 0.6872 | 0.7002 | 0.6866 |
| 0.545 | 3.51 | 840 | 0.7675 | 0.7199 | 0.6733 | 0.6852 | 0.6663 |
| 0.4279 | 3.54 | 845 | 0.7582 | 0.7136 | 0.6709 | 0.6735 | 0.6690 |
| 0.351 | 3.56 | 850 | 0.7599 | 0.7136 | 0.6728 | 0.6724 | 0.6741 |
| 0.3701 | 3.58 | 855 | 0.7602 | 0.7293 | 0.6922 | 0.6940 | 0.6915 |
| 0.5307 | 3.6 | 860 | 0.7689 | 0.7308 | 0.6936 | 0.6968 | 0.6940 |
| 0.3895 | 3.62 | 865 | 0.7657 | 0.7246 | 0.6897 | 0.6852 | 0.6952 |
| 0.4676 | 3.64 | 870 | 0.7715 | 0.7230 | 0.6875 | 0.6811 | 0.6965 |
| 0.4124 | 3.66 | 875 | 0.7795 | 0.7230 | 0.6899 | 0.6822 | 0.7024 |
| 0.464 | 3.68 | 880 | 0.7933 | 0.7214 | 0.6893 | 0.6829 | 0.7022 |
| 0.4911 | 3.7 | 885 | 0.8201 | 0.7324 | 0.6947 | 0.6999 | 0.7029 |
| 0.4753 | 3.72 | 890 | 0.7907 | 0.7324 | 0.6978 | 0.6928 | 0.7060 |
| 0.3981 | 3.74 | 895 | 0.7811 | 0.7214 | 0.6832 | 0.6823 | 0.6842 |
| 0.5685 | 3.77 | 900 | 0.7806 | 0.7277 | 0.6899 | 0.6880 | 0.6920 |
| 0.4643 | 3.79 | 905 | 0.7792 | 0.7308 | 0.6961 | 0.6942 | 0.6995 |
| 0.4609 | 3.81 | 910 | 0.7886 | 0.7152 | 0.6814 | 0.6738 | 0.6940 |
| 0.5575 | 3.83 | 915 | 0.8158 | 0.7011 | 0.6688 | 0.6656 | 0.6925 |
| 0.4409 | 3.85 | 920 | 0.7921 | 0.7074 | 0.6717 | 0.6657 | 0.6890 |
| 0.5152 | 3.87 | 925 | 0.7839 | 0.7214 | 0.6859 | 0.6783 | 0.7003 |
| 0.4547 | 3.89 | 930 | 0.7646 | 0.7387 | 0.7034 | 0.6998 | 0.7111 |
| 0.32 | 3.91 | 935 | 0.7502 | 0.7277 | 0.6885 | 0.6893 | 0.6881 |
| 0.2742 | 3.93 | 940 | 0.7583 | 0.7167 | 0.6734 | 0.6794 | 0.6686 |
| 0.5842 | 3.95 | 945 | 0.7613 | 0.7261 | 0.6885 | 0.6842 | 0.6942 |
| 0.4406 | 3.97 | 950 | 0.7951 | 0.7387 | 0.7056 | 0.7011 | 0.7178 |
| 0.5251 | 4.0 | 955 | 0.7932 | 0.7261 | 0.6918 | 0.6851 | 0.7056 |
| 0.4235 | 4.02 | 960 | 0.7839 | 0.7167 | 0.6818 | 0.6745 | 0.6949 |
| 0.3876 | 4.04 | 965 | 0.7668 | 0.7277 | 0.6918 | 0.6864 | 0.6987 |
| 0.4244 | 4.06 | 970 | 0.7622 | 0.7246 | 0.6851 | 0.6872 | 0.6834 |
| 0.3872 | 4.08 | 975 | 0.7696 | 0.7261 | 0.6879 | 0.6903 | 0.6867 |
| 0.3878 | 4.1 | 980 | 0.7760 | 0.7183 | 0.6781 | 0.6779 | 0.6787 |
| 0.3029 | 4.12 | 985 | 0.7897 | 0.7340 | 0.6971 | 0.6933 | 0.7027 |
| 0.3147 | 4.14 | 990 | 0.7987 | 0.7308 | 0.6946 | 0.6903 | 0.7003 |
| 0.3531 | 4.16 | 995 | 0.8009 | 0.7167 | 0.6750 | 0.6746 | 0.6753 |
| 0.393 | 4.18 | 1000 | 0.8072 | 0.7136 | 0.6724 | 0.6730 | 0.6718 |
| 0.5162 | 4.21 | 1005 | 0.8105 | 0.7277 | 0.6902 | 0.6861 | 0.6952 |
| 0.4582 | 4.23 | 1010 | 0.8124 | 0.7293 | 0.6919 | 0.6873 | 0.6977 |
| 0.4746 | 4.25 | 1015 | 0.8130 | 0.7340 | 0.7015 | 0.6944 | 0.7125 |
| 0.453 | 4.27 | 1020 | 0.8024 | 0.7418 | 0.7083 | 0.7019 | 0.7174 |
| 0.3852 | 4.29 | 1025 | 0.7856 | 0.7183 | 0.6778 | 0.6763 | 0.6798 |
| 0.3614 | 4.31 | 1030 | 0.7797 | 0.7167 | 0.6766 | 0.6757 | 0.6781 |
| 0.3222 | 4.33 | 1035 | 0.7949 | 0.7293 | 0.6897 | 0.6983 | 0.6899 |
| 0.3769 | 4.35 | 1040 | 0.8036 | 0.7246 | 0.6853 | 0.6974 | 0.6826 |
| 0.3626 | 4.37 | 1045 | 0.7951 | 0.7340 | 0.6947 | 0.7033 | 0.6925 |
| 0.335 | 4.39 | 1050 | 0.8133 | 0.7293 | 0.6999 | 0.6923 | 0.7139 |
| 0.4664 | 4.41 | 1055 | 0.8644 | 0.7074 | 0.6818 | 0.6747 | 0.7095 |
| 0.3939 | 4.44 | 1060 | 0.8280 | 0.7246 | 0.6949 | 0.6859 | 0.7140 |
| 0.3793 | 4.46 | 1065 | 0.7876 | 0.7293 | 0.6919 | 0.6879 | 0.6966 |
| 0.4559 | 4.48 | 1070 | 0.7933 | 0.7277 | 0.6837 | 0.6939 | 0.6787 |
| 0.362 | 4.5 | 1075 | 0.7908 | 0.7308 | 0.6886 | 0.6955 | 0.6862 |
| 0.3833 | 4.52 | 1080 | 0.8061 | 0.7246 | 0.6894 | 0.6912 | 0.6948 |
| 0.2983 | 4.54 | 1085 | 0.8001 | 0.7371 | 0.6958 | 0.7029 | 0.6956 |
| 0.4279 | 4.56 | 1090 | 0.7939 | 0.7340 | 0.6985 | 0.6970 | 0.7007 |
| 0.371 | 4.58 | 1095 | 0.8178 | 0.7355 | 0.7047 | 0.6957 | 0.7213 |
| 0.2119 | 4.6 | 1100 | 0.8276 | 0.7277 | 0.6953 | 0.6877 | 0.7129 |
| 0.4231 | 4.62 | 1105 | 0.8099 | 0.7402 | 0.7089 | 0.7007 | 0.7219 |
| 0.1754 | 4.64 | 1110 | 0.8107 | 0.7340 | 0.6973 | 0.7013 | 0.6991 |
| 0.2922 | 4.67 | 1115 | 0.8135 | 0.7324 | 0.6945 | 0.6989 | 0.6954 |
| 0.3584 | 4.69 | 1120 | 0.8163 | 0.7433 | 0.7120 | 0.7076 | 0.7192 |
| 0.3186 | 4.71 | 1125 | 0.8135 | 0.7449 | 0.7120 | 0.7076 | 0.7178 |
| 0.2247 | 4.73 | 1130 | 0.8224 | 0.7418 | 0.7103 | 0.7060 | 0.7166 |
| 0.5324 | 4.75 | 1135 | 0.8359 | 0.7402 | 0.7119 | 0.7071 | 0.7216 |
| 0.3348 | 4.77 | 1140 | 0.8277 | 0.7340 | 0.6964 | 0.6981 | 0.6991 |
| 0.2568 | 4.79 | 1145 | 0.8138 | 0.7340 | 0.6960 | 0.6974 | 0.6956 |
| 0.3209 | 4.81 | 1150 | 0.8127 | 0.7293 | 0.6892 | 0.6901 | 0.6883 |
| 0.4479 | 4.83 | 1155 | 0.8081 | 0.7340 | 0.6962 | 0.6930 | 0.6999 |
| 0.3882 | 4.85 | 1160 | 0.8195 | 0.7371 | 0.7053 | 0.6981 | 0.7156 |
| 0.3669 | 4.87 | 1165 | 0.8290 | 0.7293 | 0.6967 | 0.6885 | 0.7107 |
| 0.3157 | 4.9 | 1170 | 0.8288 | 0.7355 | 0.7019 | 0.6943 | 0.7135 |
| 0.4165 | 4.92 | 1175 | 0.8225 | 0.7340 | 0.6982 | 0.6948 | 0.7039 |
| 0.2225 | 4.94 | 1180 | 0.8172 | 0.7293 | 0.6896 | 0.6894 | 0.6903 |
| 0.3322 | 4.96 | 1185 | 0.8276 | 0.7246 | 0.6833 | 0.6856 | 0.6814 |
| 0.3355 | 4.98 | 1190 | 0.8414 | 0.7214 | 0.6813 | 0.6819 | 0.6838 |
| 0.3134 | 5.0 | 1195 | 0.8560 | 0.7324 | 0.6976 | 0.6927 | 0.7103 |
| 0.2255 | 5.02 | 1200 | 0.8507 | 0.7308 | 0.6970 | 0.6901 | 0.7070 |
| 0.3257 | 5.04 | 1205 | 0.8506 | 0.7214 | 0.6806 | 0.6834 | 0.6814 |
| 0.2508 | 5.06 | 1210 | 0.8652 | 0.7261 | 0.6840 | 0.6932 | 0.6805 |
| 0.2465 | 5.08 | 1215 | 0.8663 | 0.7246 | 0.6814 | 0.6902 | 0.6771 |
| 0.273 | 5.1 | 1220 | 0.8629 | 0.7199 | 0.6769 | 0.6790 | 0.6765 |
| 0.2377 | 5.13 | 1225 | 0.8664 | 0.7355 | 0.6996 | 0.6956 | 0.7052 |
| 0.2537 | 5.15 | 1230 | 0.8793 | 0.7324 | 0.6998 | 0.6947 | 0.7088 |
| 0.2031 | 5.17 | 1235 | 0.8715 | 0.7261 | 0.6928 | 0.6877 | 0.7005 |
| 0.2148 | 5.19 | 1240 | 0.8654 | 0.7355 | 0.6980 | 0.6962 | 0.7001 |
| 0.2889 | 5.21 | 1245 | 0.8712 | 0.7261 | 0.6872 | 0.6881 | 0.6863 |
| 0.368 | 5.23 | 1250 | 0.8732 | 0.7308 | 0.6917 | 0.6929 | 0.6913 |
| 0.2998 | 5.25 | 1255 | 0.8758 | 0.7293 | 0.6927 | 0.6905 | 0.6958 |
| 0.3705 | 5.27 | 1260 | 0.8713 | 0.7308 | 0.6939 | 0.6906 | 0.6975 |
| 0.2486 | 5.29 | 1265 | 0.8734 | 0.7277 | 0.6929 | 0.6872 | 0.7003 |
| 0.2424 | 5.31 | 1270 | 0.8772 | 0.7214 | 0.6847 | 0.6820 | 0.6909 |
| 0.3169 | 5.33 | 1275 | 0.8768 | 0.7230 | 0.6828 | 0.6847 | 0.6856 |
| 0.2918 | 5.36 | 1280 | 0.8836 | 0.7246 | 0.6856 | 0.6839 | 0.6913 |
| 0.2464 | 5.38 | 1285 | 0.8798 | 0.7246 | 0.6859 | 0.6835 | 0.6909 |
| 0.3308 | 5.4 | 1290 | 0.8762 | 0.7340 | 0.6947 | 0.6909 | 0.6995 |
| 0.2678 | 5.42 | 1295 | 0.8799 | 0.7340 | 0.6952 | 0.6900 | 0.7019 |
| 0.3768 | 5.44 | 1300 | 0.8762 | 0.7293 | 0.6880 | 0.6862 | 0.6907 |
| 0.3272 | 5.46 | 1305 | 0.8741 | 0.7246 | 0.6816 | 0.6831 | 0.6806 |
| 0.2762 | 5.48 | 1310 | 0.8801 | 0.7308 | 0.6872 | 0.6914 | 0.6850 |
| 0.3292 | 5.5 | 1315 | 0.8855 | 0.7324 | 0.6884 | 0.6922 | 0.6868 |
| 0.2974 | 5.52 | 1320 | 0.8856 | 0.7324 | 0.6879 | 0.6911 | 0.6868 |
| 0.3522 | 5.54 | 1325 | 0.8799 | 0.7214 | 0.6767 | 0.6759 | 0.6775 |
| 0.2946 | 5.56 | 1330 | 0.8815 | 0.7199 | 0.6783 | 0.6769 | 0.6804 |
| 0.2064 | 5.59 | 1335 | 0.8876 | 0.7293 | 0.6894 | 0.6839 | 0.6970 |
| 0.2353 | 5.61 | 1340 | 0.9266 | 0.7261 | 0.6938 | 0.6878 | 0.7087 |
| 0.2696 | 5.63 | 1345 | 0.9339 | 0.7152 | 0.6817 | 0.6789 | 0.6956 |
| 0.4084 | 5.65 | 1350 | 0.8897 | 0.7308 | 0.6886 | 0.6897 | 0.6901 |
| 0.3375 | 5.67 | 1355 | 0.8848 | 0.7246 | 0.6812 | 0.6874 | 0.6775 |
| 0.2449 | 5.69 | 1360 | 0.8848 | 0.7230 | 0.6789 | 0.6850 | 0.6749 |
| 0.2459 | 5.71 | 1365 | 0.8859 | 0.7246 | 0.6815 | 0.6832 | 0.6806 |
| 0.3471 | 5.73 | 1370 | 0.8895 | 0.7230 | 0.6818 | 0.6805 | 0.6832 |
| 0.3112 | 5.75 | 1375 | 0.9040 | 0.7261 | 0.6881 | 0.6876 | 0.6919 |
| 0.3404 | 5.77 | 1380 | 0.9397 | 0.7214 | 0.6836 | 0.6910 | 0.6897 |
| 0.2509 | 5.79 | 1385 | 0.9319 | 0.7277 | 0.6852 | 0.6963 | 0.6878 |
| 0.367 | 5.82 | 1390 | 0.8828 | 0.7261 | 0.6839 | 0.6861 | 0.6832 |
| 0.3158 | 5.84 | 1395 | 0.8770 | 0.7167 | 0.6741 | 0.6770 | 0.6729 |
| 0.1901 | 5.86 | 1400 | 0.8789 | 0.7183 | 0.6771 | 0.6783 | 0.6779 |
| 0.2183 | 5.88 | 1405 | 0.8804 | 0.7261 | 0.6845 | 0.6838 | 0.6856 |
| 0.3058 | 5.9 | 1410 | 0.8927 | 0.7277 | 0.6877 | 0.6921 | 0.6866 |
| 0.1906 | 5.92 | 1415 | 0.8929 | 0.7261 | 0.6859 | 0.6889 | 0.6856 |
| 0.2887 | 5.94 | 1420 | 0.8876 | 0.7293 | 0.6904 | 0.6908 | 0.6915 |
| 0.2236 | 5.96 | 1425 | 0.8900 | 0.7261 | 0.6866 | 0.6823 | 0.6918 |
| 0.3345 | 5.98 | 1430 | 0.8948 | 0.7293 | 0.6902 | 0.6884 | 0.6930 |
| 0.3004 | 6.0 | 1435 | 0.8938 | 0.7277 | 0.6871 | 0.6868 | 0.6873 |
| 0.3376 | 6.03 | 1440 | 0.8939 | 0.7308 | 0.6902 | 0.6895 | 0.6913 |
| 0.1774 | 6.05 | 1445 | 0.9019 | 0.7261 | 0.6893 | 0.6890 | 0.6915 |
| 0.1947 | 6.07 | 1450 | 0.8971 | 0.7308 | 0.6913 | 0.6917 | 0.6913 |
| 0.1641 | 6.09 | 1455 | 0.9135 | 0.7089 | 0.6639 | 0.6746 | 0.6574 |
| 0.3712 | 6.11 | 1460 | 0.9258 | 0.7089 | 0.6612 | 0.6755 | 0.6543 |
| 0.234 | 6.13 | 1465 | 0.8986 | 0.7261 | 0.6863 | 0.6868 | 0.6863 |
| 0.2605 | 6.15 | 1470 | 0.9004 | 0.7277 | 0.6875 | 0.6874 | 0.6881 |
| 0.1891 | 6.17 | 1475 | 0.9035 | 0.7293 | 0.6881 | 0.6867 | 0.6907 |
| 0.1988 | 6.19 | 1480 | 0.9032 | 0.7230 | 0.6807 | 0.6796 | 0.6824 |
| 0.1683 | 6.21 | 1485 | 0.9044 | 0.7293 | 0.6867 | 0.6876 | 0.6864 |
| 0.2669 | 6.23 | 1490 | 0.9156 | 0.7277 | 0.6879 | 0.6887 | 0.6885 |
| 0.2185 | 6.26 | 1495 | 0.9242 | 0.7324 | 0.6922 | 0.6927 | 0.6938 |
| 0.1485 | 6.28 | 1500 | 0.9264 | 0.7308 | 0.6916 | 0.6921 | 0.6925 |
| 0.1654 | 6.3 | 1505 | 0.9295 | 0.7308 | 0.6907 | 0.6913 | 0.6905 |
| 0.2177 | 6.32 | 1510 | 0.9347 | 0.7293 | 0.6884 | 0.6898 | 0.6871 |
| 0.1512 | 6.34 | 1515 | 0.9451 | 0.7261 | 0.6853 | 0.6842 | 0.6867 |
| 0.1006 | 6.36 | 1520 | 0.9623 | 0.7261 | 0.6869 | 0.6850 | 0.6911 |
| 0.1367 | 6.38 | 1525 | 0.9851 | 0.7277 | 0.6901 | 0.6916 | 0.6932 |
| 0.2743 | 6.4 | 1530 | 0.9740 | 0.7340 | 0.6958 | 0.6982 | 0.6960 |
| 0.2843 | 6.42 | 1535 | 0.9689 | 0.7261 | 0.6873 | 0.6892 | 0.6856 |
| 0.2563 | 6.44 | 1540 | 0.9781 | 0.7199 | 0.6757 | 0.6819 | 0.6706 |
| 0.2941 | 6.46 | 1545 | 0.9763 | 0.7246 | 0.6844 | 0.6915 | 0.6799 |
| 0.2245 | 6.49 | 1550 | 0.9718 | 0.7340 | 0.6948 | 0.6962 | 0.6952 |
| 0.1545 | 6.51 | 1555 | 0.9737 | 0.7324 | 0.6921 | 0.6921 | 0.6934 |
| 0.3361 | 6.53 | 1560 | 0.9692 | 0.7324 | 0.6944 | 0.6931 | 0.6966 |
| 0.162 | 6.55 | 1565 | 0.9704 | 0.7324 | 0.6946 | 0.6925 | 0.6982 |
| 0.2815 | 6.57 | 1570 | 0.9656 | 0.7340 | 0.6957 | 0.6962 | 0.6964 |
| 0.2087 | 6.59 | 1575 | 0.9639 | 0.7308 | 0.6927 | 0.6919 | 0.6952 |
| 0.2326 | 6.61 | 1580 | 0.9696 | 0.7324 | 0.6959 | 0.6929 | 0.7009 |
| 0.1923 | 6.63 | 1585 | 0.9611 | 0.7340 | 0.6981 | 0.6959 | 0.7019 |
| 0.1684 | 6.65 | 1590 | 0.9606 | 0.7355 | 0.6964 | 0.6978 | 0.6954 |
| 0.3993 | 6.67 | 1595 | 0.9609 | 0.7293 | 0.6888 | 0.6921 | 0.6860 |
| 0.3185 | 6.69 | 1600 | 0.9627 | 0.7355 | 0.6970 | 0.6974 | 0.6982 |
| 0.2099 | 6.72 | 1605 | 0.9814 | 0.7261 | 0.6910 | 0.6906 | 0.6962 |
| 0.1302 | 6.74 | 1610 | 0.9806 | 0.7308 | 0.6938 | 0.6922 | 0.6991 |
| 0.238 | 6.76 | 1615 | 0.9711 | 0.7324 | 0.6928 | 0.6940 | 0.6927 |
| 0.3351 | 6.78 | 1620 | 0.9749 | 0.7230 | 0.6788 | 0.6868 | 0.6738 |
| 0.3485 | 6.8 | 1625 | 0.9761 | 0.7308 | 0.6884 | 0.6937 | 0.6858 |
| 0.137 | 6.82 | 1630 | 0.9766 | 0.7324 | 0.6909 | 0.6947 | 0.6895 |
| 0.1751 | 6.84 | 1635 | 0.9776 | 0.7324 | 0.6932 | 0.6928 | 0.6946 |
| 0.1701 | 6.86 | 1640 | 0.9787 | 0.7355 | 0.6977 | 0.6954 | 0.7005 |
| 0.148 | 6.88 | 1645 | 0.9830 | 0.7387 | 0.7036 | 0.7001 | 0.7076 |
| 0.2204 | 6.9 | 1650 | 0.9860 | 0.7340 | 0.6949 | 0.6942 | 0.6960 |
| 0.1966 | 6.92 | 1655 | 0.9920 | 0.7214 | 0.6793 | 0.6817 | 0.6775 |
| 0.2242 | 6.95 | 1660 | 0.9979 | 0.7152 | 0.6727 | 0.6771 | 0.6688 |
| 0.157 | 6.97 | 1665 | 1.0002 | 0.7293 | 0.6876 | 0.6925 | 0.6852 |
| 0.2665 | 6.99 | 1670 | 1.0067 | 0.7230 | 0.6838 | 0.6860 | 0.6860 |
| 0.159 | 7.01 | 1675 | 1.0002 | 0.7230 | 0.6841 | 0.6834 | 0.6867 |
| 0.1399 | 7.03 | 1680 | 0.9954 | 0.7277 | 0.6887 | 0.6874 | 0.6909 |
| 0.16 | 7.05 | 1685 | 0.9981 | 0.7277 | 0.6878 | 0.6878 | 0.6889 |
| 0.1074 | 7.07 | 1690 | 1.0067 | 0.7277 | 0.6881 | 0.6886 | 0.6889 |
| 0.15 | 7.09 | 1695 | 1.0130 | 0.7261 | 0.6857 | 0.6860 | 0.6863 |
| 0.1956 | 7.11 | 1700 | 1.0177 | 0.7261 | 0.6858 | 0.6854 | 0.6871 |
| 0.0964 | 7.13 | 1705 | 1.0193 | 0.7277 | 0.6877 | 0.6884 | 0.6881 |
| 0.1922 | 7.15 | 1710 | 1.0224 | 0.7277 | 0.6867 | 0.6894 | 0.6854 |
| 0.1334 | 7.18 | 1715 | 1.0224 | 0.7261 | 0.6844 | 0.6883 | 0.6812 |
| 0.1071 | 7.2 | 1720 | 1.0252 | 0.7183 | 0.6746 | 0.6796 | 0.6704 |
| 0.1798 | 7.22 | 1725 | 1.0306 | 0.7214 | 0.6781 | 0.6851 | 0.6724 |
| 0.2293 | 7.24 | 1730 | 1.0302 | 0.7277 | 0.6878 | 0.6900 | 0.6865 |
| 0.1813 | 7.26 | 1735 | 1.0316 | 0.7261 | 0.6884 | 0.6898 | 0.6895 |
| 0.1884 | 7.28 | 1740 | 1.0327 | 0.7261 | 0.6884 | 0.6898 | 0.6895 |
| 0.1482 | 7.3 | 1745 | 1.0328 | 0.7261 | 0.6877 | 0.6900 | 0.6883 |
| 0.1044 | 7.32 | 1750 | 1.0387 | 0.7324 | 0.6947 | 0.6989 | 0.6946 |
| 0.3129 | 7.34 | 1755 | 1.0264 | 0.7261 | 0.6884 | 0.6905 | 0.6887 |
| 0.1136 | 7.36 | 1760 | 1.0226 | 0.7183 | 0.6789 | 0.6826 | 0.6759 |
| 0.1869 | 7.38 | 1765 | 1.0219 | 0.7214 | 0.6812 | 0.6852 | 0.6783 |
| 0.1363 | 7.41 | 1770 | 1.0230 | 0.7261 | 0.6865 | 0.6913 | 0.6836 |
| 0.0683 | 7.43 | 1775 | 1.0295 | 0.7230 | 0.6835 | 0.6885 | 0.6800 |
| 0.155 | 7.45 | 1780 | 1.0372 | 0.7214 | 0.6805 | 0.6870 | 0.6767 |
| 0.3063 | 7.47 | 1785 | 1.0365 | 0.7246 | 0.6849 | 0.6885 | 0.6834 |
| 0.0882 | 7.49 | 1790 | 1.0347 | 0.7214 | 0.6821 | 0.6856 | 0.6795 |
| 0.1951 | 7.51 | 1795 | 1.0363 | 0.7183 | 0.6786 | 0.6803 | 0.6771 |
| 0.1963 | 7.53 | 1800 | 1.0397 | 0.7261 | 0.6865 | 0.6878 | 0.6875 |
| 0.2286 | 7.55 | 1805 | 1.0406 | 0.7261 | 0.6868 | 0.6880 | 0.6883 |
| 0.1509 | 7.57 | 1810 | 1.0362 | 0.7293 | 0.6896 | 0.6930 | 0.6887 |
| 0.1184 | 7.59 | 1815 | 1.0418 | 0.7105 | 0.6661 | 0.6765 | 0.6584 |
| 0.1063 | 7.62 | 1820 | 1.0522 | 0.7105 | 0.6630 | 0.6777 | 0.6529 |
| 0.134 | 7.64 | 1825 | 1.0484 | 0.7199 | 0.6762 | 0.6882 | 0.6686 |
| 0.2583 | 7.66 | 1830 | 1.0450 | 0.7261 | 0.6826 | 0.6912 | 0.6789 |
| 0.1144 | 7.68 | 1835 | 1.0507 | 0.7277 | 0.6882 | 0.6944 | 0.6877 |
| 0.1107 | 7.7 | 1840 | 1.0511 | 0.7214 | 0.6839 | 0.6853 | 0.6877 |
| 0.2604 | 7.72 | 1845 | 1.0395 | 0.7246 | 0.6863 | 0.6858 | 0.6881 |
| 0.1464 | 7.74 | 1850 | 1.0398 | 0.7199 | 0.6787 | 0.6801 | 0.6777 |
| 0.2535 | 7.76 | 1855 | 1.0411 | 0.7246 | 0.6820 | 0.6869 | 0.6779 |
| 0.1572 | 7.78 | 1860 | 1.0406 | 0.7183 | 0.6765 | 0.6789 | 0.6743 |
| 0.1646 | 7.8 | 1865 | 1.0415 | 0.7183 | 0.6746 | 0.6796 | 0.6704 |
| 0.2349 | 7.82 | 1870 | 1.0426 | 0.7261 | 0.6844 | 0.6890 | 0.6816 |
| 0.2146 | 7.85 | 1875 | 1.0449 | 0.7277 | 0.6882 | 0.6907 | 0.6885 |
| 0.1505 | 7.87 | 1880 | 1.0456 | 0.7277 | 0.6915 | 0.6908 | 0.6944 |
| 0.2806 | 7.89 | 1885 | 1.0445 | 0.7261 | 0.6900 | 0.6894 | 0.6926 |
| 0.2245 | 7.91 | 1890 | 1.0402 | 0.7277 | 0.6908 | 0.6904 | 0.6916 |
| 0.1388 | 7.93 | 1895 | 1.0410 | 0.7293 | 0.6914 | 0.6919 | 0.6911 |
| 0.3175 | 7.95 | 1900 | 1.0403 | 0.7261 | 0.6876 | 0.6899 | 0.6856 |
| 0.2023 | 7.97 | 1905 | 1.0379 | 0.7230 | 0.6857 | 0.6885 | 0.6832 |
| 0.1165 | 7.99 | 1910 | 1.0389 | 0.7261 | 0.6881 | 0.6913 | 0.6852 |
| 0.1103 | 8.01 | 1915 | 1.0431 | 0.7246 | 0.6865 | 0.6899 | 0.6834 |
| 0.1822 | 8.03 | 1920 | 1.0520 | 0.7214 | 0.6820 | 0.6872 | 0.6775 |
| 0.1773 | 8.05 | 1925 | 1.0600 | 0.7121 | 0.6690 | 0.6790 | 0.6614 |
| 0.1259 | 8.08 | 1930 | 1.0601 | 0.7183 | 0.6773 | 0.6843 | 0.6716 |
| 0.1737 | 8.1 | 1935 | 1.0619 | 0.7183 | 0.6804 | 0.6845 | 0.6775 |
| 0.1776 | 8.12 | 1940 | 1.0646 | 0.7277 | 0.6901 | 0.6921 | 0.6905 |
| 0.112 | 8.14 | 1945 | 1.0652 | 0.7324 | 0.6965 | 0.6968 | 0.6982 |
| 0.1649 | 8.16 | 1950 | 1.0650 | 0.7324 | 0.6962 | 0.6960 | 0.6982 |
| 0.1296 | 8.18 | 1955 | 1.0660 | 0.7308 | 0.6958 | 0.6954 | 0.6976 |
| 0.1325 | 8.2 | 1960 | 1.0651 | 0.7277 | 0.6897 | 0.6905 | 0.6901 |
| 0.1422 | 8.22 | 1965 | 1.0680 | 0.7199 | 0.6782 | 0.6839 | 0.6738 |
| 0.3486 | 8.24 | 1970 | 1.0723 | 0.7183 | 0.6729 | 0.6821 | 0.6661 |
| 0.2213 | 8.26 | 1975 | 1.0700 | 0.7121 | 0.6632 | 0.6738 | 0.6563 |
| 0.1206 | 8.28 | 1980 | 1.0671 | 0.7152 | 0.6673 | 0.6766 | 0.6622 |
| 0.1196 | 8.31 | 1985 | 1.0657 | 0.7183 | 0.6723 | 0.6796 | 0.6692 |
| 0.1955 | 8.33 | 1990 | 1.0568 | 0.7183 | 0.6745 | 0.6812 | 0.6696 |
| 0.1085 | 8.35 | 1995 | 1.0566 | 0.7152 | 0.6735 | 0.6813 | 0.6672 |
| 0.1359 | 8.37 | 2000 | 1.0549 | 0.7230 | 0.6862 | 0.6890 | 0.6836 |
| 0.2431 | 8.39 | 2005 | 1.0555 | 0.7308 | 0.6960 | 0.6976 | 0.6944 |
| 0.1512 | 8.41 | 2010 | 1.0570 | 0.7324 | 0.6966 | 0.6972 | 0.6970 |
| 0.1002 | 8.43 | 2015 | 1.0601 | 0.7355 | 0.6997 | 0.7000 | 0.7005 |
| 0.1529 | 8.45 | 2020 | 1.0601 | 0.7277 | 0.6913 | 0.6915 | 0.6913 |
| 0.1633 | 8.47 | 2025 | 1.0618 | 0.7261 | 0.6881 | 0.6882 | 0.6883 |
| 0.068 | 8.49 | 2030 | 1.0657 | 0.7199 | 0.6816 | 0.6826 | 0.6812 |
| 0.1883 | 8.51 | 2035 | 1.0644 | 0.7261 | 0.6885 | 0.6881 | 0.6891 |
| 0.1484 | 8.54 | 2040 | 1.0624 | 0.7324 | 0.6961 | 0.6952 | 0.6970 |
| 0.1438 | 8.56 | 2045 | 1.0642 | 0.7340 | 0.6983 | 0.6973 | 0.6995 |
| 0.1164 | 8.58 | 2050 | 1.0660 | 0.7308 | 0.6950 | 0.6948 | 0.6952 |
| 0.1523 | 8.6 | 2055 | 1.0702 | 0.7246 | 0.6875 | 0.6895 | 0.6857 |
| 0.0793 | 8.62 | 2060 | 1.0749 | 0.7230 | 0.6832 | 0.6874 | 0.6797 |
| 0.0752 | 8.64 | 2065 | 1.0783 | 0.7214 | 0.6797 | 0.6853 | 0.6755 |
| 0.0825 | 8.66 | 2070 | 1.0854 | 0.7230 | 0.6798 | 0.6868 | 0.6745 |
| 0.1463 | 8.68 | 2075 | 1.0937 | 0.7199 | 0.6748 | 0.6837 | 0.6686 |
| 0.1806 | 8.7 | 2080 | 1.0951 | 0.7199 | 0.6786 | 0.6854 | 0.6741 |
| 0.1354 | 8.72 | 2085 | 1.0925 | 0.7277 | 0.6885 | 0.6918 | 0.6877 |
| 0.1348 | 8.74 | 2090 | 1.0896 | 0.7324 | 0.6960 | 0.6958 | 0.6982 |
| 0.174 | 8.77 | 2095 | 1.0875 | 0.7261 | 0.6908 | 0.6900 | 0.6918 |
| 0.1424 | 8.79 | 2100 | 1.0902 | 0.7261 | 0.6896 | 0.6897 | 0.6895 |
| 0.1056 | 8.81 | 2105 | 1.0938 | 0.7261 | 0.6886 | 0.6906 | 0.6867 |
| 0.1662 | 8.83 | 2110 | 1.0952 | 0.7261 | 0.6866 | 0.6900 | 0.6836 |
| 0.1077 | 8.85 | 2115 | 1.0970 | 0.7246 | 0.6853 | 0.6887 | 0.6830 |
| 0.2363 | 8.87 | 2120 | 1.0967 | 0.7230 | 0.6832 | 0.6872 | 0.6808 |
| 0.1287 | 8.89 | 2125 | 1.0975 | 0.7261 | 0.6875 | 0.6916 | 0.6860 |
| 0.141 | 8.91 | 2130 | 1.0982 | 0.7277 | 0.6890 | 0.6930 | 0.6877 |
| 0.1411 | 8.93 | 2135 | 1.0962 | 0.7230 | 0.6824 | 0.6861 | 0.6800 |
| 0.1088 | 8.95 | 2140 | 1.0954 | 0.7230 | 0.6823 | 0.6880 | 0.6777 |
| 0.1032 | 8.97 | 2145 | 1.0942 | 0.7214 | 0.6807 | 0.6866 | 0.6759 |
| 0.0683 | 9.0 | 2150 | 1.0915 | 0.7230 | 0.6825 | 0.6877 | 0.6785 |
| 0.1402 | 9.02 | 2155 | 1.0894 | 0.7277 | 0.6894 | 0.6934 | 0.6861 |
| 0.0853 | 9.04 | 2160 | 1.0914 | 0.7246 | 0.6841 | 0.6891 | 0.6802 |
| 0.1155 | 9.06 | 2165 | 1.0937 | 0.7214 | 0.6787 | 0.6846 | 0.6743 |
| 0.0675 | 9.08 | 2170 | 1.0961 | 0.7230 | 0.6801 | 0.6869 | 0.6753 |
| 0.0754 | 9.1 | 2175 | 1.0959 | 0.7246 | 0.6828 | 0.6881 | 0.6791 |
| 0.0974 | 9.12 | 2180 | 1.0975 | 0.7293 | 0.6892 | 0.6926 | 0.6867 |
| 0.1567 | 9.14 | 2185 | 1.0993 | 0.7246 | 0.6850 | 0.6886 | 0.6822 |
| 0.1691 | 9.16 | 2190 | 1.0999 | 0.7261 | 0.6866 | 0.6917 | 0.6824 |
| 0.1026 | 9.18 | 2195 | 1.1006 | 0.7246 | 0.6850 | 0.6904 | 0.6806 |
| 0.0727 | 9.21 | 2200 | 1.1029 | 0.7246 | 0.6850 | 0.6904 | 0.6806 |
| 0.0834 | 9.23 | 2205 | 1.1046 | 0.7199 | 0.6783 | 0.6843 | 0.6738 |
| 0.1159 | 9.25 | 2210 | 1.1049 | 0.7230 | 0.6823 | 0.6880 | 0.6777 |
| 0.1586 | 9.27 | 2215 | 1.1046 | 0.7214 | 0.6808 | 0.6852 | 0.6775 |
| 0.1292 | 9.29 | 2220 | 1.1043 | 0.7230 | 0.6824 | 0.6865 | 0.6793 |
| 0.0743 | 9.31 | 2225 | 1.1035 | 0.7246 | 0.6851 | 0.6889 | 0.6822 |
| 0.06 | 9.33 | 2230 | 1.1022 | 0.7277 | 0.6912 | 0.6927 | 0.6901 |
| 0.1545 | 9.35 | 2235 | 1.1039 | 0.7293 | 0.6916 | 0.6932 | 0.6907 |
| 0.1546 | 9.37 | 2240 | 1.1058 | 0.7230 | 0.6833 | 0.6861 | 0.6812 |
| 0.2023 | 9.39 | 2245 | 1.1066 | 0.7214 | 0.6808 | 0.6852 | 0.6775 |
| 0.1607 | 9.41 | 2250 | 1.1077 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0658 | 9.44 | 2255 | 1.1090 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0417 | 9.46 | 2260 | 1.1107 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.063 | 9.48 | 2265 | 1.1129 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0988 | 9.5 | 2270 | 1.1147 | 0.7230 | 0.6833 | 0.6886 | 0.6789 |
| 0.1082 | 9.52 | 2275 | 1.1155 | 0.7230 | 0.6833 | 0.6886 | 0.6789 |
| 0.1984 | 9.54 | 2280 | 1.1154 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1793 | 9.56 | 2285 | 1.1153 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1324 | 9.58 | 2290 | 1.1152 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.1059 | 9.6 | 2295 | 1.1157 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.0473 | 9.62 | 2300 | 1.1158 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.1065 | 9.64 | 2305 | 1.1166 | 0.7230 | 0.6818 | 0.6868 | 0.6777 |
| 0.1373 | 9.67 | 2310 | 1.1173 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1248 | 9.69 | 2315 | 1.1177 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0966 | 9.71 | 2320 | 1.1183 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0742 | 9.73 | 2325 | 1.1189 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0827 | 9.75 | 2330 | 1.1193 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.143 | 9.77 | 2335 | 1.1202 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1623 | 9.79 | 2340 | 1.1201 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1495 | 9.81 | 2345 | 1.1197 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.0965 | 9.83 | 2350 | 1.1195 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1297 | 9.85 | 2355 | 1.1194 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1164 | 9.87 | 2360 | 1.1195 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1759 | 9.9 | 2365 | 1.1195 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.2404 | 9.92 | 2370 | 1.1192 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1467 | 9.94 | 2375 | 1.1189 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1969 | 9.96 | 2380 | 1.1187 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.1573 | 9.98 | 2385 | 1.1187 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
| 0.2614 | 10.0 | 2390 | 1.1188 | 0.7246 | 0.6845 | 0.6892 | 0.6806 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
|
anindabitm/sagemaker-distilbert-emotion
|
anindabitm
| 2021-11-18T17:43:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2434
- Accuracy: 0.9165
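A minimal inference sketch (the checkpoint id below is taken from this card's title; the example sentence and the expected label set are assumptions rather than part of the original card):

```python
from transformers import pipeline

# Checkpoint name assumed from this card's title.
classifier = pipeline("text-classification", model="anindabitm/sagemaker-distilbert-emotion")

# Returns a list like [{"label": "...", "score": ...}]; with the emotion dataset the labels
# are expected to be the six basic emotions (sadness, joy, love, anger, fear, surprise).
print(classifier("I am so happy to see you again!"))
```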
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
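As a rough illustration, the settings above map onto `transformers` `TrainingArguments` as in the sketch below (the `output_dir` is a placeholder and this is not the exact script used for this run):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder name.
training_args = TrainingArguments(
    output_dir="sagemaker-distilbert-emotion",
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    fp16=True,  # corresponds to "Native AMP" mixed precision
)
```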
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9423 | 1.0 | 500 | 0.2434 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
qinluo/wobert-chinese-plus
|
qinluo
| 2021-11-18T13:43:26Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"wobert",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- wobert
inference: True
---
## Word-based BERT model
Original model and documentation: https://github.com/ZhuiyiTechnology/WoBERT
PyTorch model: https://github.com/JunnYu/WoBERT_pytorch
## Install WoBertTokenizer
```bash
pip install git+https://github.com/JunnYu/WoBERT_pytorch.git
```
## TF Example
```python
from transformers import TFBertForMaskedLM as WoBertForMaskedLM
from wobert import WoBertTokenizer
import tensorflow as tf
pretrained_model_or_path = 'qinluo/wobert-chinese-plus'
tokenizer = WoBertTokenizer.from_pretrained(pretrained_model_or_path)
model = WoBertForMaskedLM.from_pretrained(pretrained_model_or_path)
text = '今天[MASK]很好,我[MASK]去公园玩。'
inputs = tokenizer(text, return_tensors='tf')
outputs = model(**inputs).logits[0]
outputs_sentence = ''
# Walk over the encoded ids; for each [MASK] position, insert the top-5 predicted tokens.
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(tf.math.top_k(outputs[i], k=5)[1])
        outputs_sentence += '[' + '|'.join(tokens) + ']'
    else:
        outputs_sentence += ''.join(tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(outputs_sentence)
# 今天[天气|阳光|天|心情|空气]很好,我[想|要|打算|准备|就]去公园玩。
```
## PyTorch Example
```python
from transformers import BertForMaskedLM as WoBertForMaskedLM
from wobert import WoBertTokenizer
pretrained_model_or_path = 'qinluo/wobert-chinese-plus'
tokenizer = WoBertTokenizer.from_pretrained(pretrained_model_or_path)
model = WoBertForMaskedLM.from_pretrained(pretrained_model_or_path)
text = '今天[MASK]很好,我[MASK]去公园玩。'
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs).logits[0]
outputs_sentence = ''
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(outputs[i].topk(k=5)[1])
outputs_sentence += '[' + '|'.join(tokens) + ']'
else:
outputs_sentence += ''.join(tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(outputs_sentence)
# 今天[天气|阳光|天|心情|空气]很好,我[想|要|打算|准备|就]去公园玩。
```
## Citation
BibTeX:
```tex
@techreport{zhuiyiwobert,
title={WoBERT: Word-based Chinese BERT model - ZhuiyiAI},
author={Jianlin Su},
year={2020},
url="https://github.com/ZhuiyiTechnology/WoBERT",
}
```
|
FabianGroeger/HotelBERT
|
FabianGroeger
| 2021-11-18T05:56:08Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: de
widget:
- text: "Das <mask> hat sich toll um uns gekümmert."
---
# HotelBERT
This model was trained on reviews from a well-known German hotel platform.
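A minimal fill-mask sketch (the repo id is taken from this card's title and the example sentence from the widget above):

```python
from transformers import pipeline

# Repo id assumed from this card's title; the sentence is the widget example above.
fill_mask = pipeline("fill-mask", model="FabianGroeger/HotelBERT")

for prediction in fill_mask("Das <mask> hat sich toll um uns gekümmert."):
    print(prediction["token_str"], round(prediction["score"], 4))
```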
|
FabianGroeger/HotelBERT-small
|
FabianGroeger
| 2021-11-18T05:39:47Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"roberta",
"fill-mask",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: de
widget:
- text: "Das <mask> hat sich toll um uns gekümmert."
---
# HotelBERT-small
This model was trained on reviews from a well-known German hotel platform.
|
eml914/streaming_transformer_asr_librispeech
|
eml914
| 2021-11-18T02:23:37Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `eml914/streaming_transformer_asr_librispeech`
This model was trained by Emiru Tsunoo using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 12eb132418a1f69548f7998e53273cd05d989ed9
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model eml914/streaming_transformer_asr_librispeech
```
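For inference outside the recipe, a minimal Python sketch along the usual ESPnet2 model-zoo pattern (assumes `espnet_model_zoo` is installed; the wav path is a placeholder, and it is an assumption that the offline `Speech2Text` interface handles this streaming encoder — ESPnet also ships a dedicated streaming inference entry point):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and unpacks the model from the Hub (requires espnet_model_zoo).
speech2text = Speech2Text.from_pretrained("eml914/streaming_transformer_asr_librispeech")

# "speech.wav" is a placeholder for any 16 kHz mono LibriSpeech-style recording.
speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```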
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Nov 17 18:18:46 JST 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.4.0`
- Git hash: `12eb132418a1f69548f7998e53273cd05d989ed9`
- Commit date: `Tue Nov 16 10:12:21 2021 +0900`
## asr_train_asr_streaming_fbank_pitch_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|54402|97.6|2.2|0.3|0.3|2.7|31.9|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|50948|93.5|5.8|0.7|0.9|7.4|50.4|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|52576|97.5|2.3|0.3|0.3|2.9|33.1|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean_dbg|2620|62|96.8|3.2|0.0|0.0|3.2|0.0|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|52343|93.5|5.7|0.8|0.9|7.4|53.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|288456|99.2|0.4|0.4|0.3|1.1|31.9|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|265951|97.2|1.6|1.2|0.9|3.7|50.4|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|281530|99.2|0.4|0.4|0.3|1.1|33.1|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean_dbg|2620|367|99.5|0.0|0.5|0.8|1.4|0.0|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|272758|97.3|1.5|1.3|0.9|3.6|53.7|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|68010|96.8|2.1|1.1|0.4|3.6|31.9|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|63110|91.9|5.9|2.2|1.5|9.6|50.4|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|65818|96.7|2.2|1.1|0.4|3.7|33.1|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean_dbg|2620|94|97.9|2.1|0.0|1.1|3.2|0.0|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|65101|91.8|5.5|2.7|1.2|9.4|53.7|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_streaming.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_fbank_pitch_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 33851
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 800
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/fbank_pitch/train_960_sp/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/fbank_pitch/dev/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
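# Added annotation, not part of the original dump: to the best of my understanding,
# ESPnet2's warmuplr is the Transformer-style inverse-square-root schedule
#   lr(step) = lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5),
# i.e. a linear ramp over the first 25000 steps followed by a ~1/sqrt(step) decay.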
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- ▁PA
- ▁FOLLOWED
- ▁REMEMBER
- ▁LONGER
- ▁AGE
- ▁TAKING
- ▁LINE
- ▁SEEM
- ▁HAPPY
- LAND
- EM
- ▁STAY
- ▁PLAY
- ▁COMMON
- ▁GA
- ▁BOOK
- ▁TIMES
- ▁OBJECT
- ▁SEVEN
- QUI
- DO
- UND
- ▁FL
- ▁PRETTY
- ▁FAIR
- WAY
- ▁WOOD
- ▁REACHED
- ▁APPEARED
- ▁SWEET
- ▁FALL
- BA
- ▁PASS
- ▁SIGN
- ▁TREE
- IONS
- ▁GARDEN
- ▁ILL
- ▁ART
- ▁REMAIN
- ▁OPENED
- ▁BRIGHT
- ▁STREET
- ▁TROUBLE
- ▁PAIN
- ▁CONTINUED
- ▁SCHOOL
- OUR
- ▁CARRIED
- ▁SAYING
- HA
- ▁CHANGE
- ▁FOLLOW
- ▁GOLD
- ▁SW
- ▁FEELING
- ▁COMMAND
- ▁BEAR
- ▁CERTAINLY
- ▁BLUE
- ▁NE
- CA
- ▁WILD
- ▁ACCOUNT
- ▁OUGHT
- UD
- ▁T
- ▁BREATH
- ▁WANTED
- ▁RI
- ▁HEAVEN
- ▁PURPOSE
- ▁CHARACTER
- ▁RICH
- ▁PE
- ▁DRESS
- OS
- FA
- ▁TH
- ▁ENGLISH
- ▁CHANCE
- ▁SHIP
- ▁VIEW
- ▁TOWARD
- AK
- ▁JOY
- ▁JA
- ▁HAR
- ▁NEITHER
- ▁FORCE
- ▁UNCLE
- DER
- ▁PLAN
- ▁PRINCESS
- DI
- ▁CHIEF
- ▁HAT
- ▁LIVED
- ▁AB
- ▁VISIT
- ▁MOR
- TEN
- ▁WALL
- UC
- ▁MINE
- ▁PLEASURE
- ▁SMILE
- ▁FRONT
- ▁HU
- ▁DEAL
- OW
- ▁FURTHER
- GED
- ▁TRIED
- DA
- VA
- ▁NONE
- ▁ENTERED
- ▁QUEEN
- ▁PAY
- ▁EL
- ▁EXCEPT
- ▁SHA
- ▁FORWARD
- ▁EIGHT
- ▁ADDED
- ▁PUBLIC
- ▁EIGHTEEN
- ▁STAR
- ▁HAPPENED
- ▁LED
- ▁WALKED
- ▁ALTHOUGH
- ▁LATER
- ▁SPIRIT
- ▁WALK
- ▁BIT
- ▁MEET
- LIN
- ▁FI
- LT
- ▁MOUTH
- ▁WAIT
- ▁HOURS
- ▁LIVING
- ▁YOURSELF
- ▁FAST
- ▁CHA
- ▁HALL
- ▁BEYOND
- ▁BOAT
- ▁SECRET
- ENS
- ▁CHAIR
- RN
- ▁RECEIVED
- ▁CAT
- RESS
- ▁DESIRE
- ▁GENTLEMAN
- UGH
- ▁LAID
- EVER
- ▁OCCASION
- ▁WONDER
- ▁GU
- ▁PARTY
- DEN
- ▁FISH
- ▁SEND
- ▁NEARLY
- ▁TRY
- CON
- ▁SEEMS
- RS
- ▁BELL
- ▁BRA
- ▁SILENCE
- IG
- ▁GUARD
- ▁DIE
- ▁DOING
- ▁TU
- ▁COR
- ▁EARLY
- ▁BANK
- ▁FIGURE
- IF
- ▁ENGLAND
- ▁MARY
- ▁AFRAID
- LER
- ▁FO
- ▁WATCH
- ▁FA
- ▁VA
- ▁GRE
- ▁AUNT
- PED
- ▁SERVICE
- ▁JE
- ▁PEN
- ▁MINUTES
- ▁PAN
- ▁TREES
- NED
- ▁GLASS
- ▁TONE
- ▁PLEASE
- ▁FORTH
- ▁CROSS
- ▁EXCLAIMED
- ▁DREW
- ▁EAT
- ▁AH
- ▁GRAVE
- ▁CUR
- PA
- URE
- CENT
- ▁MILES
- ▁SOFT
- ▁AGO
- ▁POSITION
- ▁WARM
- ▁LENGTH
- ▁NECESSARY
- ▁THINKING
- ▁PICTURE
- ▁PI
- SHIP
- IBLE
- ▁HEAVY
- ▁ATTENTION
- ▁DOG
- ABLY
- ▁STANDING
- ▁NATURAL
- ▁APPEAR
- OV
- ▁CAUGHT
- VO
- ISM
- ▁SPRING
- ▁EXPERIENCE
- ▁PAT
- OT
- ▁STOPPED
- ▁REGARD
- ▁HARDLY
- ▁SELF
- ▁STRENGTH
- ▁GREW
- ▁KNIGHT
- ▁OPINION
- ▁WIDE
- ▁INSTEAD
- ▁SOUTH
- ▁TRANS
- ▁CORNER
- ▁LEARN
- ▁ISLAND
- ▁MI
- ▁THIRD
- ▁STE
- ▁STRAIGHT
- ▁TEA
- ▁BOUND
- ▁SEEING
- ▁JU
- ▁DINNER
- ▁BEAUTY
- ▁PEACE
- AH
- ▁REP
- ▁SILENT
- ▁CRE
- ALLY
- RIC
- ▁STEP
- ▁VER
- ▁JO
- GER
- ▁SITTING
- ▁THIRTY
- ▁SAVE
- ENED
- ▁GLANCE
- ▁REACH
- ▁ACTION
- ▁SAL
- ▁SAD
- ▁STONE
- ITIES
- ▁FRENCH
- ▁STRUCK
- ▁PAPER
- ▁WHATEVER
- ▁SUB
- ▁DISTANCE
- ▁WRONG
- ▁KNOWLEDGE
- ▁SAFE
- ▁SNOW
- ▁MUSIC
- ▁FIFTY
- RON
- ▁ATTEMPT
- ▁GOVERNMENT
- TU
- ▁CROWD
- ▁BESIDES
- ▁LOVED
- ▁BOX
- ▁DIRECTION
- ▁TRAIN
- ▁NORTH
- ▁THICK
- ▁GETTING
- AV
- ▁FLOOR
- ▁COMPANY
- ▁BLOW
- ▁PLAIN
- TRO
- ▁BESIDE
- ▁ROCK
- ▁IMMEDIATELY
- FI
- ▁SHADOW
- ▁SIT
- ORS
- ILE
- ▁DRINK
- ▁SPOT
- ▁DANGER
- ▁AL
- ▁SAINT
- ▁SLOWLY
- ▁PALACE
- IER
- ▁RESULT
- ▁PETER
- ▁FOREST
- ▁BELONG
- ▁SU
- ▁PAR
- RIS
- ▁TEARS
- ▁APPEARANCE
- ▁GATE
- BU
- ITION
- ▁QUICKLY
- ▁QUIET
- ▁LONDON
- ▁START
- ▁BROWN
- TRA
- KIN
- ▁CONSIDER
- ▁BATTLE
- ▁ANNE
- ▁PIECE
- ▁DIED
- ▁SUCCESS
- ▁LIPS
- ▁FILLED
- ▁FORGET
- ▁POST
- IFIED
- ▁MARGARET
- ▁FOOD
- HAM
- ▁PLEASANT
- ▁FE
- ▁EXPRESSION
- ▁POCKET
- ▁FRESH
- ▁WEAR
- TRI
- ▁BROKEN
- ▁LAUGHED
- GING
- ▁FOLLOWING
- WN
- IP
- ▁TOUCH
- ▁YOUTH
- ATIVE
- ▁LEG
- ▁WEEK
- ▁REMAINED
- ▁EASY
- NER
- RK
- ▁ENTER
- ▁FIGHT
- ▁PLACED
- ▁TRAVEL
- ▁SIMPLE
- ▁GIRLS
- ▁WAITING
- ▁STOP
- ▁WAVE
- AU
- ▁WISE
- ▁CAMP
- TURE
- UB
- ▁VE
- ▁OFFICE
- ▁GRAND
- ▁FIT
- ▁JUDGE
- UP
- MENTS
- ▁QUICK
- HI
- ▁FLO
- RIES
- VAL
- ▁COMFORT
- ▁PARTICULAR
- ▁STARTED
- ▁SUIT
- ▁NI
- ▁PALE
- ▁IMPOSSIBLE
- ▁HOT
- ▁CONVERSATION
- ▁SCENE
- ▁BOYS
- ▁WIN
- ▁BRE
- ▁SOCIETY
- ▁OUTSIDE
- ▁WRITE
- ▁EFFORT
- ▁TALKING
- ▁FORTUNE
- ▁NINE
- ▁WA
- ▁SINGLE
- ▁RULE
- ▁PORT
- ▁WINTER
- ▁CAST
- ▁CRA
- ▁HAPPEN
- ▁CRO
- ▁SHUT
- NING
- ▁GUN
- ▁NOBLE
- ▁BEGIN
- ▁PATH
- ▁SKY
- ▁WONDERFUL
- ▁SUDDEN
- ▁ARMY
- ▁CHE
- ▁WORTH
- ▁MOUNTAIN
- ▁MIN
- AG
- ▁FLU
- ▁GRACE
- ▁CHAPTER
- ▁BELOW
- ▁RING
- ▁TURNING
- ▁IRON
- ▁TOP
- ▁AFTERNOON
- ORY
- ▁EVIL
- ▁TRUST
- ▁BOW
- ▁TRI
- ▁SAIL
- ▁CONTENT
- ▁HORSES
- ITE
- ▁SILVER
- AP
- ▁LAD
- ▁RUNNING
- ▁HILL
- ▁BEGINNING
- ▁MAD
- ▁HABIT
- GRA
- ▁CLOTHES
- ▁MORROW
- ▁CRY
- ▁FASHION
- ▁PRESENCE
- ▁Z
- FE
- ▁ARRIVED
- ▁QUARTER
- ▁PERFECT
- ▁WO
- ▁TRA
- ▁USUAL
- ▁NECK
- ▁MARRIED
- ▁SEAT
- ▁WI
- ▁GAR
- ▁SAND
- ▁SHORE
- ▁GIVING
- NY
- ▁PROBABLY
- ▁MINUTE
- ▁EXPECT
- ▁DU
- ▁SHOT
- ▁INSTANT
- ▁DEGREE
- ▁COLOR
- ▁WEST
- RT
- ▁MARCH
- ▁BIRD
- ▁SHOWED
- ▁GREATER
- ▁SERIOUS
- ▁CARRY
- ▁COVERED
- ▁FORMER
- ▁LOUD
- ▁MOVED
- ▁MASS
- ▁SEEK
- ▁CHO
- GEN
- ▁ROMAN
- IB
- ▁MOON
- ▁BOARD
- ▁STREAM
- ▁EASILY
- ▁WISHED
- ▁SEARCH
- ▁COULDN
- ▁MONTHS
- ▁SICK
- LIE
- ▁DUTY
- ▁TWELVE
- ▁FAINT
- ▁STRANGER
- ▁SURPRISE
- ▁KILL
- ▁LEAVING
- ▁JOURNEY
- ▁SCARCELY
- ▁RAISED
- ▁SPEAKING
- ▁TERRIBLE
- ▁TOM
- ▁FIELD
- ▁GAME
- ▁QUA
- ▁PROMISE
- ▁LIE
- ▁CONDITION
- ▁TRO
- ▁PERSONAL
- ▁TALL
- ▁STICK
- ▁THREW
- ▁MARRY
- ▁VAN
- ▁BURN
- ▁ACCORDING
- ▁RISE
- ▁ATTACK
- ▁SWORD
- ▁GUESS
- ▁THOUGHTS
- ▁THIN
- ▁THROW
- ▁CALM
- SIDE
- ▁VILLAGE
- ▁DEN
- ▁ANXIOUS
- ▁MER
- GI
- ▁EXPECTED
- ▁BALL
- ▁ESPECIALLY
- ▁CHARGE
- ▁MEASURE
- ISE
- ▁NICE
- ▁TRYING
- ▁ALLOW
- ▁SHARP
- ▁BREAD
- ▁HONOUR
- ▁HONOR
- ▁ENTIRELY
- ▁BILL
- ▁BRI
- ▁WRITTEN
- ▁AR
- ▁BROKE
- ▁KILLED
- ▁MARK
- ▁VEN
- ▁LADIES
- ▁LEARNED
- ▁FLOWERS
- PLE
- ▁FORTY
- ▁OFFER
- ▁HAPPINESS
- ▁PRAY
- ▁CLASS
- ▁FER
- ▁PRINCIPLE
- GU
- ▁BOOKS
- ▁SHAPE
- ▁SUMMER
- ▁JACK
- ▁DRAW
- ▁GOLDEN
- ▁DECIDED
- ▁LEAD
- ▁UNLESS
- ▁HARM
- ▁LISTEN
- HER
- ▁SHOOK
- ▁INFLUENCE
- ▁PERFECTLY
- ▁MARRIAGE
- ▁BROAD
- ▁ESCAPE
- ▁STATES
- ▁MIDDLE
- ▁PLANT
- ▁MIL
- ▁MOVEMENT
- ▁NOISE
- ▁ENEMY
- ▁HISTORY
- ▁BREAK
- ROUS
- ▁UNDERSTOOD
- ▁LATTER
- FER
- ▁COMES
- ▁MERELY
- ▁SIMPLY
- WI
- ▁IMAGINE
- ▁LOWER
- ▁CONDUCT
- ▁BORN
- WA
- ▁YARD
- ▁KA
- ▁CLOSED
- ▁NOTE
- GA
- ▁STRA
- RAN
- ▁EXIST
- EV
- ▁SPEECH
- ▁BITTER
- JO
- ▁MAKES
- ▁GRASS
- ▁REPLY
- ▁CHANGED
- ▁MON
- ▁LYING
- ▁DANCE
- ▁FINALLY
- ▁AMERICAN
- ▁ENJOY
- ▁CONTAIN
- ▁MEANT
- USE
- ▁OBSERVED
- THER
- ▁LAUGH
- ▁AFTERWARDS
- ▁BEAT
- ▁RACE
- ▁EQUAL
- ▁RAIN
- PS
- ▁STEPS
- ▁BENEATH
- ▁TAIL
- ▁TASTE
- IO
- EY
- ▁CHAR
- ▁GE
- GN
- TIN
- ▁GROW
- ▁TE
- IANS
- ▁MOVE
- ▁REPEATED
- ▁DRIVE
- TUR
- ▁SI
- CLOCK
- ▁BRAVE
- ▁MADAME
- ▁LOT
- ▁CASTLE
- ▁HI
- AND
- ▁FUTURE
- ▁RELATION
- ▁SORRY
- ▁HEALTH
- ▁DICK
- ▁R
- ▁BUILDING
- ▁EDGE
- ▁BLESS
- ▁SPITE
- WE
- ▁MIS
- ▁PRISONER
- ▁ALLOWED
- ▁PH
- ▁CATCH
- MER
- ETH
- ▁COAT
- ▁COMPLETE
- ▁WOULDN
- ▁CREATURE
- ▁YELLOW
- ▁IMPORTANT
- ▁ADD
- ▁PASSING
- ▁DARKNESS
- ▁CARRIAGE
- ▁MILL
- ▁FIFTEEN
- NCY
- ▁HUNG
- ▁OB
- ▁PLEASED
- ▁SPREAD
- ▁CURIOUS
- ▁WORSE
- ▁CIRCUMSTANCES
- ▁GI
- LAR
- ▁CAL
- ▁HY
- ▁MERE
- ▁JANE
- ▁EAST
- BI
- ▁CUP
- ▁BLIND
- ▁PASSION
- ▁DISCOVERED
- ▁NOTICE
- ▁REPORT
- ▁SPACE
- ▁PRESENTLY
- ▁SORROW
- ▁PACK
- ▁DIN
- CY
- ▁DRY
- ▁ANCIENT
- ▁DRESSED
- ▁COVER
- ▁VO
- ▁EXISTENCE
- ▁EXACTLY
- ▁BEAST
- ▁PROPER
- ▁DROPPED
- ▁CLEAN
- ▁COLOUR
- ▁HOST
- ▁CHAMBER
- ▁FAITH
- LET
- ▁DETERMINED
- ▁PRIEST
- ▁STORM
- ▁SKIN
- ▁DARE
- ▁PERSONS
- ▁PICK
- ▁NARROW
- ▁SUPPORT
- ▁PRIVATE
- ▁SMILED
- ▁COUSIN
- ▁DRAWING
- ▁ATTEND
- ▁COOK
- ▁PREVENT
- ▁VARIOUS
- ▁BLA
- ▁FIXED
- ▁WEAK
- THE
- ▁HOLE
- ▁BOTTOM
- ▁NOBODY
- ADE
- ▁LEGS
- ITCH
- ▁INDIVIDUAL
- ▁EARS
- LIKE
- ▁ADVANTAGE
- ▁FRANCE
- ▁BON
- ▁WINE
- ▁LIVES
- OD
- ▁WALLS
- ▁TIRED
- ▁SHOP
- ▁ANIMAL
- ▁CRU
- ▁WROTE
- ▁ROYAL
- ▁CONSIDERED
- ▁MORAL
- ▁COMPANION
- ▁LOSE
- ▁ISN
- ▁BAG
- ▁LAKE
- ▁INTER
- ▁COM
- ▁LETTERS
- ▁LUCK
- ▁EAR
- ▁GERMAN
- ▁PET
- ▁SAKE
- ▁DROP
- ▁PAID
- ▁BREAKFAST
- ▁LABOR
- ▁DESERT
- ▁DECLARED
- ▁HUM
- ▁STUDY
- ▁INSTANCE
- ONE
- ▁SOMEWHAT
- ▁CLOTH
- ▁SPECIAL
- ▁COLONEL
- ▁SONG
- ▁MAIN
- ▁VALUE
- ▁PROUD
- ▁EXPRESS
- ▁NATION
- ▁HANDSOME
- ▁CONFESS
- ▁PU
- ▁PASSAGE
- ▁PERIOD
- ▁CUSTOM
- ▁HURT
- ▁SHOULDER
- ▁CHRIST
- ZA
- ▁RECEIVE
- ▁DIFFICULT
- ▁DEPEND
- ▁MEETING
- ▁CHI
- ▁GEN
- LIGHT
- ▁BELIEVED
- ▁SOCIAL
- ▁DIFFICULTY
- ▁GREATEST
- ▁DRAWN
- ▁GRANT
- ▁BIRDS
- ▁ANGRY
- ▁HEAT
- UFF
- ▁DUE
- ▁PLACES
- ▁SIN
- ▁COURAGE
- ▁EVIDENTLY
- ▁GENTLE
- ▁CRUEL
- ▁GEORGE
- ▁GRI
- ▁SERVANT
- ▁U
- ▁PURE
- OOK
- ▁KNOWS
- ▁KNOWING
- LF
- ▁WRITING
- ▁REMEMBERED
- ▁CU
- ▁HOLDING
- ▁TENDER
- ▁QUI
- ▁BURST
- ▁SURELY
- IGN
- ▁VALLEY
- ▁FU
- ▁BUTTER
- ▁SPOKEN
- ▁STORE
- ▁DISC
- ▁CHRISTIAN
- ▁PARIS
- ▁HENRY
- ▁FINISHED
- ▁PROVE
- ▁FOOL
- ▁SOLDIERS
- ▁LANGUAGE
- ▁INSIDE
- ▁BAN
- ▁FALLEN
- ROW
- ▁MAL
- ▁BABY
- ▁SITUATION
- ▁WATCHED
- ANS
- ▁RUIN
- ▁GENTLEMEN
- ▁FRO
- ▁FANCY
- ▁ACCEPT
- ▁SEASON
- ▁OURSELVES
- ▁SAN
- ▁SPEED
- IZED
- ▁COOL
- ▁SERVE
- ▁VESSEL
- ▁WILLIAM
- ▁OBLIGED
- ▁GROUP
- FORM
- ▁GOES
- UOUS
- ▁LEAVES
- ▁PECULIAR
- ▁NEWS
- ▁VAIN
- ▁EVERYBODY
- ▁PIN
- UG
- ▁FORGOTTEN
- ▁FRA
- GAN
- ▁CAREFULLY
- ▁FLASH
- UCH
- ▁FUR
- ▁MURDER
- ▁DELIGHT
- ▁WAITED
- ▁RENDER
- ▁PROPERTY
- ▁NOTICED
- ▁ROLL
- ▁KNOCK
- ▁EARNEST
- KI
- ▁HONEST
- ▁PROMISED
- ▁BAL
- AW
- ▁WALKING
- ANG
- ▁SQUARE
- ▁QUIETLY
- ▁CLOUD
- WOOD
- ▁FORMED
- ▁HIGHER
- ▁BUILT
- ▁FATE
- ▁TEACH
- MY
- ▁FALSE
- ▁YORK
- ▁DUST
- ▁CLIMB
- ▁FOND
- ▁GROWN
- ▁DESCEND
- ▁RAG
- ▁FRUIT
- ▁GENERALLY
- ▁OFFERED
- ▁ER
- ▁NURSE
- POSE
- ▁SPENT
- ▁JOIN
- ▁STATION
- ▁MEANING
- ▁SMOKE
- HOOD
- ▁ROUGH
- JU
- ▁LIKELY
- ▁SURFACE
- ▁KE
- ▁MONTH
- ▁POSSESSION
- ▁TONGUE
- ▁DUKE
- ▁NOSE
- ▁LAUGHING
- ▁WEATHER
- ▁WHISPERED
- ▁SYSTEM
- ▁LAWS
- DDLE
- ▁TOUCHED
- ▁TRADE
- LD
- ▁SURPRISED
- RIN
- ▁ARCH
- ▁WEALTH
- FOR
- ▁TEMPER
- ▁FRANK
- ▁GAL
- ▁BARE
- ▁OPPORTUNITY
- ▁CLAIM
- ▁ANIMALS
- ▁REV
- ▁COST
- ▁WASH
- ZE
- ▁CORN
- ▁OPPOSITE
- ▁POLICE
- ▁IDEAS
- LON
- ▁KEY
- ▁READING
- ▁COLLECT
- CHED
- ▁H
- ▁CROWN
- ▁TAR
- ▁SWIFT
- ▁SHOULDERS
- ▁ICE
- ▁GRAY
- ▁SHARE
- ▁PREPARED
- ▁GRO
- ▁UND
- ▁TER
- ▁EMPTY
- CING
- ▁SMILING
- ▁AVOID
- ▁DIFFERENCE
- ▁EXPLAIN
- ▁POUR
- ▁ATTRACT
- ▁OPENING
- ▁WHEEL
- ▁MATERIAL
- ▁BREAST
- ▁SUFFERING
- ▁DISTINCT
- ▁BOOT
- ▁ROW
- ▁FINGERS
- HAN
- ▁ALTOGETHER
- ▁FAT
- ▁PAPA
- ▁BRAIN
- ▁ASLEEP
- ▁GREY
- ▁SUM
- ▁GAS
- ▁WINDOWS
- ▁ALIVE
- ▁PROCEED
- ▁FLOWER
- ▁LEAP
- ▁PUR
- ▁PIECES
- ▁ALTER
- ▁MEMORY
- IENT
- ▁FILL
- ▁CLO
- ▁THROWN
- ▁KINGDOM
- ▁RODE
- IUS
- ▁MAID
- ▁DIM
- ▁BAND
- ▁VIRTUE
- ▁DISH
- ▁GUEST
- ▁LOSS
- ▁CAUSED
- ▁MOTION
- ▁POT
- ▁MILLION
- ▁FAULT
- ▁LOVELY
- ▁HERO
- PPING
- ▁UNITED
- ▁SPI
- SOME
- BRA
- ▁MOUNTAINS
- ▁NU
- ▁SATISFIED
- ▁DOLLARS
- ▁LOVER
- ▁CONCEAL
- ▁VAST
- ▁PULL
- ▁HATH
- ▁RUSH
- ▁J
- ▁DESPAIR
- EX
- ▁HEIGHT
- ▁CE
- ▁BENT
- ▁PITY
- ▁RISING
- ATH
- ▁PRIDE
- ▁HURRY
- KA
- ▁SETTLED
- ▁JUSTICE
- ▁LIFTED
- PEN
- ▁SOLDIER
- ▁FINDING
- ▁REMARK
- ▁REGULAR
- ▁STRUGGLE
- ▁MACHINE
- ▁SING
- ▁HURRIED
- ▁SUFFICIENT
- ▁REPRESENT
- ▁DOUBLE
- ▁ALARM
- ▁SUPPER
- ▁DREADFUL
- ▁FORE
- ATOR
- ▁STOCK
- ▁TIN
- ▁EXAMPLE
- ▁ROOF
- ▁FLOW
- ▁SUPPOSED
- ▁PRESERV
- ▁L
- ▁LISTENED
- OC
- ▁STO
- ▁SECURE
- ▁FRIGHTENED
- ▁DISTURB
- ▁EMOTION
- ▁SERVANTS
- ▁YO
- ▁BUY
- ▁FORCED
- ▁KITCHEN
- ▁TERROR
- ▁STAIRS
- ▁SIXTY
- KER
- ▁ORDINARY
- ▁DIRECTLY
- ▁HEADS
- ▁METHOD
- ▁FORGIVE
- ▁AWFUL
- ▁REFLECT
- ▁GREATLY
- ▁TALKED
- ▁RIDE
- STONE
- ▁FAVOUR
- ▁WELCOME
- ▁SEIZED
- OU
- ▁CONTROL
- ▁ORDERED
- ▁ANGEL
- ▁USUALLY
- ▁POET
- ▁BOLD
- LINE
- ▁ADVENTURE
- ▁WATCHING
- ▁FOLK
- ▁MISTRESS
- IZE
- ▁GROWING
- ▁CAVE
- ▁EVIDENCE
- ▁FINGER
- ▁SEVENTEEN
- ▁MOVING
- EOUS
- ▁DOESN
- ▁COW
- ▁TYPE
- ▁BOIL
- ▁TALE
- ▁DELIVER
- ▁FARM
- ▁MONSIEUR
- ▁GATHERED
- ▁FEELINGS
- ▁RATE
- ▁REMARKED
- ▁PUTTING
- ▁MAT
- ▁CONTRARY
- ▁CRIME
- ▁PLA
- ▁COL
- ▁NEARER
- TES
- ▁CIVIL
- ▁SHAME
- ▁LOOSE
- ▁DISCOVER
- ▁FLAT
- ▁TWICE
- ▁FAIL
- VIS
- ▁UNC
- EA
- ▁EUROPE
- ▁PATIENT
- ▁UNTO
- ▁SUFFER
- ▁PAIR
- ▁TREASURE
- OSE
- ▁EAGER
- ▁FLY
- ▁N
- ▁VAL
- ▁DAN
- ▁SALT
- ▁BORE
- BBE
- ▁ARTHUR
- ▁AFFAIRS
- ▁SLOW
- ▁CONSIST
- ▁DEVIL
- LAN
- ▁AFFECTION
- ▁ENGAGED
- ▁KISS
- ▁YA
- ▁OFFICER
- IFICATION
- ▁LAMP
- ▁PARTS
- HEN
- ▁MILK
- ▁PROCESS
- ▁GIFT
- ▁PULLED
- ▁HID
- ▁RAY
- ▁EXCELLENT
- ▁IMPRESSION
- ▁AUTHORITY
- ▁PROVED
- ▁TELLING
- TTE
- ▁TOWER
- ▁CONSEQUENCE
- ▁FAVOR
- ▁FLEW
- ▁CHARLES
- ISTS
- ▁ADDRESS
- ▁FAMILIAR
- ▁LIMIT
- ▁CONFIDENCE
- ▁RARE
- ▁WEEKS
- ▁WOODS
- ▁INTENTION
- ▁DIRECT
- ▁PERFORM
- ▁SOLEMN
- ▁DISTANT
- ▁IMAGE
- ▁PRESIDENT
- ▁FIRM
- ▁INDIAN
- ▁RANK
- ▁LIKED
- ▁AGREE
- ▁HOUSES
- ▁WIL
- ▁MATTERS
- ▁PRISON
- ▁MODE
- ▁MAJOR
- ▁WORKING
- ▁SLIP
- ▁WEIGHT
- ▁AWARE
- ▁BUSY
- ▁LOOKS
- ▁WOUND
- ▁THOR
- ▁BATH
- ▁EXERCISE
- ▁SIMILAR
- ▁WORE
- ▁AMOUNT
- ▁QUESTIONS
- ▁VIOLENT
- ▁EXCUSE
- ▁ASIDE
- ▁TUR
- ▁DULL
- OF
- ▁EMPEROR
- ▁NEVERTHELESS
- ▁SHOUT
- ▁EXPLAINED
- ▁SIZE
- ▁ACCOMPLISH
- FORD
- CAN
- ▁MISTAKE
- ▁INSTANTLY
- ▁SMOOTH
- ▁STRIKE
- ▁BOB
- ISED
- ▁HORROR
- ▁SCIENCE
- ▁PROTEST
- ▁MANAGE
- ▁OBEY
- ▁NECESSITY
- ▁SPLENDID
- ▁PRESS
- ▁INTERESTING
- ▁RELIGION
- ▁UNKNOWN
- ▁FIERCE
- ▁DISAPPEARED
- ▁HOLY
- ▁HATE
- ▁PLAYED
- ▁LIN
- ▁NATURALLY
- ▁DROVE
- ▁LOUIS
- TIES
- ▁BRAND
- INESS
- RIE
- ▁SHOOT
- ▁CONSENT
- ▁SEATED
- ▁LINES
- GUE
- ▁AGREED
- ▁CIRCLE
- ▁STIR
- ▁STREETS
- ▁TASK
- ▁RID
- ▁PRODUCED
- ▁ACCIDENT
- ▁WITNESS
- ▁LIBERTY
- ▁DETAIL
- ▁MINISTER
- ▁POWERFUL
- ▁SAVAGE
- ▁SIXTEEN
- ▁PRETEND
- ▁COAST
- ▁SQU
- ▁UTTER
- ▁NAMED
- ▁CLEVER
- ▁ADMIT
- ▁COUPLE
- ▁WICKED
- ▁MESSAGE
- ▁TEMPLE
- ▁STONES
- ▁YESTERDAY
- ▁HILLS
- DAY
- ▁SLIGHT
- ▁DIAMOND
- ▁POSSIBLY
- ▁AFFAIR
- ▁ORIGINAL
- ▁HEARING
- ▁WORTHY
- ▁SELL
- NEY
- ICK
- ▁COTTAGE
- ▁SACRIFICE
- ▁PROGRESS
- ▁SHOCK
- ▁DESIGN
- ▁SOUGHT
- ▁PIT
- ▁SUNDAY
- ▁OTHERWISE
- ▁CABIN
- ▁PRAYER
- ▁DWELL
- ▁GAIN
- ▁BRIDGE
- ▁PARTICULARLY
- ▁YIELD
- ▁TREAT
- RIGHT
- ▁OAK
- ▁ROPE
- WIN
- ▁ORDERS
- ▁SUSPECT
- ▁EDWARD
- AB
- ▁ELEVEN
- ▁TEETH
- ▁OCCURRED
- DDING
- ▁AMERICA
- ▁FALLING
- ▁LION
- ▁DEPART
- ▁KEEPING
- ▁DEMAND
- ▁PAUSED
- ▁CEASED
- INA
- ▁FUN
- ▁CHEER
- ▁PARDON
- ▁NATIVE
- LUS
- LOW
- ▁DOGS
- ▁REQUIRED
- ILITY
- ▁ELECT
- ▁ENTERTAIN
- ITUDE
- ▁HUGE
- ▁CARRYING
- ▁BLU
- ▁INSIST
- ▁SATISFACTION
- ▁HUNT
- ▁COUNTENANCE
- ▁UPPER
- ▁MAIDEN
- ▁FAILED
- ▁JAMES
- ▁FOREIGN
- ▁GATHER
- ▁TEST
- BOARD
- ▁TERMS
- ▁SILK
- ▁BEG
- ▁BROTHERS
- ▁PAGE
- ▁KNEES
- ▁SHOWN
- ▁PROFESSOR
- ▁MIGHTY
- ▁DEFI
- ▁CHARM
- ▁REQUIRE
- ▁LOG
- MORE
- ▁PROOF
- ▁POSSESSED
- ▁SOFTLY
- ▁UNFORTUNATE
- ▁PRICE
- ▁SEVERE
- ▁SINGING
- ▁STAGE
- ▁FREEDOM
- ▁SHOUTED
- ▁FARTHER
- ▁MAJESTY
- ▁PREVIOUS
- ▁GUIDE
- ▁MATCH
- ▁CHEST
- ▁INTENDED
- ▁BI
- ▁EXCITEMENT
- ▁OFFICERS
- ▁SUR
- ▁SHAKE
- ▁SENTIMENT
- ▁GENTLY
- ▁SUCCEEDED
- ▁MENTION
- ▁LOCK
- ▁ACQUAINTANCE
- ▁IMAGINATION
- ▁PHYSICAL
- ▁LEADING
- ▁SLAVE
- ▁CART
- ▁POINTED
- ▁STEAM
- ▁SHADE
- ▁PIPE
- ▁BASE
- ▁INVENT
- ▁ALAS
- ▁WORKED
- ▁REGRET
- ▁BUR
- ▁FAITHFUL
- ▁MENTIONED
- ▁RECORD
- ▁COMPLAIN
- ▁SUPERIOR
- ▁BAY
- ▁PAL
- EMENT
- UE
- ▁SEVENTY
- ▁HOTEL
- ▁SHEEP
- ▁MEAL
- ▁ADVICE
- ▁HIDDEN
- ▁DEMANDED
- ▁CONSCIOUS
- ▁BROW
- ▁POSSESS
- ▁FOURTH
- ▁EVENTS
- ▁FRI
- ▁PRAISE
- ▁ADVANCED
- ▁RESOLVED
- ▁STUFF
- ▁CHEERFUL
- ▁BIRTH
- ▁GRIEF
- ▁AFFORD
- ▁FAIRY
- ▁WAKE
- ▁SIDES
- ▁SUBSTANCE
- ▁ARTICLE
- ▁LEVEL
- ▁MIST
- ▁JOINED
- ▁PRACTICAL
- ▁CLEARLY
- ▁TRACE
- ▁AWAKE
- ▁OBSERVE
- ▁BASKET
- ▁LACK
- VILLE
- ▁SPIRITS
- ▁EXCITED
- ▁ABANDON
- ▁SHINING
- ▁FULLY
- ▁CALLING
- ▁CONSIDERABLE
- ▁SPRANG
- ▁MILE
- ▁DOZEN
- ▁PEA
- ▁DANGEROUS
- ▁WIT
- ▁JEW
- ▁POUNDS
- ▁FOX
- ▁INFORMATION
- ▁LIES
- ▁DECK
- NNY
- ▁PAUL
- ▁STARS
- ▁ANGER
- ▁SETTLE
- ▁WILLING
- ▁ADAM
- ▁FACES
- ▁SMITH
- ▁IMPORTANCE
- ▁STRAIN
- WAR
- ▁SAM
- ▁FEATHER
- ▁SERVED
- ▁AUTHOR
- ▁PERCEIVED
- ▁FLAME
- ▁DIVINE
- ▁TRAIL
- ▁ANYBODY
- ▁SIGH
- ▁DELICATE
- KY
- ▁FOLD
- ▁HAVEN
- ▁DESIRED
- ▁CURIOSITY
- ▁PRACTICE
- ▁CONSIDERATION
- ▁ABSOLUTELY
- ▁CITIZEN
- ▁BOTTLE
- ▁INTERESTED
- ▁MEAT
- ▁OCCUPIED
- ▁CHOOSE
- ▁THROAT
- ETTE
- ▁CANDLE
- ▁DAWN
- ▁PROTECT
- ▁SENTENCE
- IED
- ▁ROCKS
- ▁PORTION
- ▁APPARENTLY
- ▁PRESENTED
- ▁TIGHT
- ▁ACTUALLY
- ▁DYING
- ▁HAM
- ▁DAILY
- ▁SUFFERED
- ▁POLITICAL
- ▁BODIES
- ▁MODERN
- ▁COMPLETELY
- ▁SOONER
- TAN
- ▁PROP
- ▁ADVANCE
- ▁REFUSED
- ▁FARMER
- ▁POLITE
- ▁THUNDER
- ▁BRIEF
- ▁ELSIE
- ▁SAILOR
- ▁SUGGESTED
- ▁PLATE
- ▁AID
- ▁FLESH
- ▁WEEP
- ▁BUCK
- ▁ANTI
- ▁OCEAN
- ▁SPEND
- WELL
- ▁ODD
- ▁GOVERNOR
- ▁ENTRANCE
- ▁SUSPICION
- ▁STEPPED
- ▁RAPIDLY
- ▁CHECK
- ▁HIDE
- ▁FLIGHT
- ▁CLUB
- ▁ENTIRE
- ▁INDIANS
- ASH
- ▁CAPITAL
- ▁MAMMA
- HAR
- ▁CORRECT
- ▁CRACK
- ▁SENSATION
- ▁WORST
- ▁PACE
- ▁MIDST
- ▁AUGUST
- ▁PROPORTION
- ▁INNOCENT
- LINESS
- ▁REGARDED
- ▁DRIVEN
- ORD
- ▁HASTE
- ▁EDUCATION
- ▁EMPLOY
- ▁TRULY
- ▁INSTRUMENT
- ▁MAG
- ▁FRAME
- ▁FOOLISH
- ▁TAUGHT
- ▁HANG
- ▁ARGUMENT
- ▁NINETEEN
- ▁ELDER
- ▁NAY
- ▁NEEDED
- ▁NEIGHBOR
- ▁INSTRUCT
- ▁PAPERS
- ▁REWARD
- ▁EQUALLY
- ▁FIELDS
- ▁DIG
- HIN
- ▁CONDITIONS
- JA
- ▁SPAR
- ▁REQUEST
- ▁WORN
- ▁REMARKABLE
- ▁LOAD
- ▁WORSHIP
- ▁PARK
- ▁KI
- ▁INTERRUPTED
- ▁SKILL
- ▁TERM
- LAC
- ▁CRITIC
- ▁DISTRESS
- ▁BELIEF
- ▁STERN
- IGHT
- ▁TRACK
- ▁HUNTING
- ▁JEWEL
- ▁GRADUALLY
- ▁GLOW
- ▁RUSHED
- ▁MENTAL
- ▁VISITOR
- ▁PICKED
- ▁BEHOLD
- ▁EXPRESSED
- ▁RUB
- ▁SKI
- ARTAGNAN
- ▁MOREOVER
- ▁OPERATION
- ▁CAREFUL
- ▁KEEN
- ▁ASSERT
- ▁WANDER
- ▁ENEMIES
- ▁MYSTERIOUS
- ▁DEPTH
- ▁PREFER
- ▁CROSSED
- ▁CHARMING
- ▁DREAD
- ▁FLOUR
- ▁ROBIN
- ▁TRE
- ▁RELIEF
- ▁INQUIRED
- ▁APPLE
- ▁HENCE
- ▁WINGS
- ▁CHOICE
- ▁JUD
- OO
- ▁SPECIES
- ▁DELIGHTED
- IUM
- ▁RAPID
- ▁APPEAL
- ▁FAMOUS
- ▁USEFUL
- ▁HELEN
- ▁NEWSPAPER
- ▁PLENTY
- ▁BEARING
- ▁NERVOUS
- ▁PARA
- ▁URGE
- ▁ROAR
- ▁WOUNDED
- ▁CHAIN
- ▁PRODUCE
- ▁REFLECTION
- ▁MERCHANT
- ▁QUARREL
- ▁GLORY
- ▁BEGUN
- ▁BARON
- CUS
- ▁QUEER
- ▁MIX
- ▁GAZE
- ▁WHISPER
- ▁BURIED
- ▁DIV
- ▁CARD
- ▁FREQUENTLY
- ▁TIP
- ▁KNEE
- ▁REGION
- ▁ROOT
- ▁LEST
- ▁JEALOUS
- CTOR
- ▁SAVED
- ▁ASKING
- ▁TRIP
- QUA
- ▁UNION
- HY
- ▁COMPANIONS
- ▁SHIPS
- ▁HALE
- ▁APPROACHED
- ▁HARRY
- ▁DRUNK
- ▁ARRIVAL
- ▁SLEPT
- ▁FURNISH
- HEAD
- ▁PIG
- ▁ABSENCE
- ▁PHIL
- ▁HEAP
- ▁SHOES
- ▁CONSCIOUSNESS
- ▁KINDLY
- ▁EVIDENT
- ▁SCAR
- ▁DETERMIN
- ▁GRASP
- ▁STEAL
- ▁OWE
- ▁KNIFE
- ▁PRECIOUS
- ▁ELEMENT
- ▁PROCEEDED
- ▁FEVER
- ▁LEADER
- ▁RISK
- ▁EASE
- ▁GRIM
- ▁MOUNT
- ▁MEANWHILE
- ▁CENTURY
- OON
- ▁JUDGMENT
- ▁AROSE
- ▁VISION
- ▁SPARE
- ▁EXTREME
- ▁CONSTANT
- ▁OBSERVATION
- ▁THRUST
- ▁DELAY
- ▁CENT
- ▁INCLUD
- ▁LIFT
- ▁ADMIRE
- ▁ISSUE
- ▁FRIENDSHIP
- ▁LESSON
- ▁PRINCIPAL
- ▁MOURN
- ▁ACCEPTED
- ▁BURNING
- ▁CAPABLE
- ▁EXTRAORDINARY
- ▁SANG
- ▁REMOVED
- ▁HOPED
- ▁HORN
- ▁ALICE
- ▁MUD
- ▁APARTMENT
- ▁FIGHTING
- ▁BLAME
- ▁TREMBLING
- ▁SOMEBODY
- ▁ANYONE
- ▁BRIDE
- ▁READER
- ▁ROB
- ▁EVERYWHERE
- ▁LABOUR
- ▁RECALL
- ▁BULL
- ▁HIT
- ▁COUNCIL
- ▁POPULAR
- ▁CHAP
- ▁TRIAL
- ▁DUN
- ▁WISHES
- ▁BRILLIANT
- ▁ASSURED
- ▁FORGOT
- ▁CONTINUE
- ▁ACKNOWLEDG
- ▁RETREAT
- ▁INCREASED
- ▁CONTEMPT
- ▁GRANDFATHER
- ▁SYMPATHY
- ▁GHOST
- ▁STRETCHED
- ▁CREATURES
- ▁CAB
- ▁HIND
- ▁PLAYING
- ▁MISERABLE
- ▁MEMBERS
- ▁KINDNESS
- ▁HIGHEST
- ▁PRIM
- ▁KISSED
- ▁DESERVE
- ▁HUT
- ▁BEGGED
- ▁EIGHTY
- ▁CLOSELY
- ▁WONDERED
- ▁MILITARY
- ▁REMIND
- ▁ACCORDINGLY
- ▁LARGER
- ▁MAINTAIN
- ▁ENGINE
- ▁MOTIVE
- ▁DESTROY
- ▁STRIP
- ▁HANS
- ▁AHEAD
- ▁INFINITE
- ▁PROMPT
- ▁INFORMED
- TTLE
- ▁PEER
- ▁PRESSED
- ▁TRAP
- ▁SOMEWHERE
- ▁BOUGHT
- ▁VISIBLE
- ▁ASHAMED
- ▁TEAR
- ▁NEIGHBOUR
- ▁CONSTITUTION
- ▁INTELLIGENCE
- ▁PROFESSION
- ▁HUNGRY
- RIDGE
- ▁SMELL
- ▁STORIES
- ▁LISTENING
- ▁APPROACH
- ▁STRING
- ▁EXPLANATION
- ▁IMMENSE
- ▁RELIGIOUS
- ▁THROUGHOUT
- ▁HOLLOW
- ▁AWAIT
- ▁FLYING
- ▁SCREAM
- ▁ACTIVE
- ▁RUM
- ▁PRODUCT
- ▁UNHAPPY
- ▁VAGUE
- ARIES
- ▁ELIZABETH
- ▁STUPID
- ▁DIGNITY
- ▁ISABEL
- GAR
- ▁BRO
- ▁PITCH
- ▁COMRADE
- ▁STIFF
- ▁RECKON
- ▁SOLD
- ▁SPARK
- ▁STRO
- ▁CRYING
- ▁MAGIC
- ▁REPEAT
- PORT
- ▁MARKED
- ▁COMFORTABLE
- ▁PROJECT
- ▁BECOMING
- ▁PARENTS
- ▁SHELTER
- ▁STOLE
- ▁HINT
- ▁NEST
- ▁TRICK
- ▁THOROUGHLY
- ▁HOSPITAL
- ▁WEAPON
- ▁ROME
- ▁STYLE
- ▁ADMITTED
- ▁SAFETY
- FIELD
- ▁UNDERSTANDING
- ▁TREMBLE
- ▁PRINT
- ▁SLAVES
- ▁WEARY
- ▁ARTIST
- ▁CREDIT
- BURG
- ▁CONCLUSION
- ▁SELDOM
- ▁UNUSUAL
- ▁CLOUDS
- ▁UNABLE
- ▁GAY
- ▁HANGING
- ▁SCR
- ▁BOWED
- ▁DAVID
- ▁VOL
- ▁PUSHED
- ▁ESCAPED
- MOND
- ▁WARN
- ▁BETRAY
- ▁EGGS
- ▁PLAINLY
- ▁EXHIBIT
- ▁DISPLAY
- ▁MEMBER
- ▁GRIN
- ▁PROSPECT
- ▁BRUSH
- ▁BID
- ▁SUCCESSFUL
- ▁EXTENT
- ▁PERSUADE
- ▁MID
- ▁MOOD
- ▁ARRANGED
- ▁UNIVERSAL
- ▁JIM
- ▁SIGNAL
- ▁WHILST
- ▁PHILIP
- ▁WOLF
- RATE
- ▁EAGERLY
- ▁BILLY
- ▁RETURNING
- ▁CONSCIENCE
- ▁FORTUNATE
- ▁FEMALE
- ▁GLEAM
- ▁HASTILY
- ▁PROVIDED
- ▁OBTAIN
- ▁INSTINCT
- ▁CONCERNED
- ▁CONCERNING
- ▁SOMEHOW
- ▁PINK
- ▁RAGE
- ▁ACCUSTOMED
- ▁UNCONSCIOUS
- ▁ADVISE
- ▁BRANCHES
- ▁TINY
- ▁REFUSE
- ▁BISHOP
- ▁SUPPLY
- ▁PEASANT
- ▁LAWYER
- ▁WASTE
- ▁CONNECTION
- ▁DEVELOP
- ▁CORRESPOND
- ▁PLUM
- ▁NODDED
- ▁SLIPPED
- ▁EU
- ▁CONSTANTLY
- CUM
- MMED
- ▁FAIRLY
- HOUSE
- ▁KIT
- ▁RANG
- ▁FEATURES
- ▁PAUSE
- ▁PAINFUL
- ▁JOE
- ▁WHENCE
- ▁LAUGHTER
- ▁COACH
- ▁CHRISTMAS
- ▁EATING
- ▁WHOLLY
- ▁APART
- ▁SUPER
- ▁REVOLUTION
- ▁LONELY
- ▁CHEEKS
- ▁THRONE
- ▁CREW
- ▁ATTAIN
- ▁ESTABLISHED
- TIME
- ▁DASH
- ▁FRIENDLY
- ▁OPERA
- ▁EARL
- ▁EXHAUST
- ▁CLIFF
- ▁REVEAL
- ▁ADOPT
- ▁CENTRE
- ▁MERRY
- ▁SYLVIA
- ▁IDEAL
- ▁MISFORTUNE
- ▁FEAST
- ▁ARAB
- ▁NUT
- ▁FETCH
- ▁FOUGHT
- ▁PILE
- ▁SETTING
- ▁SOURCE
- ▁PERSIST
- ▁MERCY
- ▁BARK
- ▁LUC
- ▁DEEPLY
- ▁COMPARE
- ▁ATTITUDE
- ▁ENDURE
- ▁DELIGHTFUL
- ▁BEARD
- ▁PATIENCE
- ▁LOCAL
- ▁UTTERED
- ▁VICTORY
- ▁TREATED
- ▁SEPARATE
- ▁WAG
- ▁DRAGG
- ▁TITLE
- ▁TROOPS
- ▁TRIUMPH
- ▁REAR
- ▁GAINED
- ▁SINK
- ▁DEFEND
- ▁TIED
- ▁FLED
- ▁DARED
- ▁INCREASE
- ▁POND
- ▁CONQUER
- ▁FOREHEAD
- ▁FAN
- ▁ANXIETY
- ▁ENCOUNTER
- ▁SEX
- ▁HALT
- ▁SANK
- ▁CHEEK
- ▁HUMBLE
- ▁WRITER
- ▁EMPLOYED
- ▁DISTINGUISHED
- ▁RAISE
- ▁WHIP
- ▁GIANT
- ▁RANGE
- ▁OBTAINED
- ▁FLAG
- ▁MAC
- ▁JUMPED
- ▁DISCOVERY
- ▁NATIONAL
- ▁COMMISSION
- ▁POSITIVE
- ▁LOVING
- ▁EXACT
- ▁MURMURED
- ▁GAZED
- ▁REFER
- ▁COLLEGE
- ▁ENCOURAGE
- ▁NOVEL
- ▁CLOCK
- ▁MORTAL
- ▁ROLLED
- ▁RAT
- IZING
- ▁GUILTY
- ▁VICTOR
- WORTH
- ▁PRA
- ▁APPROACHING
- ▁RELATIVE
- ▁ESTATE
- ▁UGLY
- ▁METAL
- ▁ROBERT
- ▁TENT
- ▁ADMIRATION
- ▁FOURTEEN
- ▁BARBAR
- ▁WITCH
- ELLA
- ▁CAKE
- ▁SHONE
- ▁MANAGED
- ▁VOLUME
- ▁GREEK
- ▁DANCING
- ▁WRETCHED
- ▁CONDEMN
- ▁MAGNIFICENT
- ▁CONSULT
- J
- ▁ORGAN
- ▁FLEET
- ▁ARRANGEMENT
- ▁INCIDENT
- ▁MISERY
- ▁ARROW
- ▁STROKE
- ▁ASSIST
- ▁BUILD
- ▁SUCCEED
- ▁DESPERATE
- ▁WIDOW
- UDE
- ▁MARKET
- ▁WISDOM
- ▁PRECISE
- ▁CURRENT
- ▁SPOIL
- ▁BADE
- ▁WOODEN
- ▁RESIST
- ▁OBVIOUS
- ▁SENSIBLE
- FALL
- ▁ADDRESSED
- ▁GIL
- ▁COUNSEL
- ▁PURCHASE
- ▁SELECT
- ▁USELESS
- ▁STARED
- ▁ARREST
- ▁POISON
- ▁FIN
- ▁SWALLOW
- ▁BLOCK
- ▁SLID
- ▁NINETY
- ▁SPORT
- ▁PROVIDE
- ▁ANNA
- ▁LAMB
- ▁INTERVAL
- ▁JUMP
- ▁DESCRIBED
- ▁STRIKING
- ▁PROVISION
- ▁PROPOSED
- ▁MELANCHOLY
- ▁WARRIOR
- ▁SUGGEST
- ▁DEPARTURE
- ▁BURDEN
- ▁LIMB
- ▁TROUBLED
- ▁MEADOW
- ▁SACRED
- ▁SOLID
- ▁TRU
- ▁LUCY
- ▁RECOVER
- ▁ENERGY
- ▁POWDER
- ▁RESUMED
- ▁INTENSE
- ▁BRITISH
- ▁STRAW
- ▁AGREEABLE
- ▁EVERYONE
- ▁CONCERN
- ▁VOYAGE
- ▁SOUTHERN
- ▁BOSOM
- ▁UTTERLY
- ▁FEED
- ▁ESSENTIAL
- ▁CONFINE
- ▁HOUSEHOLD
- ▁EXTREMELY
- ▁WONDERING
- ▁LIST
- ▁PINE
- PHA
- ▁EXPERIMENT
- ▁JOSEPH
- ▁MYSTERY
- ▁RESTORE
- ▁BLUSH
- FOLD
- ▁CHOSEN
- ▁INTELLECT
- ▁CURTAIN
- OLOGY
- ▁MOUNTED
- ▁LAP
- ▁EPI
- ▁PUNISH
- ▁WEDDING
- ▁RECOGNIZED
- ▁DRIFT
- ▁PREPARATION
- ▁RESOLUTION
- ▁OPPRESS
- ▁FIX
- ▁VICTIM
- OGRAPH
- ▁SUMMON
- ▁JULIA
- ▁FLOOD
- ▁WAL
- ULATION
- ▁SLIGHTLY
- ▁LODGE
- ▁WIRE
- ▁CONFUSION
- ▁UNEXPECTED
- ▁CONCEIVE
- ▁PRIZE
- ▁JESUS
- ▁ADDITION
- ▁RUDE
- ▁FATAL
- ▁CARELESS
- ▁PATCH
- ▁KO
- ▁CATHERINE
- ▁PARLIAMENT
- ▁PROFOUND
- ▁ALOUD
- ▁RELIEVE
- ▁PUSH
- ABILITY
- ▁ACCOMPANIED
- ▁SOVEREIGN
- ▁SINGULAR
- ▁ECHO
- ▁COMPOSED
- ▁SHAKING
- ATORY
- ▁ASSISTANCE
- ▁TEACHER
- ▁HORRIBLE
- ▁STRICT
- ▁VERSE
- ▁PUNISHMENT
- ▁GOWN
- ▁MISTAKEN
- ▁VARI
- ▁SWEPT
- ▁GESTURE
- ▁BUSH
- ▁STEEL
- ▁AFFECTED
- ▁DIRECTED
- ▁SURROUNDED
- ▁ABSURD
- ▁SUGAR
- ▁SCRAP
- ▁IMMEDIATE
- ▁SADDLE
- ▁TY
- ▁ARISE
- ▁SIGHED
- ▁EXCHANGE
- ▁IMPATIENT
- ▁SNAP
- ▁EMBRACE
- ▁DISEASE
- ▁PROFIT
- ▁RIDING
- ▁RECOVERED
- ▁GOVERN
- ▁STRETCH
- ▁CONVINCED
- ▁LEANING
- ▁DOMESTIC
- ▁COMPLEX
- ▁MANIFEST
- ▁INDULGE
- ▁GENIUS
- ▁AGENT
- ▁VEIL
- ▁DESCRIPTION
- ▁INCLINED
- ▁DECEIVE
- ▁DARLING
- ▁REIGN
- HU
- ▁ENORMOUS
- ▁RESTRAIN
- ▁DUTIES
- BURY
- TTERED
- ▁POLE
- ▁ENABLE
- ▁EXCEPTION
- ▁INTIMATE
- ▁COUNTESS
- ▁TRIBE
- ▁HANDKERCHIEF
- ▁MIDNIGHT
- ▁PROBLEM
- ▁TRAMP
- ▁OIL
- CAST
- ▁CRUSH
- ▁DISCUSS
- ▁RAM
- ▁TROT
- ▁UNRE
- ▁WHIRL
- ▁LOCKED
- ▁HORIZON
- ▁OFFICIAL
- ▁SCHEME
- ▁DROWN
- ▁PIERRE
- ▁PERMITTED
- ▁CONNECTED
- ▁ASSURE
- ▁COCK
- ▁UTMOST
- ▁DEVOTED
- ▁RELI
- ▁SUFFICIENTLY
- ▁INTELLECTUAL
- ▁CARPET
- ▁OBJECTION
- ▁AFTERWARD
- ▁REALITY
- ▁NEGRO
- ▁RETAIN
- ▁ASCEND
- ▁CEASE
- ▁KATE
- ▁MARVEL
- KO
- ▁BOND
- MOST
- ▁COAL
- GATE
- ▁IGNORANT
- ▁BREAKING
- ▁TWIN
- ▁ASTONISHMENT
- ▁COFFEE
- ▁JAR
- ▁CITIES
- ▁ORIGIN
- ▁EXECUT
- ▁FINAL
- ▁INHABITANTS
- ▁STABLE
- ▁CHIN
- ▁PARTIES
- ▁PLUNGE
- ▁GENEROUS
- ▁DESCRIBE
- ▁ANNOUNCED
- ▁MERIT
- ▁REVERE
- ▁ERE
- ACIOUS
- ZI
- ▁DISAPPOINT
- ▁SUGGESTION
- ▁DOUBTLESS
- ▁TRUNK
- ▁STAMP
- ▁JOB
- ▁APPOINTED
- ▁DIVIDED
- ▁ACQUAINTED
- CHI
- ▁ABSOLUTE
- ▁FEARFUL
- ▁PRIVILEGE
- ▁CRAFT
- ▁STEEP
- ▁HUNTER
- ▁FORBID
- ▁MODEST
- ▁ENDEAVOUR
- ▁SWEEP
- ▁BEHELD
- ▁ABSORB
- ▁CONSTRUCT
- ▁EMPIRE
- ▁EXPEDITION
- ▁ERECT
- ▁OFFEND
- ▁INTEND
- ▁PERMIT
- ▁DESTROYED
- ▁CONTRACT
- ▁THIRST
- ▁WAGON
- ▁EVA
- ▁GLOOM
- ▁ATMOSPHERE
- ▁RESERVE
- ▁VOTE
- ▁GER
- ▁NONSENSE
- ▁PREVAIL
- ▁QUALITY
- ▁CLASP
- ▁CONCLUDED
- ▁RAP
- ▁KATY
- ▁ETERNAL
- ▁MUTTERED
- ▁NEGLECT
- ▁SQUIRE
- ▁CREEP
- LOCK
- ▁ELECTRIC
- ▁HAY
- ▁EXPENSE
- ▁SCORN
- ▁RETIRED
- ▁STOUT
- ▁MURMUR
- ▁SHARPLY
- ▁DISTRICT
- ▁LEAF
- ▁FAILURE
- WICK
- ▁JEAN
- ▁NUMEROUS
- ▁INFANT
- ▁REALIZED
- ▁TRAVELLER
- ▁HUNGER
- ▁JUNE
- ▁MUN
- ▁RECOMMEND
- ▁CREP
- ZZLE
- ▁RICHARD
- WORK
- ▁MONTE
- ▁PREACH
- ▁PALM
- AVI
- ▁ANYWHERE
- ▁DISPOSITION
- ▁MIRROR
- ▁VENTURE
- ▁POUND
- ▁CIGAR
- ▁INVITED
- ▁BENCH
- ▁PROTECTION
- ▁BENEFIT
- ▁THOMAS
- ▁CLERK
- ▁REPROACH
- ▁UNIFORM
- ▁GENERATION
- ▁SEAL
- ▁COMPASS
- ▁WARNING
- ▁EXTENDED
- ▁DIFFICULTIES
- ▁MAYBE
- ▁GROAN
- ▁AFFECT
- ▁COMB
- ▁EARN
- ▁WESTERN
- ▁IDLE
- ▁SCORE
- ▁TAP
- ▁ASTONISHED
- ▁INTRODUCED
- ▁LEISURE
- ▁LIEUTENANT
- ▁VIOLENCE
- ▁FIRMLY
- ▁MONSTER
- ▁UR
- ▁PROPERLY
- ▁TWIST
- ▁PIRATE
- ▁ROBBER
- ▁BATTER
- ▁WEPT
- ▁LEANED
- ▁FOG
- ▁ORNAMENT
- ▁ANDREW
- ▁BUSHES
- ▁REPUBLIC
- ▁CONFIDENT
- ▁LEAN
- ▁DART
- ▁STOOP
- ▁CURL
- ▁COUNTER
- ▁NORTHERN
- ▁PEARL
- ▁NEAREST
- ▁FRANCIS
- ▁WANDERING
- ▁FREQUENT
- ▁STARTLED
- ▁STATEMENT
- ▁OCCUR
- ▁BLOOM
- ▁NERVE
- ▁INSPECT
- ▁INDUCE
- ▁FLATTER
- ▁DATE
- ▁AMBITION
- ▁SLOPE
- ▁MALE
- ▁MADAM
- ▁MONK
- ▁RENT
- ▁CONFIRM
- ▁INVESTIGAT
- ▁RABBIT
- ▁REGIMENT
- ▁SUBMIT
- ▁SPELL
- ▁FURIOUS
- ▁RAIL
- ▁BESTOW
- ▁RALPH
- ▁SCATTERED
- ▁COMPELLED
- ▁THREAD
- ▁CHILL
- ▁DENY
- ▁PRONOUNC
- ▁MANKIND
- ▁CATTLE
- ▁EXECUTION
- ▁REBEL
- ▁SUPREME
- ▁VALUABLE
- ▁LIKEWISE
- ▁CONVEY
- ▁TIDE
- ▁GLOOMY
- ▁COIN
- ▁ACTUAL
- ▁TAX
- ▁PROVINCE
- ▁GRATEFUL
- ▁SPIRITUAL
- ▁VANISHED
- ▁DIANA
- ▁HAUNT
- ▁DRAGON
- ▁CRAWL
- ▁CHINA
- ▁GRATITUDE
- ▁NEAT
- ▁FINISH
- ▁INTENT
- ▁FRIGHT
- ▁EMBARRASS
- ▁THIRTEEN
- ▁RUTH
- ▁SLIGHTEST
- ▁DEVELOPMENT
- ▁INTERVIEW
- ▁SPECTACLE
- ▁BROOK
- VIE
- ▁WEAKNESS
- ▁AUDIENCE
- ▁CONSEQUENTLY
- ▁ABROAD
- ▁ASPECT
- ▁PAINTED
- ▁RELEASE
- ▁INSULT
- ▁SOOTH
- ▁DISAPPOINTMENT
- ▁EMERG
- ▁BRIG
- ▁ESTEEM
- ▁INVITATION
- ▁PASSENGER
- ▁PUBLISH
- ▁PIANO
- ▁IRISH
- ▁DESK
- ▁BEATEN
- ▁FIFTH
- ▁IMPULSE
- ▁SWEAR
- ▁EATEN
- ▁PURPLE
- ▁COMMITTED
- ▁COUNTRIES
- ▁PERCEIVE
- ISON
- ▁CELEBRAT
- ▁GRANDMOTHER
- ▁SHUDDER
- ▁SUNSHINE
- ▁SPANISH
- ▁HITHERTO
- ▁MARILLA
- ▁SNAKE
- ▁MOCK
- ▁INTERFERE
- ▁WALTER
- ▁AMID
- ▁MARBLE
- ▁MISSION
- TERIOR
- ▁DRIVING
- ▁FURNITURE
- ▁STEADY
- ▁CIRCUMSTANCE
- ▁INTERPRET
- ▁ENCHANT
- ▁ERROR
- ▁CONVICTION
- ▁HELPLESS
- ▁MEDICINE
- ▁QUALITIES
- ▁ITALIAN
- ▁HASTENED
- ▁OCCASIONALLY
- ▁PURSUED
- ▁HESITATED
- ▁INDEPENDENT
- ▁OLIVER
- ▁LINGER
- UX
- ▁EXAMINED
- ▁REPENT
- ▁PHYSICIAN
- ▁CHASE
- ▁BELOVED
- ▁ATTACHED
- ▁FLORENCE
- ▁HONEY
- ▁MOUSE
- ▁CRIES
- ▁BAKE
- ▁POEM
- ▁DESTRUCTION
- ▁FULFIL
- ▁MESSENGER
- ▁TRISTRAM
- ▁FANCIED
- ▁EXCESS
- ▁CURSE
- ▁CHU
- ▁QUANTITY
- ▁THORNTON
- ▁CREATED
- ▁CONTINUALLY
- ▁LIGHTNING
- ▁BORNE
- ▁TOTAL
- ▁DISPOSED
- ▁RIFLE
- ▁POLLY
- ▁GOAT
- ▁BACKWARD
- ▁VIRGINIA
- ▁KICK
- ▁PERIL
- ▁QUO
- ▁GLORIOUS
- ▁MULTITUDE
- ▁LEATHER
- ▁ABSENT
- ▁DEMON
- ▁DEBT
- ▁TORTURE
- ▁ACCORD
- ▁MATE
- ▁CATHOLIC
- ▁PILL
- ▁LIBRARY
- ▁PURSUIT
- ▁SHIRT
- ▁DEAREST
- ▁COLLAR
- ▁BEACH
- ▁ROBE
- ▁DECLARE
- ▁BRANCH
- ▁TEMPT
- ▁STEADILY
- ▁DISGUST
- ▁SILLY
- ▁ARRIVE
- ▁DRANK
- ▁LEVI
- ▁COMMUNICAT
- ▁RACHEL
- ▁WASHINGTON
- ▁RESIGN
- ▁MEANTIME
- ▁LACE
- ▁ENGAGEMENT
- ▁QUIVER
- ▁SEPARATED
- ▁DISCUSSION
- ▁VENTURED
- ▁SURROUNDING
- ▁POLISH
- ▁NAIL
- ▁SWELL
- ▁JOKE
- ▁LINCOLN
- ▁STUDENT
- ▁GLITTER
- ▁RUSSIAN
- ▁READILY
- ▁CHRIS
- ▁POVERTY
- ▁DISGRACE
- ▁CHEESE
- ▁HEAVILY
- ▁SCALE
- ▁STAFF
- ▁ENTREAT
- ▁FAREWELL
- ▁LUNCH
- ▁PEEP
- ▁MULE
- ▁SOMEONE
- ▁DISAPPEAR
- ▁DECISION
- ▁PISTOL
- ▁PUN
- ▁SPUR
- ▁ASSUMED
- ▁EXTEND
- ▁ENTHUSIASM
- ▁DEFINITE
- ▁UNDERTAKE
- ▁COMMITTEE
- ▁SIMON
- ▁FENCE
- ▁APPLIED
- ▁RELATED
- ▁VICE
- ▁UNPLEASANT
- ▁PROBABLE
- ▁PROCURE
- ▁FROWN
- ▁CLOAK
- ▁HUMANITY
- ▁FAMILIES
- ▁PHILOSOPHER
- ▁DWARF
- ▁OVERCOME
- ▁DEFEAT
- ▁FASTENED
- ▁MARSH
- ▁CLASSES
- ▁TOMB
- ▁GRACIOUS
- ▁REMOTE
- ▁CELL
- ▁SHRIEK
- ▁RESCUE
- ▁POOL
- ▁ORGANIZ
- ▁CHOSE
- ▁CUTTING
- ▁COWARD
- ▁BORDER
- ▁DIRTY
- ▁MONKEY
- ▁HOOK
- ▁CHUCK
- ▁EMILY
- ▁JEST
- ▁PLAC
- ▁WEIGH
- ▁ASSOCIATE
- ▁GLIMPSE
- ▁STUCK
- ▁BOLT
- ▁MURDERER
- ▁PONY
- ▁DISTINGUISH
- ▁INSTITUTION
- ▁CUNNING
- ▁COMPLIMENT
- ▁APPETITE
- ▁REPUTATION
- ▁FEEBLE
- ▁KIN
- ▁SERIES
- ▁GRACEFUL
- ▁PLATFORM
- ▁BREEZE
- ▁PHRASE
- ▁CLAY
- MONT
- ▁RATTL
- ▁OPPOSITION
- ▁LANE
- ▁BOAST
- ▁GROWTH
- ▁INCLINATION
- ▁BEHAVE
- ▁SUSAN
- ▁DISTINCTION
- ▁DISLIKE
- ▁NICHOLAS
- ▁SATISFY
- ▁DRAMA
- ▁ELBOW
- ▁GAZING
- ▁CONSUM
- ▁SPIN
- ▁OATH
- ▁CHANNEL
- ▁CHARACTERISTIC
- ▁SPEAR
- ▁SLAIN
- ▁SAUCE
- ▁FROG
- ▁CONCEPTION
- ▁TIMID
- ▁ZEAL
- ▁APPARENT
- SHIRE
- ▁CENTER
- ▁VARIETY
- ▁DUSK
- ▁APT
- ▁COLUMN
- ▁REVENGE
- ▁RIVAL
- ▁IMITAT
- ▁PASSIONATE
- ▁SELFISH
- ▁NORMAN
- ▁REPAIR
- ▁THRILL
- ▁TREATMENT
- ▁ROSA
- ▁MARTIN
- ▁INDIFFERENT
- ▁THITHER
- ▁GALLANT
- ▁PEPPER
- ▁RECOLLECT
- ▁VINE
- ▁SCARCE
- ▁SHIELD
- ▁MINGLED
- CLOSE
- ▁HARSH
- ▁BRICK
- ▁HUMOR
- ▁MISCHIEF
- ▁TREMENDOUS
- ▁FUNCTION
- ▁SMART
- ▁SULTAN
- ▁DISMISS
- ▁THREATENED
- ▁CHEAP
- ▁FLOCK
- ▁ENDEAVOR
- ▁WHISK
- ▁ITALY
- ▁WAIST
- ▁FLUTTER
- ▁SMOKING
- ▁MONARCH
- ▁AFRICA
- ▁ACCUSE
- ▁HERBERT
- ▁REFRESH
- ▁REJOICE
- ▁PILLOW
- ▁EXPECTATION
- ▁POETRY
- ▁HOPELESS
- ▁PERISH
- ▁PHILOSOPHY
- ▁WHISTLE
- ▁BERNARD
- ▁LAMENT
- ▁IMPROVE
- ▁SUP
- ▁PERPLEX
- ▁FOUNTAIN
- ▁LEAGUE
- ▁DESPISE
- ▁IGNORANCE
- ▁REFERENCE
- ▁DUCK
- ▁GROVE
- ▁PURSE
- ▁PARTNER
- ▁PROPHET
- ▁SHIVER
- ▁NEIGHBOURHOOD
- ▁REPRESENTATIVE
- SAIL
- ▁WIP
- ▁ACQUIRED
- ▁CHIMNEY
- ▁DOCTRINE
- ▁MAXIM
- ▁ANGLE
- ▁MAJORITY
- ▁AUTUMN
- ▁CONFUSED
- ▁CRISTO
- ▁ACHIEVE
- ▁DISGUISE
- ▁REDUCED
- ▁EARLIER
- ▁THEATRE
- ▁DECIDE
- MINATED
- OLOGICAL
- ▁OCCUPATION
- ▁VIGOROUS
- ▁CONTINENT
- ▁DECLINE
- ▁COMMUNITY
- ▁MOTIONLESS
- ▁HATRED
- ▁COMMUNICATION
- ▁BOWL
- ▁COMMENT
- ▁APPROVE
- ▁CEREMONY
- ▁CRIMINAL
- ▁SCIENTIFIC
- ▁DUCHESS
- ▁VIVID
- ▁SHIFT
- ▁AVAIL
- ▁DAMP
- ▁JOHNSON
- ▁SLENDER
- ▁CONTRAST
- ▁AMUSEMENT
- ▁PLOT
- ▁LYN
- ▁ASSOCIATION
- ▁SNATCH
- ▁UNCERTAIN
- ▁PRESSURE
- ▁PERCH
- ▁APPLY
- ▁PLANET
- ▁NOTWITHSTANDING
- ▁SWUNG
- ▁STIRRED
- ▁ATTENDANT
- ▁ENJOYMENT
- ▁WORRY
- ▁ALBERT
- ▁NAKED
- ▁TALENT
- ▁MARIAN
- ▁REFORM
- ▁DELIBERATE
- ▁INTELLIGENT
- ▁SENSITIVE
- ▁YONDER
- ▁PUPIL
- ▁FRIGHTFUL
- ▁DOUBTFUL
- ▁STANDARD
- ▁MAGISTRATE
- ▁SHEPHERD
- ▁STOMACH
- ▁DEPOSIT
- ▁RENEW
- ▁HEDGE
- ▁FRANCS
- ▁POSSIBILITY
- ▁RESEMBLE
- ▁FATIGUE
- ▁PORTRAIT
- ▁FAVORITE
- ▁CREAM
- ▁BURG
- ▁SECRETARY
- ▁DIVERS
- ▁ACTIVITY
- ▁SPECULAT
- ▁HUMOUR
- ▁FITTED
- ▁EXTERNAL
- ▁CETERA
- ▁WRAPPED
- ▁WHIT
- ▁FRED
- ▁EXAMINATION
- ▁LODGING
- ▁OWING
- ▁JAW
- ▁CROW
- ▁BALANCE
- ▁PUFF
- ▁TENDERNESS
- ▁PORTHOS
- ▁ANCHOR
- ▁INTERRUPT
- ▁NECESSARILY
- ▁PERPETUAL
- ▁AGONY
- ▁POPE
- ▁SCHOLAR
- ▁SCOTLAND
- ▁SUPPRESS
- ▁WRATH
- ▁WRECK
- ▁EXCEED
- ▁PERFECTION
- ▁INDIA
- ▁TRADITION
- ▁SECTION
- ▁EASTERN
- ▁DOORWAY
- ▁WIVES
- ▁CONVENTION
- ▁ANNOUNC
- ▁EGYPT
- ▁CONTRADICT
- ▁SCRATCH
- ▁CENTRAL
- ▁GLOVE
- ▁WAX
- ▁PREPARE
- ▁ACCOMPANY
- ▁INCREASING
- ▁LIBERAL
- ▁RAISING
- ▁ORANGE
- ▁SHOE
- ▁ATTRIBUTE
- ▁LITERATURE
- ▁PUZZLED
- ▁WITHDRAW
- ▁WHITHER
- ▁HAWK
- ▁MOONLIGHT
- ▁EXAMINE
- ▁HAPPILY
- ▁PRECEDE
- ▁DETECTIVE
- ▁INCHES
- ▁SOLITARY
- ▁DUTCH
- ▁NAPOLEON
- ▁UNEASY
- ▁CARDINAL
- ▁BLEW
- ▁FOWL
- ▁DECORAT
- ▁CHILDHOOD
- ▁TORMENT
- ▁LOSING
- ▁PERMISSION
- ▁BLANK
- ▁UPSTAIRS
- ▁CAPACITY
- ▁TRIFLE
- ▁FOLLY
- ▁RECOGNIZE
- ▁REMOVE
- ▁VENGEANCE
- ▁ENTERPRISE
- ▁BEDROOM
- ▁ANYHOW
- ▁INQUIRY
- ▁ASHES
- ▁DRAG
- ▁HUSH
- ▁AWKWARD
- ▁SATURDAY
- ▁GENUINE
- ▁SURVIV
- ▁SKIRT
- ▁AFFECTIONATE
- ▁TANG
- ▁MUTUAL
- ▁DISPUTE
- ▁EAGLE
- ▁INCOME
- ▁BIND
- ▁FAME
- ▁IMPROVEMENT
- ROVING
- ▁DIFFER
- ▁AWOKE
- ▁SLEEVE
- ▁SOLITUDE
- ▁FAVOURITE
- JI
- ▁DETECT
- ▁COMPREHEND
- ▁PREPARING
- ▁SERPENT
- ▁SUMMIT
- ▁KNOT
- ▁KNIT
- ▁COPY
- ▁STOPPING
- ▁FADED
- ▁HIDEOUS
- ▁JULIE
- STEAD
- ▁SHINE
- ▁CONFLICT
- ▁PROPOSITION
- ▁REFUGE
- ▁GALLERY
- ▁BUNDLE
- ▁AXE
- ▁SLAVERY
- ▁MASK
- ▁ALYOSHA
- ▁LADDER
- ▁DEPARTMENT
- ▁DISCHARGE
- ▁DEPRESS
- ▁GALLOP
- ▁SCARLET
- ▁KITTY
- ▁RECEIVING
- ▁SURRENDER
- ▁SUSTAIN
- ▁TWILIGHT
- ▁CONGRESS
- ▁IRELAND
- ▁FUNNY
- ▁LEND
- ▁CONSTITUTE
- ▁FUNERAL
- ▁CRYSTAL
- ▁SPAIN
- ▁EXCEEDINGLY
- ▁DAMN
- ▁COMMUN
- ▁CIVILIZATION
- ▁PREJUDICE
- ▁PORCH
- ▁ASSISTANT
- ▁INDUSTRY
- ▁TUMBLE
- ▁DEFENCE
- ▁HITHER
- ▁SMOT
- ▁COLONI
- ▁AMAZEMENT
- ▁MARGUERITE
- ▁MIRACLE
- ▁INHERIT
- ▁BEGGAR
- ▁ENVELOPE
- ▁INDIGNATION
- ▁NATASHA
- ▁PROPOSAL
- ▁FRAGMENT
- ▁ROUSED
- ▁ROAST
- ENCIES
- ▁COMMENCED
- ▁RESOURCE
- ▁POPULATION
- ▁QUOTH
- ▁PURSUE
- ▁EDUCAT
- ▁AFFLICT
- ▁CONTACT
- ▁CRIMSON
- ▁DIVISION
- ▁DISORDER
- ▁COPPER
- ▁SOLICIT
- ▁MODERATE
- ▁DRUM
- ▁SWIM
- ▁SALUTE
- ▁ASSUME
- ▁MUSCLE
- ▁OVERWHELM
- ▁SHAKESPEARE
- ▁STRUGGLING
- ▁TRANQUIL
- ▁CHICKEN
- ▁TREAD
- ▁CLAW
- ▁BIBLE
- ▁RIDGE
- ▁THREAT
- ▁VELVET
- ▁EXPOSED
- ▁IDIOT
- ▁BARREL
- ▁PENNY
- ▁TEMPTATION
- ▁DANGLARS
- ▁CENTURIES
- ▁DISTRIBUT
- ▁REJECT
- ▁RETORTED
- ▁CONCENTRAT
- ▁CORDIAL
- ▁MOTOR
- ▁CANNON
- KEEP
- ▁WRETCH
- ▁ASSURANCE
- ▁THIEF
- ▁SURVEY
- ▁VITAL
- ▁RAILWAY
- ▁JACKSON
- ▁CRASH
- ▁GROWL
- ▁COMBAT
- ▁RECOLLECTION
- ▁SECURITY
- ▁JACOB
- ▁CLUTCH
- ▁BLANKET
- ▁NANCY
- ▁CELLAR
- ▁CONVENIENT
- ▁INDIGNANT
- ▁COARSE
- ▁WORM
- ▁SCREEN
- ▁TRANSPORT
- ▁BULLET
- ▁APPRECIATE
- ▁DEVOTION
- ▁INVISIBLE
- ▁DRIED
- ▁MIXTURE
- ▁CANDID
- ▁PERFORMANCE
- ▁RIPE
- ▁EXQUISITE
- ▁BARGAIN
- ▁TOBACCO
- ▁LOYAL
- ▁MOULD
- ▁ATTENTIVE
- ▁DOROTHY
- ▁BRUTE
- ▁ESTABLISHMENT
- ▁ABILITY
- ▁INHABIT
- ▁OBSCURE
- ▁BORROW
- ▁ESSENCE
- ▁DISMAY
- ▁FLEE
- ▁BLADE
- ▁PLUCK
- ▁COFFIN
- ▁SUNSET
- ▁STEPHEN
- ▁ECONOMIC
- ▁HOLIDAY
- ▁MECHANICAL
- ▁COTTON
- ▁AWAKENED
- ▁SEIZE
- ▁RIDICULOUS
- ▁SANCHO
- ▁HESITATION
- ▁CORPSE
- ▁SAVING
- HOLD
- FOOT
- ▁ELDEST
- ▁DESPITE
- ▁EDITH
- ▁CHERISH
- ▁RESISTANCE
- ▁WILSON
- ▁ARGUE
- ▁INQUIRE
- ▁APPREHENSION
- ▁AVENUE
- ▁DRAKE
- ▁PROPOSE
- HURST
- ▁INFERIOR
- ▁STAIRCASE
- ▁WHEREFORE
- ▁CARLYLE
- ▁COUCH
- ▁ROUTE
- ▁POLITICS
- ▁TOMORROW
- ▁THRONG
- ▁NAUGHT
- ▁SUNLIGHT
- ▁INDIFFERENCE
- ▁OBEDIENCE
- ▁RECEPTION
- ▁VEGETABLE
- ▁IMPERFECT
- ▁RESIDENCE
- ▁TURKEY
- ▁VIOLET
- ▁SARAH
- ▁ALTAR
- ▁GRIEVE
- ▁JERK
- ▁ENSU
- ▁MAGICIAN
- ▁BLOSSOM
- ▁LANTERN
- ▁RESOLUTE
- ▁THOUGHTFULLY
- ▁FORTNIGHT
- ▁TRUMPET
- ▁VALJEAN
- ▁UNWILLING
- ▁LECTURE
- ▁WHEREUPON
- ▁HOLLAND
- ▁CHANGING
- ▁CREEK
- ▁SLICE
- ▁NORMAL
- ▁ANNIE
- ▁ACCENT
- ▁FREDERICK
- ▁DISAGREEABLE
- ▁RUBBED
- ▁DUMB
- ▁ESTABLISH
- ▁IMPORT
- ▁AFFIRM
- ▁MATTHEW
- ▁BRISK
- ▁CONVERT
- ▁BENDING
- ▁IVAN
- ▁MADEMOISELLE
- ▁MICHAEL
- ▁EASIER
- ▁JONES
- ▁FACING
- ▁EXCELLENCY
- ▁LITERARY
- ▁GOSSIP
- ▁DEVOUR
- ▁STAGGER
- ▁PENCIL
- ▁AVERAGE
- ▁HAMMER
- ▁TRIUMPHANT
- ▁PREFERRED
- ▁APPLICATION
- ▁OCCUPY
- ▁AUTHORITIES
- BURN
- ▁ASCERTAIN
- ▁CORRIDOR
- ▁DELICIOUS
- ▁PRACTISE
- ▁UNIVERSE
- ▁SHILLING
- ▁CONTEST
- ▁ASHORE
- ▁COMMIT
- ▁ADMINISTRATION
- ▁STUDIED
- ▁RIGID
- ▁ADORN
- ▁ELSEWHERE
- ▁INNOCENCE
- ▁JOURNAL
- ▁LANDSCAPE
- ▁TELEGRAPH
- ▁ANGRILY
- ▁CAMPAIGN
- ▁UNJUST
- ▁CHALLENGE
- ▁TORRENT
- ▁RELATE
- ▁ASSEMBLED
- ▁IMPRESSED
- ▁CANOE
- ▁CONCLUD
- ▁QUIXOTE
- ▁SATISFACTORY
- ▁NIECE
- ▁DEAF
- ▁RAFT
- ▁JIMMY
- ▁GLID
- ▁REGULAT
- ▁CHATTER
- ▁GLACIER
- ▁ENVY
- ▁STATUE
- ▁BOSTON
- ▁RICHMOND
- ▁DENIED
- ▁FANNY
- ▁SOLOMON
- ▁VULGAR
- ▁STALK
- ▁REPLACE
- ▁SPOON
- ▁BASIN
- ▁FEATURE
- ▁CONVICT
- ▁ARCHITECT
- ▁ADMIRAL
- ▁RIBBON
- ▁PERMANENT
- ▁APRIL
- ▁JOLLY
- ▁NEIGHBORHOOD
- ▁IMPART
- BOROUGH
- CAMP
- ▁HORRID
- ▁IMMORTAL
- ▁PRUDENCE
- ▁SPANIARD
- ▁SUPPOSING
- ▁TELEPHONE
- ▁TEMPERATURE
- ▁PENETRATE
- ▁OYSTER
- ▁APPOINTMENT
- ▁EGYPTIAN
- ▁DWELT
- ▁NEPHEW
- ▁RAILROAD
- ▁SEPTEMBER
- ▁DEVICE
- ▁WHEAT
- ▁GILBERT
- ▁ELEGANT
- ▁ADVERTISE
- ▁RATIONAL
- ▁TURTLE
- ▁BROOD
- ▁ASSEMBLY
- ▁CULTIVATE
- ▁EDITOR
- ▁SPECIMEN
- ▁UNDOUBTEDLY
- ▁WHALE
- ▁DROPPING
- ▁BALLOON
- ▁MEDICAL
- COMB
- ▁COMPOSITION
- ▁FOOTSTEPS
- ▁LAUNCELOT
- ▁DISCOURSE
- ▁ERRAND
- ▁CONVERSE
- ▁ADVANCING
- ▁DOWNSTAIRS
- ▁TUMULT
- ▁CORRUPT
- ▁SUFFICE
- ▁ANGUISH
- ▁SHAGGY
- ▁RETIRE
- ▁TIMBER
- ▁BLAZE
- ▁ABSTRACT
- ▁EMBROIDER
- ▁PHOTOGRAPH
- ▁PROSPERITY
- ▁TERRIBLY
- ▁TERRITORY
- ▁THRESHOLD
- ▁PAVEMENT
- ▁INJURED
- ▁LIMP
- ▁AGITATION
- ▁RASCAL
- ▁PRESUME
- ▁OBSERVING
- ▁OBSTACLE
- ▁SIMPLICITY
- ▁SLUMBER
- ▁SUPPLIED
- ▁COMBINATION
- ▁DRAIN
- ▁WILDERNESS
- ▁BELIEVING
- ▁VILLAIN
- ▁RECKLESS
- ▁INJURY
- ▁CLAPP
- ▁FRIDAY
- ▁HERCULES
- ▁KENNEDY
- ▁SYMPTOM
- ▁SLEDGE
- ▁CEILING
- ▁LEMON
- ▁PLAGUE
- ▁MONDAY
- ▁CANVAS
- ▁IMPATIENCE
- ▁UNCOMFORTABLE
- ▁ACCESS
- ▁FROZEN
- ▁SENATOR
- ▁FRANZ
- ▁SWIMMING
- ▁BARRIER
- ▁ADJUST
- ▁COMPARISON
- ▁PROCLAIM
- ▁WRINKL
- ▁OVERLOOK
- ▁MITYA
- ▁GUILT
- ▁PERCEPTION
- ▁PRECAUTION
- ▁SPECTATOR
- ▁SURPRISING
- ▁DISTRACT
- ▁DISDAIN
- ▁BONNET
- ▁MAGNET
- ▁PROFESS
- ▁CONFOUND
- ▁NARRATIVE
- ▁STRUCTURE
- ▁SKETCH
- ▁ULTIMATE
- ▁GLOBE
- ▁INSECT
- FICIENCY
- ▁ORCHARD
- ▁AMIABLE
- ▁DESCENT
- ▁INDEPENDENCE
- ▁MANUFACTURE
- ▁SPRINKLE
- ▁NIGHTINGALE
- ▁CUSHION
- ▁EMINENT
- ▁SCOTT
- ▁ARRAY
- ▁COSETTE
- ▁WAVING
- ▁EXTRACT
- ▁IRREGULAR
- ▁PERSECUT
- ▁DERIVED
- ▁WITHDREW
- ▁CAUTION
- ▁SUSPICIOUS
- ▁MEMORIES
- ▁NOWHERE
- ▁SUBTLE
- ▁THOROUGH
- Q
- ▁APPROPRIATE
- ▁SLAUGHTER
- ▁YOURSELVES
- ▁THUMB
- ▁TWAS
- ▁ABODE
- ▁BIDDING
- ▁CONSPICUOUS
- ▁REBECCA
- ▁SERGEANT
- ▁APRON
- ▁ANTICIPATE
- ▁DISCIPLINE
- ▁GLANCING
- ▁PILGRIM
- ▁SULLEN
- ▁CONTRIBUTE
- ▁PRAIRIE
- ▁CARVED
- ▁COMMERCE
- ▁EXCLAMATION
- ▁MUSCULAR
- ▁NOVEMBER
- ▁PHENOMENA
- ▁SYMBOL
- ▁UMBRELLA
- ▁DIMINISH
- ▁PARLOUR
- ▁THREATENING
- ▁STUMP
- ▁EXTENSIVE
- ▁PLEASING
- ▁REMEMBRANCE
- ▁COMBINED
- ▁SHERIFF
- ▁SHAFT
- ▁LAURA
- ▁INTERCOURSE
- ▁STRICKEN
- ▁SUPPLIES
- ▁LANDLORD
- ▁SHRINK
- ▁PRICK
- ▁CAESAR
- ▁DRUG
- ▁BEWILDERED
- ▁NAUTILUS
- ▁BRUTAL
- ▁COMMERCIAL
- ▁MAGGIE
- ▁SPHERE
- ▁VIRGIN
- ▁BRETHREN
- ▁DESTINY
- ▁POLICY
- ▁TERRIFIED
- ▁HOUSEKEEPER
- ▁CRAZY
- ▁ARDENT
- ▁DISCERN
- ▁WRAP
- ▁MARQUIS
- ▁RUSSIA
- MOUTH
- ▁BRITAIN
- ▁HARBOUR
- ▁CONCERT
- ▁DONKEY
- ▁DAMAGE
- ▁SLIM
- ABOUT
- ▁LUXURY
- ▁MONSTROUS
- ▁TENDENCY
- ▁PARADISE
- ▁CULTURE
- ▁JULIUS
- ▁RAOUL
- ▁REMEDY
- ▁DECAY
- ▁SCOLD
- ▁SPLIT
- ▁ASSAULT
- ▁DECEMBER
- ▁MOSCOW
- ▁EXPLORE
- ▁TROUSERS
- ▁WRIST
- PIECE
- ▁MUSKET
- ▁VALENTINE
- ▁TYRANT
- ▁ABRAHAM
- ▁MEDIUM
- ▁ARTIFICIAL
- ▁FACULTY
- ▁OBLIGATION
- ▁RESEMBLANCE
- ▁INQUIRIES
- ▁DETAIN
- ▁SWARM
- ▁PLEDGE
- ▁ADMIRABLE
- ▁DEFECT
- ▁SUPERINTEND
- ▁PATRIOT
- ▁CLUNG
- ▁DISMAL
- ▁RECIT
- ▁IGNOR
- ▁AMELIA
- ▁JUSTIFY
- ▁ELEPHANT
- ▁ESTIMATE
- ▁KNELT
- ▁SERVING
- ▁WHIM
- ▁SHRILL
- ▁STUDIO
- ▁TEXT
- ▁ALEXANDER
- ▁WROUGHT
- ▁ABUNDANT
- ▁SITUATED
- ▁REGAIN
- ▁FIERY
- ▁SNEER
- ▁SWEAT
- ▁GLARE
- ▁NIGH
- ▁ESCORT
- ▁INEVITABLE
- ▁PSMITH
- ▁RELUCTANT
- ▁PRECEDING
- ▁RESORT
- ▁OUTRAGE
- ▁AMBASSADOR
- ▁CONSOLATION
- ▁RECOGNITION
- ▁REMORSE
- ▁BEHALF
- ▁FORMIDABLE
- ▁GRAVITY
- ▁DIVIDE
- ▁CONFRONT
- ▁GIGANTIC
- ▁OCTOBER
- ▁FLANK
- ▁SLEW
- ▁CLARA
- ▁FILM
- ▁BULK
- ▁POMP
- ▁ELEANOR
- ▁EMPHASIS
- ▁JAPANESE
- ▁CAVALRY
- ▁EXCLUSIVE
- ▁PERFUME
- ▁BRONZE
- ▁FEDERAL
- ▁LIQUID
- ▁RUBBING
- ▁OVEN
- DOLPH
- ▁CONVULS
- ▁DEPRIVED
- ▁RESPONSIBILITY
- ▁SIGNIFICANT
- ▁WAISTCOAT
- ▁CLUSTER
- ▁MARTHA
- ▁REVERSE
- ▁ATTORNEY
- ▁DROOP
- ▁SKILFUL
- ▁HABITUAL
- ▁PUMP
- ▁INTERVEN
- ▁OWL
- ▁CONJECTURE
- ▁FANTASTIC
- ▁RESPONSIBLE
- ▁DESTINED
- ▁DOCUMENT
- ▁THEREUPON
- ▁GODDESS
- ▁PACIFIC
- ▁WARRANT
- ▁COSTUME
- ▁BRIDLE
- ▁CALIFORNIA
- ▁DEMOCRATIC
- ▁EUSTACE
- ▁SQUIRREL
- ▁UNCOMMON
- ▁MARVELLOUS
- ▁PLOUGH
- ▁TRAGEDY
- ▁VAULT
- ▁HESITATE
- ▁REFRAIN
- ▁ADMIRING
- ▁CORPORAL
- ▁ENTITLED
- ▁SHREWD
- ▁SQUEEZ
- ▁ACCURATE
- ▁TEMPEST
- ▁MONUMENT
- ▁SIEGE
- ▁CHINESE
- ▁RAVEN
- ▁LOUNG
- ▁ASSASSIN
- ▁INFLICT
- ▁AGITATED
- ▁DESIRABLE
- ▁EARLIEST
- ▁LAUNCH
- ▁PILOT
- ▁PULSE
- ▁MUTE
- LEIGH
- ▁LIQUOR
- ▁SCARECROW
- ▁SKULL
- ▁DESOLATE
- ▁SUBLIME
- ▁SERENE
- ▁RECESS
- ▁WAKING
- ▁CHARLOTTE
- ▁CIRCULAR
- ▁INJUSTICE
- ▁PINOCCHIO
- ▁PRISCILLA
- ▁THYSELF
- ▁OCCURRENCE
- ▁CASUAL
- ▁FRANTIC
- ▁LEGEND
- ▁FERTIL
- ▁BACKGROUND
- ▁DELICACY
- ▁ESTRALLA
- ▁MANUSCRIPT
- ▁RESPONSE
- ▁UNIVERSITY
- ▁WOLVES
- ▁SCANDAL
- ▁STUMBLE
- ▁HOARSE
- ▁BODILY
- ▁CONVENT
- ▁EXAMINING
- ▁INCAPABLE
- ▁PERCEIVING
- ▁PHILADELPHIA
- ▁SUBSEQUENT
- ▁THIEVES
- ▁ACCUMULAT
- ▁DAMSEL
- ▁SCOTCH
- ▁UNDERNEATH
- ▁NOBILITY
- ▁SMASH
- ▁REVOLT
- ▁ENGAGE
- ▁CATHEDRAL
- ▁CHAMPION
- ▁DESPATCH
- ▁ETERNITY
- ▁JANUARY
- ▁PLEADED
- ▁PROBABILITY
- ▁JIMMIE
- ▁PARALLEL
- ▁FISHERMAN
- ▁JERRY
- ▁SWORE
- ▁DRAUGHT
- ▁OPPONENT
- ▁PRIMITIVE
- ▁SIGNIFICANCE
- ▁SUBSTANTIAL
- ▁AMAZED
- ▁DUNBAR
- ▁COMMEND
- ▁CONTEMPLATE
- ▁TESTIMONY
- ▁IMPERIAL
- ▁ADAPT
- ▁JUICE
- ▁CALAMIT
- CULAR
- ▁CHATEAU
- ▁PHOENIX
- ▁PRUDENT
- ▁SOLUTION
- ▁VILLEFORT
- ▁REACTION
- ▁RELAX
- ▁YU
- ▁PROHIBIT
- ▁DISTRUST
- ▁PLUNDER
- ▁WELFARE
- ▁NAVIGAT
- ▁PARLOR
- ▁LAZY
- ▁DETACH
- OMETER
- ▁PRIV
- ▁DISCOURAGE
- ▁OBSTINATE
- ▁REJOICING
- ▁SERMON
- ▁VEHICLE
- ▁FANCIES
- ▁ENLIGHTEN
- ▁ACUTE
- ▁ILLUSION
- ▁ANTHEA
- ▁MARTIAN
- ▁EXCITE
- ▁GENEROSITY
- OLOGIST
- ▁AMAZING
- ▁UNWORTHY
- ▁INTERNAL
- ▁INCENSE
- ▁VIBRAT
- ▁ADHERE
- ROACH
- ▁FEBRUARY
- ▁MEXICAN
- ▁POTATOES
- ▁INCESSANT
- ▁INTERPOSED
- ▁PARCEL
- ▁VEXED
- ▁PROMOTE
- MIDST
- ▁ARISTOCRAT
- ▁CYRIL
- ▁EMBARK
- ▁ABUNDANCE
- ▁LITERALLY
- ▁SURGEON
- ▁TERRACE
- ▁ATLANTIC
- ▁MARTYR
- ▁SPECK
- ▁SENATE
- ▁LOAF
- ▁ADMINISTER
- ▁APPREHEND
- ▁SUBDUED
- ▁TEMPORARY
- ▁DOMINION
- ▁ELABORATE
- ▁DIGNIFIED
- ▁ELIZA
- ▁SPLASH
- ▁CONSEIL
- ▁DEXTER
- ▁UNSEEN
- ▁TRAGIC
- VOCATION
- ▁GRATIFY
- ▁BACHELOR
- ▁DEFENSE
- ▁EXCURSION
- ▁FACULTIES
- ▁PROPRIETOR
- ▁SYMPATHETIC
- ▁UNNECESSARY
- ▁RADIANT
- ▁VACANT
- ▁OUNCE
- ▁SCREW
- ▁PHENOMENON
- ▁PROMINENT
- ▁WORRIED
- ▁STUDIES
- ▁CLIMATE
- ▁KEITH
- ▁ARAMIS
- ▁BLISS
- ▁CONTINUAL
- ▁SURPASS
- ▁HEBREW
- ▁IDENTITY
- ▁PROVOKE
- ▁TEMPERAMENT
- ▁CHARIOT
- ▁HARBOR
- ▁NINTH
- ▁PRIOR
- ▁DESIROUS
- ▁JERUSALEM
- ▁UNDERTAKING
- ▁EDISON
- ▁MIRTH
- ▁SCOUT
- ▁APPARATUS
- ▁ILLUSTRATION
- ▁INTELLIGIBLE
- ▁INVARIABLY
- ▁PIERCED
- ▁REVIEW
- ▁FLICKER
- ▁HAZARD
- ▁REVELATION
- ▁DIXON
- ▁EXCITING
- ▁GOSPEL
- ▁CONSTANCE
- ▁OVERTAKE
- ▁GUINEA
- ▁ALADDIN
- ▁CHICAGO
- ▁TULLIVER
- ▁HAMILTON
- ▁GARRISON
- ▁DISCIPLE
- ▁INTENSITY
- ▁TRAITOR
- ▁CHANCELLOR
- ▁PROVERB
- ▁DAGGER
- ▁FORESEE
- ▁CONFIDE
- ▁GLIMMER
- ▁CHAUVELIN
- ▁ILLUSTRATE
- ▁VOLUNTEER
- ▁JUNGLE
- ▁STREAK
- ▁SUNRISE
- ▁DISSOLV
- ▁QUEST
- ▁AWHILE
- ▁FELICITY
- ▁LEGISLATURE
- ▁LEONORA
- ▁MAGAZINE
- ▁PITIFUL
- ▁COLONY
- ▁SHAWL
- ▁ARRIVING
- ▁FUNDAMENTAL
- ▁CARPENTER
- ▁OVERFLOW
- ▁EXPAND
- ▁HARVEST
- ▁FEMININE
- ▁INNUMERABLE
- ▁SCRAMBLE
- ▁TWENTIETH
- ▁TRIFLING
- ▁GHASTL
- ▁CONQUEST
- ▁DANIEL
- ▁FACILIT
- ▁FORSAKE
- ▁BEHAVIOUR
- ▁GORGEOUS
- ▁PRODUCING
- ▁HAPPIER
- ▁PROMISING
- ▁RAINBOW
- ▁INSTINCTIVELY
- ▁DECREE
- ▁EYEBROWS
- ▁IRRESISTIBLE
- ▁PHARAOH
- ▁SCROOGE
- ▁UNNATURAL
- ▁CRUMBS
- ▁REFINED
- ▁DREARY
- ▁TRENCH
- ▁CONVINCE
- ▁FRINGE
- ▁EXTREMITY
- ▁INTIMACY
- ▁SCOUNDREL
- ▁SUFFRAGE
- ▁UNEASINESS
- ▁BARRICADE
- ▁CIRCULAT
- ▁SAMUEL
- ▁BRUCE
- ▁DARCY
- <sos/eos>
init: xavier_uniform
input_size: 83
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: false
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: null
frontend_conf: {}
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_fbank_pitch_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: contextual_block_transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.9.7
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
huyue012/wav2vec2-base-cynthia-tedlium-2500
|
huyue012
| 2021-11-17T22:08:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-tedlium-2500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-tedlium-2500
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4356
- Wer: 0.1796
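A minimal inference sketch with the Transformers ASR pipeline; the audio path is a placeholder, and 16 kHz mono input plus a local ffmpeg install are assumed:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="huyue012/wav2vec2-base-cynthia-tedlium-2500")

# "sample.wav" is a placeholder path; 16 kHz mono audio is assumed.
print(asr("sample.wav")["text"])
```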
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2811 | 6.58 | 500 | 3.1455 | 0.9934 |
| 3.0533 | 13.16 | 1000 | 2.9568 | 0.9934 |
| 1.8269 | 19.73 | 1500 | 0.7595 | 0.2484 |
| 0.5103 | 26.31 | 2000 | 0.4356 | 0.1796 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
JaviBJ/sagemaker-distilbert-emotion
|
JaviBJ
| 2021-11-17T17:02:01Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: sagemaker-distilbert-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9165
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2469
- Accuracy: 0.9165
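A hedged usage sketch with the text-classification pipeline; the input sentence is illustrative and the label names depend on how the classification head was saved:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JaviBJ/sagemaker-distilbert-emotion")

# Illustrative input; labels may be emotion names or generic LABEL_i ids,
# depending on the id2label mapping stored in the checkpoint's config.json.
print(classifier("I am so happy today!"))
```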
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9351 | 1.0 | 500 | 0.2469 | 0.9165 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
usami/distilbert-base-uncased-finetuned-cola
|
usami
| 2021-11-17T06:31:12Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5491920151313351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7767
- Matthews Correlation: 0.5492
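A hedged usage sketch with the text-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

cola = pipeline("text-classification", model="usami/distilbert-base-uncased-finetuned-cola")

# Example sentence is illustrative. Auto-generated fine-tunes often expose generic
# LABEL_0 / LABEL_1 ids rather than "unacceptable" / "acceptable"; check config.json.
print(cola("The book was written by the author."))
```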
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5244 | 1.0 | 535 | 0.5349 | 0.4240 |
| 0.3471 | 2.0 | 1070 | 0.5087 | 0.5079 |
| 0.235 | 3.0 | 1605 | 0.6847 | 0.5106 |
| 0.1718 | 4.0 | 2140 | 0.7767 | 0.5492 |
| 0.1271 | 5.0 | 2675 | 0.8580 | 0.5469 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
relh/COHESIV
|
relh
| 2021-11-16T18:05:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
title: AnimeGANv2
emoji: ⚡
colorFrom: yellow
colorTo: blue
sdk: gradio
app_file: app.py
pinned: false
---
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
|
Jeska/BertjeWDialData
|
Jeska
| 2021-11-16T18:04:08Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: BertjeWDialData
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialData
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2608
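A minimal fill-mask sketch; the Dutch example sentence is made up, and the [MASK] token follows the BERT-style tokenizer used by the base model:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Jeska/BertjeWDialData")

# The sentence is an illustrative placeholder; BERT-style models use the [MASK] token.
print(fill("Ik heb vandaag geen tijd om te [MASK]."))
```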
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 297 | 2.2419 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
healx/biomedical-slot-filling-reader-base
|
healx
| 2021-11-16T09:16:36Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2109.08564",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
Reader model for biomedical slot filling; see https://arxiv.org/abs/2109.08564 for details. The model is initialized with [biobert-base](https://huggingface.co/dmis-lab/biobert-v1.1).
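A hedged usage sketch with the question-answering pipeline; the question and context below are illustrative placeholders, not examples from the paper:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="healx/biomedical-slot-filling-reader-base")

# Question and context are illustrative placeholders.
print(qa(
    question="Which disease is associated with BRCA1 mutations?",
    context="Germline mutations in BRCA1 are associated with hereditary breast and ovarian cancer.",
))
```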
|
healx/biomedical-slot-filling-reader-large
|
healx
| 2021-11-16T09:15:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2109.08564",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
Reader model for biomedical slot filling; see https://arxiv.org/abs/2109.08564 for details. The model is initialized with [biobert-large](https://huggingface.co/dmis-lab/biobert-large-cased-v1.1).
|
ychu4/distilbert-base-uncased-finetuned-cola
|
ychu4
| 2021-11-16T03:23:59Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.509687043672971
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7512
- Matthews Correlation: 0.5097
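A hedged sketch of explicit loading with AutoTokenizer and AutoModelForSequenceClassification; the example sentence is illustrative, and the label order should be checked in the checkpoint's config.json:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ychu4/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input sentence.
inputs = tokenizer("They drank the pub dry.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Which index means "acceptable" depends on the saved id2label mapping.
print(probs)
```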
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5117 | 0.4469 |
| 0.3496 | 2.0 | 1070 | 0.5538 | 0.4965 |
| 0.2377 | 3.0 | 1605 | 0.6350 | 0.4963 |
| 0.1767 | 4.0 | 2140 | 0.7512 | 0.5097 |
| 0.1383 | 5.0 | 2675 | 0.8647 | 0.5056 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.1
|
daqiao202/distilgpt2-finetuned-wikitext2
|
daqiao202
| 2021-11-16T02:28:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
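A minimal generation sketch with the text-generation pipeline; the prompt is a placeholder and the generation settings are arbitrary:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="daqiao202/distilgpt2-finetuned-wikitext2")

# The prompt is a placeholder; max_length and num_return_sequences are arbitrary choices.
print(generator("The history of natural language processing", max_length=50, num_return_sequences=1))
```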
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
nbroad/muril-bigbird-base
|
nbroad
| 2021-11-16T02:21:08Z | 4 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"big_bird",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
MuRIL base ported to the BigBird architecture.
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
|
nbroad/muril-bigbird-large-1k
|
nbroad
| 2021-11-16T02:20:28Z | 4 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"big_bird",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
Note: until the git history is fixed, make sure to pull the model at checkpoint 90000:
`git checkout 57f0ef792a759c022b83433c7a26df52f3da3608`
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
|
Waynehillsdev/wav2vec2-base-timit-demo-colab
|
Waynehillsdev
| 2021-11-16T00:41:40Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4180
- Wer: 0.3392
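A hedged inference sketch using the processor and CTC head directly; it assumes the repo ships the processor files and that the input is 16 kHz mono audio (the file path is a placeholder):

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Waynehillsdev/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, rate = sf.read("sample.wav")  # placeholder path; 16 kHz mono is assumed
inputs = processor(speech, sampling_rate=rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```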
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.656 | 4.0 | 500 | 1.8973 | 1.0130 |
| 0.8647 | 8.0 | 1000 | 0.4667 | 0.4705 |
| 0.2968 | 12.0 | 1500 | 0.4211 | 0.4035 |
| 0.1719 | 16.0 | 2000 | 0.4725 | 0.3739 |
| 0.1272 | 20.0 | 2500 | 0.4586 | 0.3543 |
| 0.1079 | 24.0 | 3000 | 0.4356 | 0.3484 |
| 0.0808 | 28.0 | 3500 | 0.4180 | 0.3392 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-100m-mls-german-ft-2
|
patrickvonplaten
| 2021-11-16T00:01:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"multilingual_librispeech",
"generated_from_trainer",
"dataset:multilingual_librispeech",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- multilingual_librispeech
- generated_from_trainer
datasets:
- multilingual_librispeech
model-index:
- name: wav2vec2-100m-mls-german-ft-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-100m-mls-german-ft-2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-100m](https://huggingface.co/facebook/wav2vec2-xls-r-100m) on the MULTILINGUAL_LIBRISPEECH - GERMAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9304
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.9545 | 14.29 | 500 | 2.9354 | 1.0 |
| 2.9537 | 28.57 | 1000 | 2.9359 | 1.0 |
| 2.9602 | 42.86 | 1500 | 2.9302 | 1.0 |
| 2.9586 | 57.14 | 2000 | 2.9298 | 1.0 |
| 2.9331 | 71.43 | 2500 | 2.9314 | 1.0 |
| 2.9321 | 85.71 | 3000 | 2.9304 | 1.0 |
| 2.9652 | 100.0 | 3500 | 2.9304 | 1.0 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
Jacobo/axiothea
|
Jacobo
| 2021-11-15T20:07:05Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"grc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
language:
- grc
model-index:
- name: dioBERTo
results: []
widget:
- text: "Πλάτων ὁ Περικτιόνης <mask> γένος ἀνέφερεν εἰς Σόλωνα."
- text: "ὁ Κριτίας ἀπέβλεψε <mask> τὴν θύραν."
- text: "Ὦ φίλε Κλεινία, καλῶς μὲν <mask>."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# axiothea
This is an experimental RoBERTa model trained on an Ancient Greek corpus of about 900 MB that was scraped from the web and post-processed. Duplicate texts and editorial punctuation were removed. The training dataset will soon be available in the Hugging Face datasets hub. Training a model for Ancient Greek is challenging because it is a low-resource language, and roughly 50% of its surviving register exists only in fragmentary texts. The model is provided by the Diogenet project at the University of California, San Diego.
It achieves the following results on the evaluation set:
- Loss: 3.3351
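A minimal fill-mask sketch reusing one of the widget examples from this card:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Jacobo/axiothea")

# One of the widget examples above; RoBERTa-style models use the <mask> token.
print(fill("ὁ Κριτίας ἀπέβλεψε <mask> τὴν θύραν."))
```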
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7013 | 1.0 | 341422 | 4.8813 |
| 4.2866 | 2.0 | 682844 | 4.4422 |
| 4.0496 | 3.0 | 1024266 | 4.2132 |
| 3.8503 | 4.0 | 1365688 | 4.0246 |
| 3.6917 | 5.0 | 1707110 | 3.8756 |
| 3.4917 | 6.0 | 2048532 | 3.7381 |
| 3.3907 | 7.0 | 2389954 | 3.6107 |
| 3.2876 | 8.0 | 2731376 | 3.5044 |
| 3.1994 | 9.0 | 3072798 | 3.3980 |
| 3.0806 | 10.0 | 3414220 | 3.3095 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
nateraw/my-cool-timm-model-2
|
nateraw
| 2021-11-15T20:06:24Z | 4 | 0 |
timm
|
[
"timm",
"pytorch",
"tensorboard",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- timm
- generated_from_trainer
library_tag: timm
datasets:
- cats_vs_dogs
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-cool-timm-model-2
This model is a fine-tuned version of [resnet18](https://huggingface.co/resnet18) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2510
- Acc1: 95.2150
- Acc5: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc1 | Acc5 |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-----:|
| No log | 0.07 | 5 | 0.3436 | 92.0820 | 100.0 |
| 0.4914 | 0.14 | 10 | 0.2510 | 95.2150 | 100.0 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
vkk1710/xlnet-base-cased-finetuned-qqp
|
vkk1710
| 2021-11-15T19:25:06Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: xlnet-base-cased-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-finetuned-qqp
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the QQP dataset (part of the GLUE benchmark).
It achieves the following results on the evaluation set:
- eval_loss: 0.27
- eval_accuracy: 0.9084
- eval_f1: 0.8775
- epoch: 3
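A hedged sentence-pair sketch; the question pair is illustrative, and the label order should be confirmed in the checkpoint's config.json:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "vkk1710/xlnet-base-cased-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative question pair, not taken from QQP.
inputs = tokenizer("How do I learn Python?", "What is the best way to learn Python?", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# For QQP the positive class is usually "duplicate"; verify via the id2label mapping.
print(probs)
```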
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
huyue012/wav2vec2-base-cynthia-timit
|
huyue012
| 2021-11-15T17:29:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-cynthia-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cynthia-timit
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Wer: 0.3315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.7674 | 1.0 | 500 | 2.8994 | 1.0 |
| 1.3538 | 2.01 | 1000 | 0.5623 | 0.5630 |
| 0.5416 | 3.01 | 1500 | 0.4595 | 0.4765 |
| 0.3563 | 4.02 | 2000 | 0.4435 | 0.4328 |
| 0.2869 | 5.02 | 2500 | 0.4035 | 0.4145 |
| 0.2536 | 6.02 | 3000 | 0.4090 | 0.3945 |
| 0.2072 | 7.03 | 3500 | 0.4188 | 0.3809 |
| 0.1825 | 8.03 | 4000 | 0.4139 | 0.3865 |
| 0.1754 | 9.04 | 4500 | 0.4320 | 0.3763 |
| 0.1477 | 10.04 | 5000 | 0.4668 | 0.3699 |
| 0.1418 | 11.04 | 5500 | 0.4439 | 0.3683 |
| 0.1207 | 12.05 | 6000 | 0.4419 | 0.3678 |
| 0.115 | 13.05 | 6500 | 0.4606 | 0.3786 |
| 0.1022 | 14.06 | 7000 | 0.4403 | 0.3610 |
| 0.1019 | 15.06 | 7500 | 0.4966 | 0.3609 |
| 0.0898 | 16.06 | 8000 | 0.4675 | 0.3586 |
| 0.0824 | 17.07 | 8500 | 0.4844 | 0.3583 |
| 0.0737 | 18.07 | 9000 | 0.4801 | 0.3534 |
| 0.076 | 19.08 | 9500 | 0.4945 | 0.3529 |
| 0.0627 | 20.08 | 10000 | 0.4700 | 0.3417 |
| 0.0723 | 21.08 | 10500 | 0.4630 | 0.3449 |
| 0.0597 | 22.09 | 11000 | 0.5164 | 0.3456 |
| 0.0566 | 23.09 | 11500 | 0.4957 | 0.3401 |
| 0.0453 | 24.1 | 12000 | 0.5032 | 0.3419 |
| 0.0492 | 25.1 | 12500 | 0.5391 | 0.3387 |
| 0.0524 | 26.1 | 13000 | 0.5057 | 0.3348 |
| 0.0381 | 27.11 | 13500 | 0.5098 | 0.3331 |
| 0.0402 | 28.11 | 14000 | 0.5087 | 0.3353 |
| 0.0358 | 29.12 | 14500 | 0.4888 | 0.3315 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
lidiia/autonlp-trans_class_arg-32957902
|
lidiia
| 2021-11-15T16:48:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:lidiia/autonlp-data-trans_class_arg",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- lidiia/autonlp-data-trans_class_arg
co2_eq_emissions: 0.9756221672668951
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 32957902
- CO2 Emissions (in grams): 0.9756221672668951
## Validation Metrics
- Loss: 0.2765039801597595
- Accuracy: 0.8939828080229226
- Precision: 0.7757009345794392
- Recall: 0.8645833333333334
- AUC: 0.9552659749670619
- F1: 0.8177339901477833
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/lidiia/autonlp-trans_class_arg-32957902
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lidiia/autonlp-trans_class_arg-32957902", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
moussaKam/barthez-orangesum-abstract
|
moussaKam
| 2021-11-15T13:03:03Z | 2,038 | 7 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"summarization",
"bart",
"fr",
"arxiv:2010.12321",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
tags:
- summarization
- bart
language:
- fr
license: apache-2.0
widget:
- text: Citant les préoccupations de ses clients dénonçant des cas de censure après la suppression du compte de Trump, un fournisseur d'accès Internet de l'État de l'Idaho a décidé de bloquer Facebook et Twitter. La mesure ne concernera cependant que les clients mécontents de la politique de ces réseaux sociaux.
---
### BARThez model fine-tuned on OrangeSum (abstract generation)
finetuning: examples/seq2seq (as of Feb 08 2021)
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
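A minimal summarization sketch using the widget text from this card (generation settings are arbitrary):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="moussaKam/barthez-orangesum-abstract")

# Widget text from this card.
text = (
    "Citant les préoccupations de ses clients dénonçant des cas de censure après la suppression du "
    "compte de Trump, un fournisseur d'accès Internet de l'État de l'Idaho a décidé de bloquer "
    "Facebook et Twitter. La mesure ne concernera cependant que les clients mécontents de la "
    "politique de ces réseaux sociaux."
)
print(summarizer(text, max_length=50))  # max_length is an arbitrary choice
```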
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
moussaKam/barthez-sentiment-classification
|
moussaKam
| 2021-11-15T13:02:33Z | 16 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text-classification",
"bart",
"fr",
"arxiv:2010.12321",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- text-classification
- bart
language:
- fr
license: apache-2.0
widget:
- text: Barthez est le meilleur gardien du monde.
---
### BARThez model fine-tuned on an opinion classification task.
paper: https://arxiv.org/abs/2010.12321 \
github: https://github.com/moussaKam/BARThez
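A minimal classification sketch using the widget sentence from this card:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="moussaKam/barthez-sentiment-classification")

# Widget sentence from this card; label names depend on the saved id2label mapping.
print(classifier("Barthez est le meilleur gardien du monde."))
```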
```
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
|
AdapterHub/roberta-base-pf-stsb
|
AdapterHub
| 2021-11-15T10:43:55Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-classification",
"roberta",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- text-classification
- roberta
- adapterhub:sts/sts-b
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-stsb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-stsb", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-pmb_sem_tagging
|
AdapterHub
| 2021-11-15T10:40:53Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
tags:
- token-classification
- roberta
- adapterhub:semtag/pmb
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-pmb_sem_tagging` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-mit_movie_trivia
|
AdapterHub
| 2021-11-15T10:40:13Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:ner/mit_movie_trivia",
"en",
"arxiv:2104.08247",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
tags:
- token-classification
- roberta
- adapterhub:ner/mit_movie_trivia
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-mit_movie_trivia` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-mit_movie_trivia", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-hotpotqa
|
AdapterHub
| 2021-11-15T10:39:56Z | 3 | 2 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"roberta",
"en",
"dataset:hotpot_qa",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- roberta
- adapter-transformers
datasets:
- hotpot_qa
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-hotpotqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-hotpotqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-fce_error_detection
|
AdapterHub
| 2021-11-15T10:39:41Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"token-classification",
"roberta",
"adapterhub:ged/fce",
"en",
"dataset:fce_error_detection",
"arxiv:2104.08247",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
tags:
- token-classification
- roberta
- adapterhub:ged/fce
- adapter-transformers
datasets:
- fce_error_detection
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-fce_error_detection` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name
```
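Token-level predictions can then be obtained by running a sentence through the adapter-equipped model and taking the argmax over the tagging logits. The snippet below is a minimal sketch that reuses `model` from above; the mapping from label ids to tags (e.g. correct vs. erroneous token) is defined by the loaded prediction head:
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("I likes turtles .", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One prediction per sub-word token.
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(token, pred)
```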
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-duorc_s
|
AdapterHub
| 2021-11-15T10:38:32Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"roberta",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- roberta
- adapter-transformers
datasets:
- duorc
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-duorc_s` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-duorc_s", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/roberta-base-pf-cq
|
AdapterHub
| 2021-11-15T10:38:07Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"roberta",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- roberta
- adapterhub:qa/cq
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/roberta-base-pf-cq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("roberta-base")
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-cq", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-stsb
|
AdapterHub
| 2021-11-15T10:35:40Z | 10 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- text-classification
- bert
- adapterhub:sts/sts-b
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-stsb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-stsb", source="hf")
model.active_adapters = adapter_name
```
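A sentence pair can then be scored by running it through the adapter-equipped model. The snippet below is a minimal sketch that reuses `model` from above; since STS-B is a regression task, the head is expected to output a single similarity score (roughly on the dataset's 0 to 5 scale):
```python
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# A single regression output interpreted as the semantic similarity score.
print(logits.squeeze().item())
```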
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-squad_v2
|
AdapterHub
| 2021-11-15T10:35:24Z | 6 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"bert",
"adapterhub:qa/squad2",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- bert
- adapterhub:qa/squad2
- adapter-transformers
datasets:
- squad_v2
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad_v2", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-squad
|
AdapterHub
| 2021-11-15T10:35:16Z | 1 | 2 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"bert",
"adapterhub:qa/squad1",
"en",
"dataset:squad",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- bert
- adapterhub:qa/squad1
- adapter-transformers
datasets:
- squad
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-squad", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-rte
|
AdapterHub
| 2021-11-15T10:34:33Z | 6 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:nli/rte",
"en",
"arxiv:2104.08247",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- text-classification
- bert
- adapterhub:nli/rte
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-rte` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-rte", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-record
|
AdapterHub
| 2021-11-15T10:34:16Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- text-classification
- bert
- adapterhub:rc/record
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-record", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-qnli
|
AdapterHub
| 2021-11-15T10:33:41Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text-classification",
"bert",
"adapterhub:nli/qnli",
"en",
"arxiv:2104.08247",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- text-classification
- bert
- adapterhub:nli/qnli
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-qnli", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-pmb_sem_tagging
|
AdapterHub
| 2021-11-15T10:33:33Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
tags:
- token-classification
- bert
- adapterhub:semtag/pmb
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-pmb_sem_tagging` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-pmb_sem_tagging", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-mit_movie_trivia
|
AdapterHub
| 2021-11-15T10:32:53Z | 7 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:ner/mit_movie_trivia",
"en",
"arxiv:2104.08247",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
tags:
- token-classification
- bert
- adapterhub:ner/mit_movie_trivia
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-mit_movie_trivia` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-mit_movie_trivia", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-fce_error_detection
|
AdapterHub
| 2021-11-15T10:32:25Z | 1 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"token-classification",
"bert",
"adapterhub:ged/fce",
"en",
"dataset:fce_error_detection",
"arxiv:2104.08247",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
tags:
- token-classification
- bert
- adapterhub:ged/fce
- adapter-transformers
datasets:
- fce_error_detection
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-fce_error_detection` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-fce_error_detection", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-duorc_s
|
AdapterHub
| 2021-11-15T10:32:00Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"bert",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- bert
- adapter-transformers
datasets:
- duorc
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_s` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_s", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-duorc_p
|
AdapterHub
| 2021-11-15T10:31:52Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"bert",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- bert
- adapter-transformers
datasets:
- duorc
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_p` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-duorc_p", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-cq
|
AdapterHub
| 2021-11-15T10:31:36Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"bert",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- bert
- adapterhub:qa/cq
- adapter-transformers
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-cq", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
AdapterHub/bert-base-uncased-pf-comqa
|
AdapterHub
| 2021-11-15T10:30:57Z | 7 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"question-answering",
"bert",
"en",
"dataset:com_qa",
"arxiv:2104.08247",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
tags:
- question-answering
- bert
- adapter-transformers
datasets:
- com_qa
language:
- en
---
# Adapter `AdapterHub/bert-base-uncased-pf-comqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("AdapterHub/bert-base-uncased-pf-comqa", source="hf")
model.active_adapters = adapter_name
```
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{"u}ckl{'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
```
|
DeepPavlov/xlm-roberta-large-en-ru-mnli
|
DeepPavlov
| 2021-11-15T08:49:43Z | 137 | 2 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"xlm-roberta-large",
"xlm-roberta-large-en-ru",
"xlm-roberta-large-en-ru-mnli",
"en",
"ru",
"dataset:glue",
"dataset:mnli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- en
- ru
datasets:
- glue
- mnli
model_index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
tags:
- xlm-roberta
- xlm-roberta-large
- xlm-roberta-large-en-ru
- xlm-roberta-large-en-ru-mnli
widget:
- text: "Люблю тебя. Ненавижу тебя"
- text: "I love you. I hate you"
---
# XLM-RoBERTa-Large-En-Ru-MNLI
This model is `xlm-roberta-large-en-ru` fine-tuned on the MNLI dataset for natural language inference in English and Russian.
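A minimal usage sketch (assuming the standard `transformers` sequence-classification API; the label names follow the model's `config.id2label`):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "DeepPavlov/xlm-roberta-large-en-ru-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# NLI takes a premise/hypothesis pair; the model handles English and Russian.
inputs = tokenizer("I love you.", "I hate you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```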
|
ComCom/gpt2-large
|
ComCom
| 2021-11-15T07:26:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:04Z |
This model was taken from [this page](https://huggingface.co/gpt2-medium).
It is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
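A minimal feature-extraction sketch (assuming the standard `transformers` API; the checkpoint name is taken from this repository):
```python
from transformers import AutoTokenizer, AutoModel
import torch

model_name = "ComCom/gpt2-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Use the last hidden layer as token-level features.
inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state

print(features.shape)  # (batch_size, sequence_length, hidden_size)
```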
|
phailyoor/distilbert-base-uncased-finetuned-yahd-twval-hptune
|
phailyoor
| 2021-11-15T02:50:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-yahd-twval-hptune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-twval-hptune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3727
- Accuracy: 0.2039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1638 | 1.0 | 10106 | 2.1944 | 0.3646 |
| 1.7982 | 2.0 | 20212 | 2.6390 | 0.3333 |
| 1.3279 | 3.0 | 30318 | 3.1526 | 0.3095 |
| 0.8637 | 4.0 | 40424 | 4.8368 | 0.2470 |
| 0.5727 | 5.0 | 50530 | 6.3727 | 0.2039 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
sciarrilli/distilbert-base-uncased-cola
|
sciarrilli
| 2021-11-15T02:21:53Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5301312348234369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2715
- Matthews Correlation: 0.5301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
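For reference, these settings correspond roughly to the following `TrainingArguments` (a sketch only; the actual training script, data loading, and preprocessing are not part of this card):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
# The Adam betas/epsilon in the list are the library defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-cola",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```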
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5216 | 1.0 | 535 | 0.5124 | 0.4104 |
| 0.3456 | 2.0 | 1070 | 0.5700 | 0.4692 |
| 0.2362 | 3.0 | 1605 | 0.7277 | 0.4844 |
| 0.1818 | 4.0 | 2140 | 0.7553 | 0.5007 |
| 0.1509 | 5.0 | 2675 | 0.9406 | 0.4987 |
| 0.1017 | 6.0 | 3210 | 0.9475 | 0.5387 |
| 0.0854 | 7.0 | 3745 | 1.0933 | 0.5317 |
| 0.051 | 8.0 | 4280 | 1.1719 | 0.5358 |
| 0.0512 | 9.0 | 4815 | 1.2296 | 0.5321 |
| 0.0308 | 10.0 | 5350 | 1.2715 | 0.5301 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
life4free96/DialogGPT-med-TeiaMoranta3
|
life4free96
| 2021-11-14T20:06:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
|
patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft
|
patrickvonplaten
| 2021-11-14T16:47:13Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"xls_r_repro_common_voice_tr",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
- xls_r_repro_common_voice_tr
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-common_voice-tr-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-common_voice-tr-ft
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4231
- Wer: 0.3104
- Cer: 0.0737
## Model description
More information needed
## Intended uses & limitations
More information needed
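A minimal transcription sketch (assuming the standard `transformers` ASR pipeline and a 16 kHz mono recording; `sample_tr.wav` is a hypothetical local file):
```python
from transformers import pipeline

# Turkish speech recognition with this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-large-xlsr-53-common_voice-tr-ft",
)
print(asr("sample_tr.wav")["text"])  # hypothetical local audio file
```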
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
See the *Training Metrics* tab.
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
Harshveer/autonlp-formality_scoring_2-32597818
|
Harshveer
| 2021-11-14T06:46:39Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:Harshveer/autonlp-data-formality_scoring_2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Harshveer/autonlp-data-formality_scoring_2
co2_eq_emissions: 8.655894631203154
---
# Model Trained Using AutoNLP
- Problem type: Single Column Regression
- Model ID: 32597818
- CO2 Emissions (in grams): 8.655894631203154
## Validation Metrics
- Loss: 0.5410276651382446
- MSE: 0.5410276651382446
- MAE: 0.5694561004638672
- R2: 0.6830431129198475
- RMSE: 0.735545814037323
- Explained Variance: 0.6834385395050049
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Harshveer/autonlp-formality_scoring_2-32597818
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|