repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ssavla2/bert-finetuned-ner | ssavla2 | bert | 8 | 12 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,423 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ssavla2/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0243
- Validation Loss: 0.0603
- Epoch: 2
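The card ships no usage snippet; below is a minimal inference sketch (the pipeline task follows the repo's token-classification tag, and the example sentence is illustrative):
```python
from transformers import pipeline

# the repo publishes TensorFlow weights only, so request the TF backend explicitly
ner = pipeline("token-classification", model="ssavla2/bert-finetuned-ner", framework="tf")
print(ner("My name is Clara and I live in Berkeley, California."))
```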
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1199 | 0.0570 | 0 |
| 0.0399 | 0.0586 | 1 |
| 0.0243 | 0.0603 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 9dce69c8b27b40648f82326e33d74c1d |
Rocketknight1/europython-imdb | Rocketknight1 | deberta-v2 | 8 | 6 | transformers | 0 | text-classification | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,407 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# europython-imdb
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1279
- Train Accuracy: 0.9548
- Validation Loss: 0.1595
- Validation Accuracy: 0.9418
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2073 | 0.9203 | 0.1486 | 0.9435 | 0 |
| 0.1279 | 0.9548 | 0.1595 | 0.9418 | 1 |
### Framework versions
- Transformers 4.21.0.dev0
- TensorFlow 2.9.1
- Datasets 2.3.3.dev0
- Tokenizers 0.11.0
| 77036b012803aa0cbdfad8b1a1213746 |
fathyshalab/all-roberta-large-v1-auto_and_commute-1-16-5 | fathyshalab | roberta | 11 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,521 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-auto_and_commute-1-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| d012167f14011551d585e534a4dcf1fe |
Helsinki-NLP/opus-mt-fi-sg | Helsinki-NLP | marian | 10 | 8 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-fi-sg
* source languages: fi
* target languages: sg
* OPUS readme: [fi-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-sg/opus-2020-01-24.eval.txt)
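For reference, a minimal inference sketch using the standard Marian classes (the Finnish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-sg"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate one Finnish sentence to Sango
batch = tokenizer(["Hyvää huomenta!"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```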
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.sg | 29.3 | 0.480 |
| d6e6221869e97d95e42a5e4480072bbd |
andresca94/t5-small-finetuned-en-to-es | andresca94 | t5 | 12 | 2 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,266 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-es
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8937
- Bleu: 7.4133
- Gen Len: 15.9653
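A minimal inference sketch; the task prefix used during fine-tuning is not stated in the card, so the `translate English to Spanish:` prefix below is an assumption based on the usual T5 convention:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("andresca94/t5-small-finetuned-en-to-es")
model = AutoModelForSeq2SeqLM.from_pretrained("andresca94/t5-small-finetuned-en-to-es")

# assumed task prefix; adjust if the model was fine-tuned without one
inputs = tokenizer("translate English to Spanish: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```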
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.27 | 1.0 | 7061 | 1.8937 | 7.4133 | 15.9653 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 95506c960a47b10299bd5b12b804d47c |
gokuls/bert-tiny-emotion-KD-BERT | gokuls | bert | 13 | 6 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,036 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-emotion-KD-BERT
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4810
- Accuracy: 0.9175
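A minimal sketch of loading the distilled classifier for inference (assuming the checkpoint ships its fine-tuned classification head and emotion label names):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gokuls/bert-tiny-emotion-KD-BERT")
print(classifier("I am so happy today!"))  # e.g. [{'label': 'joy', 'score': ...}]
```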
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.8247 | 1.0 | 1000 | 2.5170 | 0.7745 |
| 1.9864 | 2.0 | 2000 | 1.3436 | 0.874 |
| 1.1126 | 3.0 | 3000 | 0.8299 | 0.894 |
| 0.6924 | 4.0 | 4000 | 0.6500 | 0.9025 |
| 0.5272 | 5.0 | 5000 | 0.6097 | 0.908 |
| 0.4298 | 6.0 | 6000 | 0.5913 | 0.904 |
| 0.3936 | 7.0 | 7000 | 0.5165 | 0.9135 |
| 0.3238 | 8.0 | 8000 | 0.5120 | 0.9075 |
| 0.3018 | 9.0 | 9000 | 0.4989 | 0.916 |
| 0.2605 | 10.0 | 10000 | 0.4810 | 0.9175 |
| 0.2512 | 11.0 | 11000 | 0.4757 | 0.9135 |
| 0.219 | 12.0 | 12000 | 0.4676 | 0.914 |
| 0.2046 | 13.0 | 13000 | 0.4794 | 0.911 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| b081f0792437796d54cfc6fe9f4e0e9b |
nandysoham/12-clustered | nandysoham | distilbert | 8 | 13 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,074 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/12-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6856
- Train End Logits Accuracy: 0.8145
- Train Start Logits Accuracy: 0.7542
- Validation Loss: 0.8791
- Validation End Logits Accuracy: 0.7585
- Validation Start Logits Accuracy: 0.7096
- Epoch: 1
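A minimal extractive-QA sketch (the question/context pair is illustrative; the repo ships TensorFlow weights, so the TF backend is requested):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="nandysoham/12-clustered", framework="tf")
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris."))
```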
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 632, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.9975 | 0.7354 | 0.6632 | 0.8689 | 0.7719 | 0.7048 | 0 |
| 0.6856 | 0.8145 | 0.7542 | 0.8791 | 0.7585 | 0.7096 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 56e6f3d681ee37a09bba2a5a51f34be0 |
jonatasgrosman/exp_w2v2t_es_wavlm_s115 | jonatasgrosman | wavlm | 10 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 439 | false |
# exp_w2v2t_es_wavlm_s115
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
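A minimal transcription sketch via HuggingSound (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_es_wavlm_s115")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```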
| 344dfb4da504ed7b6ee6f5f2a0e69db6 |
jordyvl/bert-base-cased_conll2003-sm-all-ner | jordyvl | bert | 13 | 74 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,574 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased_conll2003-sm-all-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0489
- Precision: 0.9487
- Recall: 0.9564
- F1: 0.9526
- Accuracy: 0.9916
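A minimal inference sketch, assuming the checkpoint's config carries the standard CoNLL-2003 label names:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jordyvl/bert-base-cased_conll2003-sm-all-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```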
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.052 | 1.0 | 3511 | 0.0510 | 0.9374 | 0.9456 | 0.9415 | 0.9898 |
| 0.0213 | 2.0 | 7022 | 0.0497 | 0.9484 | 0.9519 | 0.9501 | 0.9911 |
| 0.0099 | 3.0 | 10533 | 0.0489 | 0.9487 | 0.9564 | 0.9526 | 0.9916 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
| 0f51dbf22e02929af11da7d6876b7091 |
sd-dreambooth-library/dtv-pkmn-monster-style | sd-dreambooth-library | null | 20 | 22 | diffusers | 4 | null | false | false | false | mit | null | null | null | 3 | 3 | 0 | 0 | 1 | 0 | 1 | [] | false | true | true | 1,727 | false |
### JRPG Monster art style via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by wooshim
This is the Stable Diffusion model fine-tuned on the dtv_pkmn_monster_style concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **image**
Please use **"feralplmr"** in your prompt to trigger the style.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
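Alternatively, a minimal local `diffusers` sketch (the prompt is illustrative; a CUDA GPU is assumed, and ~40 steps follow the suggestion below):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/dtv-pkmn-monster-style", torch_dtype=torch.float16
).to("cuda")

# "feralplmr" is the trigger token for this style
image = pipe("a dragon monster in feralplmr style", num_inference_steps=40).images[0]
image.save("monster.png")
```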
Sample pictures of this concept:
Low steps (~40) and Euler_a are highly suggested.
Here is a prompt to try:
((wings)), dragon, bowser, creature, ((monster)), ((dinosaur)) intricate large dragon , ((rathalos)), detailed artwork in ((feralplmr artsyle)), feral, fullbody, monster character, scales, reptile, dragon, claws, wings, ((detailed))

| a120a9e4312342bba01867e5349c6c54 |
edumunozsala/vit_base-224-in21k-ft-cifar10 | edumunozsala | vit | 6 | 9 | transformers | 0 | image-classification | true | false | false | apache-2.0 | ['es'] | ['cifar10'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['sagemaker', 'vit', 'ImageClassification', 'generated_from_trainer'] | true | true | true | 2,819 | false |
# Model vit_base-224-in21k-ft-cifar10
## **A finetuned model for Image classification in Spanish**
This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container.
The base model is **Vision Transformer (base-sized model)**, a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. [Link to base model](https://huggingface.co/google/vit-base-patch16-224-in21k)
## Base model citation
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Dataset
[Link to dataset description](http://www.cs.toronto.edu/~kriz/cifar.html)
The CIFAR-10 and CIFAR-100 are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Sizes of datasets:
- Train dataset: 50,000
- Test dataset: 10,000
## Intended uses & limitations
This model is intended for image classification.
## Hyperparameters
```json
{
  "epochs": "5",
  "train_batch_size": "32",
  "eval_batch_size": "8",
  "fp16": "true",
  "learning_rate": "1e-05"
}
```
## Test results
- Accuracy = 0.97
## Model in action
### Usage for Image Classification
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
# load the fine-tuned checkpoint with its classification head (ViTModel would drop it)
model = ViTForImageClassification.from_pretrained('edumunozsala/vit_base-224-in21k-ft-cifar10')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# the CIFAR-10 class with the highest logit is the prediction
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
| 99bde6a5085c741fa9f43e907d0f64f5 |
ftshijt/ESPnet2_pretrained_model_ftshijt_thchs30_tts_train_raw_phn_pypinyin_g2p_phone_train.loss.best | ftshijt | null | 3 | 0 | espnet | 0 | text-to-speech | false | false | false | cc-by-4.0 | ['zh'] | ['thchs30'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'text-to-speech'] | false | true | true | 5,500 | false |
This model was trained by ftshijt using the thchs30/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
- **Python API**: see https://github.com/espnet/espnet_model_zoo
- **Evaluate in the recipe**: please see ESPnet for how to use the pre-trained model
- **Config**:
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 500
batch_size: 20
valid_batch_size: null
batch_bins: 3750000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/tts_stats_raw_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/text
- text
- text
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/xvector/train/xvector.scp
- spembs
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/xvector/dev/xvector.scp
- spembs
- kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ''
- d
- sh
- j
- zh
- l
- i4
- x
- b
- g
- h
- e
- q
- t
- m
- ch
- i1
- z
- u4
- i2
- i3
- n
- f
- s
- r
- k
- c
- p
- ai4
- e4
- a1
- an4
- ian4
- ing2
- u3
- ian2
- ong1
- e2
- in1
- eng2
- ui4
- ao4
- u2
- iao4
- üan2
- en2
- an1
- u1
- ai2
- ao3
- ing4
- eng1
- iou3
- ü4
- uo4
- üe4
- ong2
- ian1
- ing1
- uo3
- ie4
- ang1
- uei4
- ang4
- an2
- a4
- ou4
- ei4
- uai4
- ie3
- ang3
- ong4
- ai3
- ü2
- uo2
- an3
- ang2
- ou3
- er2
- ou1
- uo1
- en1
- ia1
- ü3
- uan1
- in2
- iong4
- ian3
- iang3
- a3
- iang2
- ia4
- ü1
- uan4
- iao3
- iang4
- uen2
- iang1
- uan3
- ai1
- ie2
- ei3
- uan2
- uang2
- in4
- üe2
- ao1
- eng3
- iu4
- iao1
- er4
- iu2
- in3
- un1
- uang1
- eng4
- a2
- uang3
- en3
- uang4
- ong3
- ing3
- e3
- ei2
- ou2
- ao2
- i
- ün4
- uei2
- ua4
- iou4
- ui1
- ua1
- en4
- ün2
- iao2
- ie1
- iou2
- iu3
- ün1
- üan4
- en
- ei1
- o2
- un4
- ui3
- iu1
- üan3
- e1
- v3
- ua2
- ia2
- ui2
- un2
- o4
- un3
- er3
- ia3
- iong1
- uei3
- o1
- üe1
- üan1
- iong3
- v4
- iong2
- uen4
- uai2
- uei1
- iou1
- a
- ua3
- uen1
- o3
- ueng1
- uai1
- uen3
- üe3
- ou
- uai3
- ve4
- er
- ün3
- o
- ua
- ia
- ' l ='
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 16000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_pypinyin_g2p_phone/train/feats_stats.npz
tts: tacotron2
tts_conf:
embed_dim: 512
elayers: 1
eunits: 512
econv_layers: 3
econv_chans: 512
econv_filts: 5
atype: location
adim: 512
aconv_chans: 32
aconv_filts: 15
cumulate_att_w: true
dlayers: 2
dunits: 1024
prenet_layers: 2
prenet_units: 256
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
output_activation: null
use_batch_norm: true
use_concate: true
use_residual: false
spk_embed_dim: 512
spk_embed_integration_type: add
use_gst: true
gst_heads: 4
gst_tokens: 16
dropout_rate: 0.5
zoneout_rate: 0.1
reduction_factor: 1
use_masking: true
bce_pos_weight: 10.0
use_guided_attn_loss: true
guided_attn_loss_sigma: 0.4
guided_attn_loss_lambda: 1.0
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: false
```
| 35b30b83e2ad77d174d0b6f748f1fbc9 |
espnet/americasnlp22-asr-gn | espnet | null | 21 | 3 | espnet | 0 | automatic-speech-recognition | false | false | false | cc-by-4.0 | ['gn'] | ['americasnlp22'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | true | true | 7,877 | false |
## ESPnet2 ASR model
### `espnet/americasnlp22-asr-gn`
This model was trained by Pavel Denisov using americasnlp22 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout fc62b1ce3e50c5ef8a2ac8cedb0d92ac41df54ca
pip install -e .
cd egs2/americasnlp22/asr1
./run.sh \
--skip_data_prep false \
--skip_train true \
--download_model espnet/americasnlp22-asr-gn \
--lang gn \
--local_data_opts "--lang gn" \
--train_set train_gn \
--valid_set dev_gn \
--test_sets dev_gn \
--gpu_inference false \
--inference_nj 8 \
--lm_train_text data/train_gn/text \
--bpe_train_text data/train_gn/text
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Jun 5 12:17:58 CEST 2022`
- python version: `3.9.13 (main, May 18 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0+cu115`
- Git hash: `d55704daa36d3dd2ca24ae3162ac40d81957208c`
- Commit date: `Wed Jun 1 02:33:09 2022 +0200`
## asr_train_asr_transformer_raw_gn_bpe100_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gn|93|391|11.5|73.7|14.8|12.5|101.0|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gn|93|2946|83.4|7.9|8.7|8.7|25.3|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.cer_ctc.best/dev_gn|93|2439|76.6|13.5|9.9|8.7|32.1|100.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_gn_bpe100_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- cer_ctc
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream.model.feature_extractor
- frontend.upstream.model.encoder.layers.0
- frontend.upstream.model.encoder.layers.1
- frontend.upstream.model.encoder.layers.2
- frontend.upstream.model.encoder.layers.3
- frontend.upstream.model.encoder.layers.4
- frontend.upstream.model.encoder.layers.5
- frontend.upstream.model.encoder.layers.6
- frontend.upstream.model.encoder.layers.7
- frontend.upstream.model.encoder.layers.8
- frontend.upstream.model.encoder.layers.9
- frontend.upstream.model.encoder.layers.10
- frontend.upstream.model.encoder.layers.11
- frontend.upstream.model.encoder.layers.12
- frontend.upstream.model.encoder.layers.13
- frontend.upstream.model.encoder.layers.14
- frontend.upstream.model.encoder.layers.15
- frontend.upstream.model.encoder.layers.16
- frontend.upstream.model.encoder.layers.17
- frontend.upstream.model.encoder.layers.18
- frontend.upstream.model.encoder.layers.19
- frontend.upstream.model.encoder.layers.20
- frontend.upstream.model.encoder.layers.21
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 200000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_gn_bpe100_sp/train/speech_shape
- exp/asr_stats_raw_gn_bpe100_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_gn_bpe100_sp/valid/speech_shape
- exp/asr_stats_raw_gn_bpe100_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_gn_sp/wav.scp
- speech
- sound
- - dump/raw/train_gn_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_gn/wav.scp
- speech
- sound
- - dump/raw/dev_gn/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 300
token_list:
- <blank>
- <unk>
- ▁
- a
- i
- e
- o
- ''''
- .
- u
- '"'
- p
- r
- n
- y
- h
- ▁"
- ▁o
- é
- re
- va
- pe
- s
- ra
- á
- he
- t
- mb
- g
- ka
- ã
- v
- ve
- je
- ▁ha
- te
- k
- ñ
- ha
- py
- ta
- ku
- ẽ
- ja
- pa
- O
- mi
- ó
- mo
- j
- ko
- ʼ
- ña
- me
- ma
- c
- M
- í
- H
- ú
- A
- ̃
- õ
- ý
- m
- P
- U
- ','
- ũ
- l
- ỹ
- N
- ĩ
- E
- I
- J
- L
- Á
- V
- S
- z
- '-'
- '?'
- Ñ
- R
- G
- Y
- T
- K
- C
- d
- “
- B
- ’
- ”
- D
- b
- f
- q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/gn_token_list/bpe_unigram100/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_url
upstream_ckpt: https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 1.0
lsm_weight: 0.0
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d2
num_blocks: 1
linear_units: 2048
dropout_rate: 0.2
output_size: 256
attention_heads: 8
attention_dropout_rate: 0.2
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf: {}
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 0267fefdebc39b68e9af0b4bb6f877b6 |
DOOGLAK/Tagged_Uni_100v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['tagged_uni100v3_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,565 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_100v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4884
- Precision: 0.2764
- Recall: 0.1080
- F1: 0.1553
- Accuracy: 0.8106
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 26 | 0.6238 | 0.2 | 0.0089 | 0.0170 | 0.7822 |
| No log | 2.0 | 52 | 0.5210 | 0.2497 | 0.0587 | 0.0950 | 0.7971 |
| No log | 3.0 | 78 | 0.4884 | 0.2764 | 0.1080 | 0.1553 | 0.8106 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 9bb4ddb13b35b97aa1e0775700629c19 |
VioletaMG/dtu-scan114-128_50epochs | VioletaMG | null | 16 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,213 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dtu-scan114-128_50epochs
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
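Until the card is completed, here is a minimal sketch assuming a standard unconditional `diffusers` pipeline saved by the training script:
```python
from diffusers import DiffusionPipeline

# load the pipeline from the Hub and sample one image
pipeline = DiffusionPipeline.from_pretrained("VioletaMG/dtu-scan114-128_50epochs")
image = pipeline().images[0]
image.save("sample.png")
```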
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/VioletaMG/dtu-scan114-128_50epochs/tensorboard?#scalars)
| c7a4f07556eac3992d5e43c57bb11d45 |
RecordedFuture/Swedish-Sentiment-Fear-Targets | RecordedFuture | bert | 11 | 10 | transformers | 0 | token-classification | true | true | true | mit | ['sv'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,032 | false |
## Swedish BERT models for sentiment analysis, Sentiment targets.
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased); the models have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier. The model will tag the parts of sentences that contain the targets that the upstream model has activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are only trained on Swedish data and only support inference of Swedish input texts. The models' inference metrics for all non-Swedish inputs are not defined; these inputs are considered out-of-domain data.
The current models are supported at Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
### Fear targets
The model can be imported from the transformers library by running
```python
from transformers import BertForTokenClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361  | 0.7903    | 0.8876 |
#### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
```python
from transformers import BertForTokenClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
```
When the model and tokenizer are initialized the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831  | 0.9155    | 0.8442 |
| 0acd5e15d3db6577912717a1ac0ce4ca |
XLab/rst-gaokao-cloze-11b | XLab | t5 | 6 | 1 | transformers | 2 | text2text-generation | true | false | false | afl-3.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 11,247 | false |
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html)
#### RST is a new paradigm for language pre-training, which
* unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, Daily Mail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model,
* surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation etc)
* achieves superior performance in the National College Entrance Examination **(Gaokao-English, 高考-英语)**: it scores **40** points higher than the average score made by students and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) in the 2018 English exam.
In such a pre-training paradigm,
* Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing
* Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access.
## Model Description
We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.
| Model | Description | Recommended Application
| ----------- | ----------- |----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks,factual checker |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains|
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering|
| **rst-gaokao-cloze-11b** | **Trained with manually crafted cloze datasets** | **General cloze filling**|
| rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks |
## Have a try?
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b")
model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b")
inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
## Data for reStructure Pre-training
This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research.
We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals.
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush:
| Mine | Signal | #Sample | Use in DataLab | Some Applications |
| --- | --- | --- | --- | --- |
| [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification |
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion|
| [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing|
| [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification |
| [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning|
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection|
| [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion |
| [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning |
| [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation|
| [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition|
| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization |
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing|
| [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation |
| [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference|
|[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension|
| [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | `load_dataset("rst", "qa_logiqa")` | Reading comprehension|
| [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension |
| [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension|
| [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension|
| [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension |
| [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification|
| [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion|
| [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition|
| [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion|
## Bibtext for Citation Info
```
@article{yuan2022restructured,
title={reStructured Pre-training},
author={Yuan, Weizhe and Liu, Pengfei},
journal={arXiv preprint arXiv:2206.11147},
year={2022}
}
```
| 9d3176c09b0f311f2b81817863564b19 |
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv | theojolliffe | bart | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['scientific_papers'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,514 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1382
- Rouge1: 42.1723
- Rouge2: 15.7664
- Rougel: 24.5336
- Rougelsum: 37.7532
- Gen Len: 127.6382
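A minimal summarization sketch (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv")
print(summarizer("Paste a long scientific article here ...", max_length=128, min_length=32))
```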
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.125 | 1.0 | 67679 | 2.1382 | 42.1723 | 15.7664 | 24.5336 | 37.7532 | 127.6382 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 4ed4f77d9afce547a10bb6857c9410f1 |
facebook/wav2vec2-base-100h | facebook | wav2vec2 | 8 | 780 | transformers | 3 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['librispeech_asr'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['audio', 'automatic-speech-recognition'] | false | true | true | 3,651 | false |
# Wav2Vec2-Base-100h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model, pretrained and fine-tuned on 100 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
# define function to read in sound file
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# tokenize
input_values = processor(ds[0]["speech"], return_tensors="pt", padding="longest").input_values  # Batch size 1; "speech" was filled by map_to_array
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 6.1 | 13.5 |
|
2a166344559a2509ddc28b18ca5ba02f
|
kejian/mighty-rwr
|
kejian
|
gpt2
| 36 | 4 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['kejian/codeparrot-train-more-filter-3.3b-cleaned']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,373 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mighty-rwr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
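As a hedged sketch of sampling from the checkpoint (assuming it loads with the standard GPT-2 causal-LM classes; the custom value head noted in the config below is not needed for generation):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kejian/mighty-rwr")
model = AutoModelForCausalLM.from_pretrained("kejian/mighty-rwr")

# sample a code completion with the same decoding settings as the config below
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
output = model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.9, max_length=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```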
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 128,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'mighty-rwr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/497nsm8z
|
d902abd61464f65c6e5e402c5e270e11
|
fathyshalab/all-roberta-large-v1-small_talk-8-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,515 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-8-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
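A minimal inference sketch, assuming the standard Transformers text-classification pipeline (the example utterance is illustrative; label names come from the checkpoint config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fathyshalab/all-roberta-large-v1-small_talk-8-16-5")
print(classifier("What are you up to this weekend?"))
```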
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
e40c0c58b2f9d05e53cc690143b27d77
|
sd-dreambooth-library/kim_jung_gi_art_style
|
sd-dreambooth-library
| null | 15 | 63 |
diffusers
| 4 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 757 | false |
### kim_jung_gi_art_style Dreambooth model trained by apurik-parv with [ShivamShrirao's DreamBooth](https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth/train_dreambooth.py) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Inference phrase: **kimjugi**
The model is trained on Kim Jung Gi images for a total of 10,000 steps, with the intention of reproducing the art style faithfully.
Feel free to use it.
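As a hedged sketch of local inference with `diffusers` (the Colab notebooks above are the supported path; fp16 and a CUDA GPU are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/kim_jung_gi_art_style", torch_dtype=torch.float16
).to("cuda")

# "kimjugi" is the trained inference phrase
image = pipe("a bustling street market, kimjugi").images[0]
image.save("kimjugi_market.png")
```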
|
96a5db6e8c49094aefd0e92bfcb5eb2a
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s368
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 479 | false |
# exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s368
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
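A minimal transcription sketch with HuggingSound (the audio paths are placeholders):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s368")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
```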
|
d498183fedc7323a447436bd4a70d7e8
|
jdang/xlm-roberta-base-finetuned-panx-de
|
jdang
|
xlm-roberta
| 18 | 24 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
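A minimal NER sketch, assuming the standard token-classification pipeline (the German example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jdang/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```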
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
121d1e6365569de931323697391c337a
|
muhtasham/small-mlm-glue-mrpc-custom-tokenizer-expand-vocab
|
muhtasham
|
bert
| 12 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,684 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-mrpc-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0713
## Model description
More information needed
## Intended uses & limitations
More information needed
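A minimal masked-language-modeling sketch, assuming the standard fill-mask pipeline (the mask token is read from the custom tokenizer rather than hard-coded):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="muhtasham/small-mlm-glue-mrpc-custom-tokenizer-expand-vocab")
print(fill(f"The two sentences are {fill.tokenizer.mask_token} paraphrases."))
```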
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.327 | 1.09 | 500 | 4.6505 |
| 4.486 | 2.18 | 1000 | 4.0830 |
| 4.0801 | 3.27 | 1500 | 3.9647 |
| 3.841 | 4.36 | 2000 | 3.6616 |
| 3.5786 | 5.45 | 2500 | 3.6047 |
| 3.4159 | 6.54 | 3000 | 3.4439 |
| 3.1616 | 7.63 | 3500 | 3.3131 |
| 3.1009 | 8.71 | 4000 | 3.2167 |
| 2.9661 | 9.8 | 4500 | 3.2388 |
| 2.8417 | 10.89 | 5000 | 3.0713 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
79fe1b0a623dd49f3f324060b05431cf
|
espnet/siddhana_fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_fine-truncated-ef9dab
|
espnet
| null | 20 | 1 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['fsc_unseen']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 1,386 | false |
## ESPnet2 ASR pretrained model
### `siddhana/fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_finetune_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5655832
This model was trained by siddhana using fsc_unseen/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
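Until the official demo lands, here is a hedged sketch assuming a recent ESPnet2 release where `Speech2Text.from_pretrained` resolves Hub model tags (16kHz mono input assumed):

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/siddhana_fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_fine-truncated-ef9dab"
)

speech, rate = sf.read("utterance.wav")  # 16kHz mono assumed
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```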
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
d0dde0791de6100790a1ec1d7fa77c52
|
sd-concepts-library/leif-jones
|
sd-concepts-library
| null | 15 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,676 | false |
### leif jones on Stable Diffusion
This is the `<leif-jones>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
71059fb83998e04a310dc4d9bb060ace
|
Habana/vit
|
Habana
| null | 3 | 1,807 | null | 0 | null | false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,502 | false |
[Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU).
It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks.
Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana).
## ViT model HPU configuration
This model only contains the `GaudiConfig` file for running the [ViT](https://huggingface.co/google/vit-base-patch16-224-in21k) model on Habana's Gaudi processors (HPU).
**This model contains no model weights, only a GaudiConfig.**
This lets you specify the following options (a short loading sketch follows the list):
- `use_habana_mixed_precision`: whether to use Habana Mixed Precision (HMP)
- `hmp_opt_level`: optimization level for HMP, see [here](https://docs.habana.ai/en/latest/PyTorch/PyTorch_Mixed_Precision/PT_Mixed_Precision.html#configuration-options) for a detailed explanation
- `hmp_bf16_ops`: list of operators that should run in bf16
- `hmp_fp32_ops`: list of operators that should run in fp32
- `hmp_is_verbose`: verbosity
- `use_fused_adam`: whether to use Habana's custom AdamW implementation
- `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator
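As a hedged sketch, the configuration can be inspected with `optimum-habana`'s `GaudiConfig` (assuming a recent `optimum-habana` release):

```python
from optimum.habana import GaudiConfig

gaudi_config = GaudiConfig.from_pretrained("Habana/vit")
print(gaudi_config.use_fused_adam, gaudi_config.use_fused_clip_norm)
```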
## Usage
The model is instantiated the same way as in the Transformers library.
The only difference is that there are a few new training arguments specific to HPUs.
[Here](https://github.com/huggingface/optimum-habana/blob/main/examples/image-classification/run_image_classification.py) is an image classification example script to fine-tune a model. You can run it with ViT with the following command:
```bash
python run_image_classification.py \
--model_name_or_path google/vit-base-patch16-224-in21k \
--dataset_name cifar10 \
--output_dir /tmp/outputs/ \
--remove_unused_columns False \
--do_train \
--do_eval \
--learning_rate 2e-5 \
--num_train_epochs 5 \
--per_device_train_batch_size 64 \
--per_device_eval_batch_size 64 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337 \
--use_habana \
--use_lazy_mode \
--gaudi_config_name Habana/vit \
--throughput_warmup_steps 2
```
Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
|
80d0bb96c859df2795b2b622b2a4fbd8
|
inkittmahdi/ddpm-butterflies-128
|
inkittmahdi
| null | 13 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,233 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
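Pending the official snippet, a hedged sketch using `diffusers`' `DDPMPipeline` with a recent `diffusers` release (scheduler and sample count are left at their defaults):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("inkittmahdi/ddpm-butterflies-128")
image = pipeline().images[0]  # unconditional 128x128 butterfly sample
image.save("butterfly.png")
```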
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/inkittmahdi/ddpm-butterflies-128/tensorboard?#scalars)
|
b62dec5f56143720ea8f4b85e5a1430a
|
Matthijs/vit-base-patch16-224
|
Matthijs
| null | 5 | 0 | null | 0 |
image-classification
| false | false | false |
apache-2.0
| null |
['imagenet', 'imagenet-21k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-classification']
| false | true | true | 2,047 | false |
# Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him.
This repo contains a Core ML version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224).
## Usage instructions
Create a `VNCoreMLRequest` that loads the ViT model:
```swift
import CoreML
import Vision
lazy var classificationRequest: VNCoreMLRequest = {
do {
let config = MLModelConfiguration()
config.computeUnits = .all
let coreMLModel = try ViT(configuration: config)
let visionModel = try VNCoreMLModel(for: coreMLModel.model)
let request = VNCoreMLRequest(model: visionModel, completionHandler: { [weak self] request, error in
if let results = request.results as? [VNClassificationObservation] {
/* do something with the results */
}
})
request.imageCropAndScaleOption = .centerCrop
return request
} catch {
fatalError("Failed to create VNCoreMLModel: \(error)")
}
}()
```
Perform the request:
```swift
func classify(image: UIImage) {
guard let ciImage = CIImage(image: image) else {
print("Unable to create CIImage")
return
}
DispatchQueue.global(qos: .userInitiated).async {
let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up)
do {
try handler.perform([self.classificationRequest])
} catch {
print("Failed to perform classification: \(error)")
}
}
}
```
|
a83a301006877ce382cbf7e6e0853763
|
Laurie/billsum_t5_model
|
Laurie
|
t5
| 14 | 9 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['billsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,692 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_t5_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5045
- Rouge1: 0.1393
- Rouge2: 0.0511
- Rougel: 0.117
- Rougelsum: 0.1171
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
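A minimal summarization sketch, assuming the standard pipeline (the `summarize:` prefix used by T5 is added explicitly; the input text is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Laurie/billsum_t5_model")
text = "summarize: The bill amends the Internal Revenue Code to extend the renewable energy production credit..."
print(summarizer(text, max_length=60)[0]["summary_text"])
```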
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8011 | 0.1314 | 0.0398 | 0.111 | 0.1107 | 19.0 |
| No log | 2.0 | 124 | 2.5850 | 0.1371 | 0.049 | 0.1157 | 0.1158 | 19.0 |
| No log | 3.0 | 186 | 2.5221 | 0.1407 | 0.0531 | 0.1184 | 0.1186 | 19.0 |
| No log | 4.0 | 248 | 2.5045 | 0.1393 | 0.0511 | 0.117 | 0.1171 | 19.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
9720cf8027defecd0d890d20107dea3f
|
cardiffnlp/twitter-roberta-base-mar2022
|
cardiffnlp
|
roberta
| 9 | 17 |
transformers
| 2 |
fill-mask
| true | false | false |
mit
|
['en']
|
['twitter-api']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['timelms', 'twitter']
| false | true | true | 4,685 | false |
# Twitter March 2022 (RoBERTa-base, 128M)
This is a RoBERTa-base model trained on 128.06M tweets until the end of March 2022.
More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829).
Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms).
For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models).
## Preprocess Text
Replace usernames and links for placeholders: "@user" and "http".
If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data).
```python
def preprocess(text):
preprocessed_text = []
for t in text.split(): # expects whitespace tokenization
if len(t) > 1:
t = '@user' if t[0] == '@' and t.count('@') == 1 else t
t = 'http' if t.startswith('http') else t
preprocessed_text.append(t)
return ' '.join(preprocessed_text)
```
## Example Masked Language Model
```python
from transformers import pipeline, AutoTokenizer
MODEL = "cardiffnlp/twitter-roberta-base-mar2022"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
def pprint(candidates, n):
for i in range(n):
token = tokenizer.decode(candidates[i]['token'])
score = candidates[i]['score']
print("%d) %.5f %s" % (i+1, score, token))
texts = [
"So glad I'm <mask> vaccinated.",
"I keep forgetting to bring a <mask>.",
"Looking forward to watching <mask> Game tonight!",
]
for text in texts:
t = preprocess(text)
print(f"{'-'*30}\n{t}")
candidates = fill_mask(t)
pprint(candidates, 5)
```
Output:
```
------------------------------
So glad I'm <mask> vaccinated.
1) 0.34390 fully
2) 0.28177 not
3) 0.16473 getting
4) 0.04932 still
5) 0.01754 double
------------------------------
I keep forgetting to bring a <mask>.
1) 0.05391 book
2) 0.04560 mask
3) 0.03456 pen
4) 0.03251 lighter
5) 0.03098 charger
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.60744 the
2) 0.15224 The
3) 0.02575 this
4) 0.01450 End
5) 0.01035 Championship
```
## Example Tweet Embeddings
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter
def get_embedding(text): # naive approach for demonstration
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
return np.mean(features[0], axis=0)
MODEL = "cardiffnlp/twitter-roberta-base-mar2022"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
"The movie was great",
"What time is the next game?",
"Just finished reading 'Embeddings in NLP'"]
sims = Counter()
for tweet in tweets:
sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
sims[tweet] = sim
print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
print("%d) %.5f %s" % (idx+1, sim, tweet))
```
Output:
```
Most similar to: The book was awesome
------------------------------
1) 0.98985 The movie was great
2) 0.96122 Just finished reading 'Embeddings in NLP'
3) 0.95733 I just ordered fried chicken 🐣
4) 0.93271 What time is the next game?
```
## Example Feature Extraction
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
MODEL = "cardiffnlp/twitter-roberta-base-mar2022"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy()
features_mean = np.mean(features[0], axis=0)
#features_max = np.max(features[0], axis=0)
# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0)
# #features_max = np.max(features[0], axis=0)
```
|
bb8f5d0d6bebae4030836b256dbe1dc8
|
jonatasgrosman/exp_w2v2t_ar_vp-sv_s445
|
jonatasgrosman
|
wav2vec2
| 10 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ar']
| false | true | true | 469 | false |
# exp_w2v2t_ar_vp-sv_s445
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
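A minimal HuggingSound transcription sketch (the audio path is a placeholder):

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ar_vp-sv_s445")
transcriptions = model.transcribe(["/path/to/arabic_clip.wav"])
```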
|
32ede7cc083e409471901bda3fe69212
|
StanfordAIMI/stanford-deidentifier-only-i2b2
|
StanfordAIMI
|
bert
| 7 | 3,942 |
transformers
| 1 |
token-classification
| true | false | false |
mit
|
['en']
|
['radreports']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['token-classification', 'sequence-tagger-model', 'pytorch', 'transformers', 'pubmedbert', 'uncased', 'radiology', 'biomedical']
| false | true | true | 2,665 | false |
Stanford de-identifier was trained on a variety of radiology and biomedical documents, with the goal of automating the de-identification process while reaching accuracy satisfactory for use in production. Manuscript in proceedings.
Associated github repo: https://github.com/MIDRC/Stanford_Penn_Deidentifier
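A hedged usage sketch via the standard token-classification pipeline (the PHI entity labels come from the checkpoint; the report text is synthetic):

```python
from transformers import pipeline

deidentifier = pipeline(
    "token-classification",
    model="StanfordAIMI/stanford-deidentifier-only-i2b2",
    aggregation_strategy="simple",
)
print(deidentifier("Patient John Doe was seen at General Hospital on 01/02/2020."))
```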
## Citation
```bibtex
@article{10.1093/jamia/ocac219,
author = {Chambon, Pierre J and Wu, Christopher and Steinkamp, Jackson M and Adleberg, Jason and Cook, Tessa S and Langlotz, Curtis P},
title = "{Automated deidentification of radiology reports combining transformer and “hide in plain sight” rule-based methods}",
journal = {Journal of the American Medical Informatics Association},
year = {2022},
month = {11},
abstract = "{To develop an automated deidentification pipeline for radiology reports that detect protected health information (PHI) entities and replaces them with realistic surrogates “hiding in plain sight.”In this retrospective study, 999 chest X-ray and CT reports collected between November 2019 and November 2020 were annotated for PHI at the token level and combined with 3001 X-rays and 2193 medical notes previously labeled, forming a large multi-institutional and cross-domain dataset of 6193 documents. Two radiology test sets, from a known and a new institution, as well as i2b2 2006 and 2014 test sets, served as an evaluation set to estimate model performance and to compare it with previously released deidentification tools. Several PHI detection models were developed based on different training datasets, fine-tuning approaches and data augmentation techniques, and a synthetic PHI generation algorithm. These models were compared using metrics such as precision, recall and F1 score, as well as paired samples Wilcoxon tests.Our best PHI detection model achieves 97.9 F1 score on radiology reports from a known institution, 99.6 from a new institution, 99.5 on i2b2 2006, and 98.9 on i2b2 2014. On reports from a known institution, it achieves 99.1 recall of detecting the core of each PHI span.Our model outperforms all deidentifiers it was compared to on all test sets as well as human labelers on i2b2 2014 data. It enables accurate and automatic deidentification of radiology reports.A transformer-based deidentification pipeline can achieve state-of-the-art performance for deidentifying radiology reports and other medical documents.}",
issn = {1527-974X},
doi = {10.1093/jamia/ocac219},
url = {https://doi.org/10.1093/jamia/ocac219},
note = {ocac219},
eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocac219/47220191/ocac219.pdf},
}
```
|
9d902c8b37be5c4749a085c78d4b241b
|
Helsinki-NLP/opus-mt-de-iso
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-de-iso
* source languages: de
* target languages: iso
* OPUS readme: [de-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-iso/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-iso/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-iso/opus-2020-01-20.eval.txt)
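A minimal translation sketch with the Transformers Marian classes (the German sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-iso")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-de-iso")

batch = tokenizer(["Guten Morgen, wie geht es dir?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```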
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.de.iso | 21.4 | 0.389 |
|
9207434fd4604a1c1f62b8dffa1dce23
|
BeckyB/Bert_Classifier
|
BeckyB
|
bert
| 12 | 14 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['yelp_review_full']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,322 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert_Classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1067
- Accuracy: 0.5533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 188 | 1.0636 | 0.5 |
| No log | 2.0 | 376 | 1.0405 | 0.52 |
| 0.9962 | 3.0 | 564 | 1.1067 | 0.5533 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
88abde3fdf4ab3bb1447600a904d1f91
|
Fiddi/distilbert-base-uncased-finetuned-ner
|
Fiddi
|
distilbert
| 19 | 9 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,555 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9291
- Recall: 0.9376
- F1: 0.9333
- Accuracy: 0.9841
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 |
| 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 |
| 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
3b2fdf25ff6ebb8b6126082554064bd7
|
Kuanchy/bauti
|
Kuanchy
| null | 22 | 2 |
diffusers
| 0 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,171 | false |
### Bauti on Stable Diffusion via Dreambooth
#### model by Kuanchy
This is the Stable Diffusion model fine-tuned on the Bauti concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a sks person**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:




|
73da6336e6157810d5cd2e68305697ee
|
Zekunli/flan-t5-large-extraction-cnndm_2000-all
|
Zekunli
|
t5
| 10 | 29 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,142 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-extraction-cnndm_2000-all
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7621
- Rouge1: 34.9258
- Rouge2: 15.2218
- Rougel: 29.9813
- Rougelsum: 29.9443
- Gen Len: 18.986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1649 | 0.8 | 200 | 1.8161 | 34.9143 | 14.9085 | 29.8629 | 29.811 | 19.0 |
| 1.9114 | 1.6 | 400 | 1.7713 | 34.8733 | 14.6521 | 29.8186 | 29.7829 | 18.986 |
| 1.7997 | 2.4 | 600 | 1.7917 | 34.1481 | 14.7443 | 29.7078 | 29.6144 | 18.99 |
| 1.7477 | 3.2 | 800 | 1.7771 | 35.0882 | 15.3186 | 29.9749 | 29.9643 | 18.99 |
| 1.6821 | 4.0 | 1000 | 1.7621 | 34.9258 | 15.2218 | 29.9813 | 29.9443 | 18.986 |
| 1.6301 | 4.8 | 1200 | 1.7796 | 34.3705 | 14.8013 | 29.6128 | 29.5457 | 18.99 |
| 1.597 | 5.6 | 1400 | 1.7669 | 35.4342 | 15.7045 | 30.4953 | 30.4293 | 18.99 |
| 1.5543 | 6.4 | 1600 | 1.7857 | 34.5322 | 15.0244 | 29.8476 | 29.7596 | 18.99 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
968fdd866a88ffcdb68d005ad6573f66
|
vuiseng9/nncf-qat-kd-bert-l-squadv1.1-sl256
|
vuiseng9
|
bert
| 25 | 2 |
transformers
| 0 | null | true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 1,514 | false |
This model is a quantized version of ```vuiseng9/bert-l-squadv1.1-sl256```, produced using OpenVINO NNCF.
### Training
```bash
# used 4x V100 GPUs
# --fp16 for lower turnaround and resource requirement
# nncf_bert_config_squad_kd.json is the stock NNCF config with seq. length modified to 256
python run_qa.py \
--model_name_or_path vuiseng9/bert-l-squadv1.1-sl256 \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--fp16 \
--num_train_epochs 2 \
--per_device_eval_batch_size 64 \
--per_device_train_batch_size 8 \
--max_seq_length 256 \
--doc_stride 128 \
--save_steps 500 \
--logging_steps 1 \
--overwrite_output_dir \
--nncf_config nncf_bert_config_squad_kd.json \
--run_name $RUNID \
--output_dir $OUTDIR
```
### Evaluation
Requires ```vuiseng9/transformers (fork)```, commit ```ff24569b```, and NNCF v2.1+ (commit ```8e26365```).
```bash
git clone https://huggingface.co/vuiseng9/nncf-qat-kd-bert-l-squadv1.1-sl256
python run_qa.py \
--model_name_or_path ./nncf-qat-kd-bert-l-squadv1.1-sl256 \
--dataset_name squad \
--nncf_config ./nncf-qat-kd-bert-l-squadv1.1-sl256/nncf_bert_config_squad_kd.json \
--nncf_ckpt ./nncf-qat-kd-bert-l-squadv1.1-sl256 \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 256 \
--doc_stride 128 \
--output_dir /tmp/eval-nncf-qat-kd-bert-l-squadv1.1-sl256 \
--overwrite_output_dir
```
### Results
```
eval_exact_match = 87.1902
eval_f1 = 93.0286
eval_samples = 12097
```
|
f61621e4e80dba06a5f2c57ebbf4ad48
|
mattyhew/charliee
|
mattyhew
| null | 15 | 4 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 611 | false |
### charliee Dreambooth model trained by mattyhew with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
95828382a8d860ad35bed57c0d25bd07
|
danhsf/distilbert-base-uncased-finetuned-emotion
|
danhsf
|
distilbert
| 16 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9265
- F1: 0.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
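A minimal inference sketch with the text-classification pipeline (the input sentence is illustrative; label names come from the checkpoint config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="danhsf/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe we finally made it!"))
```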
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8631 | 1.0 | 250 | 0.3221 | 0.904 | 0.9011 |
| 0.254 | 2.0 | 500 | 0.2201 | 0.9265 | 0.9266 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
97d2c30f4418d618f2efc3608ac6c28d
|
Helsinki-NLP/opus-mt-zai-es
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-zai-es
* source languages: zai
* target languages: es
* OPUS readme: [zai-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zai-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zai.es | 20.8 | 0.372 |
|
3cb9f609a18d71c8a71f8129093b909a
|
aXhyra/sentiment_trained_1234567
|
aXhyra
|
distilbert
| 10 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,408 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2854
- F1: 0.7165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.2140338797769864e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.6603 | 1.0 | 11404 | 0.7020 | 0.6992 |
| 0.5978 | 2.0 | 22808 | 0.8024 | 0.7151 |
| 0.5495 | 3.0 | 34212 | 1.0837 | 0.7139 |
| 0.4026 | 4.0 | 45616 | 1.2854 | 0.7165 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
f3c3e30678f8ed3755b0c354980865f0
|
versae/bertin-roberta-base-spanish-finetuned-recores3
|
versae
|
roberta
| 15 | 0 |
transformers
| 0 |
multiple-choice
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,823 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-base-spanish-finetuned-recores3
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0975
- Accuracy: 0.3884
## Model description
More information needed
## Intended uses & limitations
More information needed
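A hedged sketch of multiple-choice scoring with the standard classes (the Spanish passage, question, and options are made up for illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "versae/bertin-roberta-base-spanish-finetuned-recores3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "El río nace en la montaña y desemboca en el mar."
question = "¿Dónde desemboca el río?"
options = ["En un lago", "En el mar", "En otro río", "En la montaña", "En una presa"]

# pair the passage+question with every candidate answer
enc = tokenizer([f"{context} {question}"] * len(options), options, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # shape: (batch=1, num_choices, seq_len)
logits = model(**inputs).logits
print(options[int(torch.argmax(logits, dim=-1))])
```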
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6095 | 1.0 | 524 | 1.6094 | 0.2342 |
| 1.607 | 2.0 | 1048 | 1.5612 | 0.3058 |
| 1.4059 | 3.0 | 1572 | 1.6292 | 0.3361 |
| 0.7047 | 4.0 | 2096 | 2.5111 | 0.4132 |
| 0.2671 | 5.0 | 2620 | 3.2399 | 0.3499 |
| 0.1065 | 6.0 | 3144 | 5.1217 | 0.3444 |
| 0.0397 | 7.0 | 3668 | 4.3270 | 0.3691 |
| 0.0162 | 8.0 | 4192 | 5.1796 | 0.3719 |
| 0.0096 | 9.0 | 4716 | 5.2161 | 0.3994 |
| 0.0118 | 10.0 | 5240 | 4.9225 | 0.3719 |
| 0.0015 | 11.0 | 5764 | 5.0544 | 0.3829 |
| 0.0091 | 12.0 | 6288 | 5.7731 | 0.3884 |
| 0.0052 | 13.0 | 6812 | 4.1606 | 0.3939 |
| 0.0138 | 14.0 | 7336 | 6.2725 | 0.3857 |
| 0.0027 | 15.0 | 7860 | 6.2274 | 0.3857 |
| 0.0003 | 16.0 | 8384 | 6.0935 | 0.4022 |
| 0.0002 | 17.0 | 8908 | 5.7650 | 0.3994 |
| 0.0 | 18.0 | 9432 | 6.3595 | 0.4215 |
| 0.0 | 19.0 | 9956 | 5.8934 | 0.3747 |
| 0.0001 | 20.0 | 10480 | 6.0571 | 0.3884 |
| 0.0 | 21.0 | 11004 | 6.0718 | 0.3884 |
| 0.0 | 22.0 | 11528 | 6.0844 | 0.3884 |
| 0.0 | 23.0 | 12052 | 6.0930 | 0.3884 |
| 0.0 | 24.0 | 12576 | 6.0966 | 0.3884 |
| 0.0 | 25.0 | 13100 | 6.0975 | 0.3884 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
d9c87ae770de62856d56e22180ee0a1e
|
jmunoz/finetuning-sentiment-model-3000-samples
|
jmunoz
|
distilbert
| 14 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 925 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 1.2.1
- Tokenizers 0.12.1
|
4c0fb2769ce05f8802246ad536468c96
|
coreml/coreml-trinart-stable-diffusion
|
coreml
| null | 3 | 0 | null | 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coreml', 'stable-diffusion', 'text-to-image']
| false | true | true | 1,150 | false |
# Core ML Converted Model:
- This model was converted to Core ML for use on Apple Silicon devices. Instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-files-to-Core-ML).<br>
- Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
- `split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
# Stable Diffusion TrinArt/Trin-sama AI finetune v2:
Source(s): [Hugging Face](https://huggingface.co/naclbit/trinart_stable_diffusion_v2)
Stable Diffusion TrinArt/Trin-sama AI finetune v2
trinart_stable_diffusion is an SD model finetuned on about 40,000 assorted high-resolution manga/anime-style pictures for 8 epochs. This is the same model running on the Twitter bot @trinsama (https://twitter.com/trinsama).
# Please Note!
This model is NOT the 19.2M images Characters Model on TrinArt, but an improved version of the original Trin-sama Twitter bot model. This model is intended to retain the original SD's aesthetics as much as possible while nudging the model to anime/manga style.
|
e78fa10cf549d5b5185b0b5c038cf1bb
|
abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2
|
abdoutony207
|
m2m_100
| 12 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null |
['un_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,351 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Bleu: 40.8245
- Meteor: 0.4272
- Gen Len: 41.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
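A hedged en→ar translation sketch with the Transformers M2M100 classes (the English sentence is illustrative):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "abdoutony207/m2m100_418M-evaluated-en-to-ar-2000instancesUNMULTI-leaningRate2e-05-batchSize8-regu2"
model = M2M100ForConditionalGeneration.from_pretrained(model_id)
tokenizer = M2M100Tokenizer.from_pretrained(model_id)

tokenizer.src_lang = "en"
encoded = tokenizer("The General Assembly adopted the resolution.", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("ar"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```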
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 5.1584 | 0.5 | 100 | 3.2518 | 30.3723 | 0.3633 | 41.5 |
| 2.1351 | 1.0 | 200 | 0.9929 | 32.9915 | 0.3833 | 41.8225 |
| 0.568 | 1.5 | 300 | 0.4312 | 33.705 | 0.3896 | 42.6225 |
| 0.3749 | 2.0 | 400 | 0.3697 | 36.9316 | 0.4084 | 40.57 |
| 0.2376 | 2.5 | 500 | 0.3587 | 37.6782 | 0.4124 | 41.99 |
| 0.2435 | 3.0 | 600 | 0.3529 | 37.9931 | 0.4128 | 42.02 |
| 0.1706 | 3.5 | 700 | 0.3531 | 39.9972 | 0.4252 | 41.8025 |
| 0.165 | 4.0 | 800 | 0.3514 | 39.3155 | 0.42 | 41.0275 |
| 0.1273 | 4.5 | 900 | 0.3606 | 40.0765 | 0.4234 | 41.6175 |
| 0.1307 | 5.0 | 1000 | 0.3550 | 40.4468 | 0.428 | 41.72 |
| 0.0926 | 5.5 | 1100 | 0.3603 | 40.5454 | 0.4307 | 41.765 |
| 0.1096 | 6.0 | 1200 | 0.3613 | 40.5691 | 0.4298 | 42.31 |
| 0.0826 | 6.5 | 1300 | 0.3642 | 40.8245 | 0.4272 | 41.8075 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
8eb7e78693ea1fbc23b1df922d5609b9
|
Ahmed007/mt5-small-ibn-Shaddad-v3
|
Ahmed007
|
mt5
| 13 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Poet', 'generated_from_trainer']
| true | true | true | 1,327 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-ibn-Shaddad-v3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2668
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 5.4157 | 1.0 | 935 | 3.2668 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
b9b0df475ae5f56aea26c8469acf44ae
|
Helsinki-NLP/opus-mt-es-tn
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-es-tn
* source languages: es
* target languages: tn
* OPUS readme: [es-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-tn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.tn | 32.2 | 0.528 |
|
ec0c404a14cec5ab4ab71965f2cbba49
|
JoonJoon/koelectra-base-v3-discriminator-finetuned-ner
|
JoonJoon
|
electra
| 10 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['klue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,593 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-base-v3-discriminator-finetuned-ner
This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Precision: 0.6665
- Recall: 0.7350
- F1: 0.6991
- Accuracy: 0.9396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 438 | 0.2588 | 0.5701 | 0.6655 | 0.6141 | 0.9212 |
| 0.4333 | 2.0 | 876 | 0.2060 | 0.6671 | 0.7134 | 0.6895 | 0.9373 |
| 0.1944 | 3.0 | 1314 | 0.1957 | 0.6665 | 0.7350 | 0.6991 | 0.9396 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
e5f2e108847439227907d47702bf7562
|
Helsinki-NLP/opus-mt-uk-it
|
Helsinki-NLP
|
marian
| 11 | 19 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['uk', 'it']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,003 | false |
### ukr-ita
* source group: Ukrainian
* target group: Italian
* OPUS readme: [ukr-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.ita | 46.0 | 0.662 |
### System Info:
- hf_name: ukr-ita
- source_languages: ukr
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'it']
- src_constituents: {'ukr'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: ita
- short_pair: uk-it
- chrF2_score: 0.662
- bleu: 46.0
- brevity_penalty: 0.949
- ref_len: 27846.0
- src_name: Ukrainian
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: it
- prefer_old: False
- long_pair: ukr-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
980026f7d8aafd586c53488484e95515
|
ksabeh/bert-base-uncased-attribute-correction-mlm-titles
|
ksabeh
|
bert
| 8 | 10 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,439 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert-base-uncased-attribute-correction-mlm-titles
This model is a fine-tuned version of [ksabeh/bert-base-uncased-attribute-correction-mlm](https://huggingface.co/ksabeh/bert-base-uncased-attribute-correction-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0430
- Validation Loss: 0.0625
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 23878, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1429 | 0.0743 | 0 |
| 0.0430 | 0.0625 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
21c575b37ac774d6bfcbdfb17896d0f5
|
philschmid/distilbert-onnx
|
philschmid
|
distilbert
| 6 | 36,890 |
transformers
| 1 |
question-answering
| false | false | false |
apache-2.0
|
['en']
|
['squad']
| null | 2 | 1 | 0 | 1 | 0 | 0 | 0 |
[]
| false | true | true | 479 | false |
# ONNX Conversion of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad)
# DistilBERT base cased distilled SQuAD
This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), trained with (a second step of) knowledge distillation on SQuAD v1.1.
This model reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
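A minimal sketch of running the exported graph with onnxruntime; the ONNX filename, input names, and output ordering are assumptions about a standard DistilBERT QA export, so adjust them to the actual files in this repo:
```python
# A hedged sketch: assumes the export is stored as "model.onnx" with the
# usual "input_ids"/"attention_mask" inputs and [start_logits, end_logits]
# outputs of a DistilBERT question-answering export.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

onnx_path = hf_hub_download("philschmid/distilbert-onnx", "model.onnx")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
session = ort.InferenceSession(onnx_path)

inputs = tokenizer("Who wrote Hamlet?",
                   "Hamlet is a tragedy written by William Shakespeare.",
                   return_tensors="np")
start_logits, end_logits = session.run(None, dict(inputs))
start, end = int(np.argmax(start_logits)), int(np.argmax(end_logits))
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```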
|
d46fbcfb68d5248e1a2ef07d2d636d84
|
philschmid/habana-xlm-r-large-amazon-massive
|
philschmid
|
xlm-roberta
| 15 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['AmazonScience/massive']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'habana']
| false | true | true | 2,206 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# philschmid/habana-xlm-r-large-amazon-massive
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the AmazonScience/massive dataset.
It achieves the following results on the evaluation set:
## 8x HPU approx. 41min
**train results**
```bash
{'loss': 0.2651, 'learning_rate': 2.4e-05, 'epoch': 1.0}
{'loss': 0.1079, 'learning_rate': 1.8e-05, 'epoch': 2.0}
{'loss': 0.0563, 'learning_rate': 1.2e-05, 'epoch': 3.0}
{'loss': 0.0308, 'learning_rate': 6e-06, 'epoch': 4.0}
{'loss': 0.0165, 'learning_rate': 0.0, 'epoch': 5.0}
```
**total**
```bash
{'train_runtime': 3172.4502, 'train_samples_per_second': 127.028, 'train_steps_per_second': 1.986, 'train_loss': 0.09531746031746031, 'epoch': 5.0}
```
**eval results**
```bash
{'eval_loss': 0.3128528892993927, 'eval_accuracy': 0.9125852013210597, 'eval_f1': 0.9125852013210597, 'eval_runtime': 45.1795, 'eval_samples_per_second': 314.988, 'eval_steps_per_second': 4.936, 'epoch': 1.0}
{'eval_loss': 0.36222779750823975, 'eval_accuracy': 0.9134987000210807, 'eval_f1': 0.9134987000210807, 'eval_runtime': 29.8241, 'eval_samples_per_second': 477.165, 'eval_steps_per_second': 7.477, 'epoch': 2.0}
{'eval_loss': 0.3943144679069519, 'eval_accuracy': 0.9140608530672476, 'eval_f1': 0.9140608530672476, 'eval_runtime': 30.1085, 'eval_samples_per_second': 472.657, 'eval_steps_per_second': 7.407, 'epoch': 3.0}
{'eval_loss': 0.40938863158226013, 'eval_accuracy': 0.9158878504672897, 'eval_f1': 0.9158878504672897, 'eval_runtime': 30.4546, 'eval_samples_per_second': 467.286, 'eval_steps_per_second': 7.322, 'epoch': 4.0}
{'eval_loss': 0.4137658476829529, 'eval_accuracy': 0.9172932330827067, 'eval_f1': 0.9172932330827067, 'eval_runtime': 30.3464, 'eval_samples_per_second': 468.952, 'eval_steps_per_second': 7.348, 'epoch': 5.0}
```
# Environment
The training was run on a `DL1` instance on AWS using Habana Gaudi1 and `optimum`.
see for more information: https://github.com/philschmid/deep-learning-habana-huggingface
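Inference does not require Gaudi hardware; a minimal sketch with the plain transformers pipeline (the utterance is an illustrative MASSIVE-style example):
```python
# A minimal inference sketch; no Habana-specific code is needed here.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="philschmid/habana-xlm-r-large-amazon-massive",
)
print(classifier("wake me up at nine am on friday"))
```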
|
3cd7bb4f98050663322ce4cd07e45477
|
albert-xxlarge-v2
| null |
albert
| 8 | 64,439 |
transformers
| 7 |
fill-mask
| true | true | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert']
| false | true | true | 9,849 | false |
# ALBERT XXLarge v2
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1909.11942) and first released in
[this repository](https://github.com/google-research/albert). This model, like all ALBERT models, is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the ALBERT model as inputs.
ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.
This is the second version of the xxlarge model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.
This model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 4096 hidden dimension
- 64 attention heads
- 223M parameters
## Intended uses & limitations
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("Hello I'm a [MASK] model.")
[
   {
      "sequence":"[CLS] hello i'm a modeling model.[SEP]",
      "score":0.05816134437918663,
      "token":12807,
      "token_str":"▁modeling"
   },
   {
      "sequence":"[CLS] hello i'm a modelling model.[SEP]",
      "score":0.03748830780386925,
      "token":23089,
      "token_str":"▁modelling"
   },
   {
      "sequence":"[CLS] hello i'm a model model.[SEP]",
      "score":0.033725276589393616,
      "token":1061,
      "token_str":"▁model"
   },
   {
      "sequence":"[CLS] hello i'm a runway model.[SEP]",
      "score":0.017313428223133087,
      "token":8014,
      "token_str":"▁runway"
   },
   {
      "sequence":"[CLS] hello i'm a lingerie model.[SEP]",
      "score":0.014405295252799988,
      "token":29104,
      "token_str":"▁lingerie"
   }
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AlbertTokenizer, AlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = AlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import AlbertTokenizer, TFAlbertModel
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
model = TFAlbertModel.from_pretrained("albert-xxlarge-v2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='albert-xxlarge-v2')
>>> unmasker("The man worked as a [MASK].")
[
   {
      "sequence":"[CLS] the man worked as a chauffeur.[SEP]",
      "score":0.029577180743217468,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the man worked as a janitor.[SEP]",
      "score":0.028865724802017212,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the man worked as a shoemaker.[SEP]",
      "score":0.02581118606030941,
      "token":29024,
      "token_str":"▁shoemaker"
   },
   {
      "sequence":"[CLS] the man worked as a blacksmith.[SEP]",
      "score":0.01849772222340107,
      "token":21238,
      "token_str":"▁blacksmith"
   },
   {
      "sequence":"[CLS] the man worked as a lawyer.[SEP]",
      "score":0.01820771023631096,
      "token":3672,
      "token_str":"▁lawyer"
   }
]
>>> unmasker("The woman worked as a [MASK].")
[
   {
      "sequence":"[CLS] the woman worked as a receptionist.[SEP]",
      "score":0.04604868218302727,
      "token":25331,
      "token_str":"▁receptionist"
   },
   {
      "sequence":"[CLS] the woman worked as a janitor.[SEP]",
      "score":0.028220869600772858,
      "token":29477,
      "token_str":"▁janitor"
   },
   {
      "sequence":"[CLS] the woman worked as a paramedic.[SEP]",
      "score":0.0261906236410141,
      "token":23386,
      "token_str":"▁paramedic"
   },
   {
      "sequence":"[CLS] the woman worked as a chauffeur.[SEP]",
      "score":0.024797942489385605,
      "token":28744,
      "token_str":"▁chauffeur"
   },
   {
      "sequence":"[CLS] the woman worked as a waitress.[SEP]",
      "score":0.024124596267938614,
      "token":13678,
      "token_str":"▁waitress"
   }
]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The ALBERT procedure follows the BERT setup.
The details of the masking procedure for each sentence are the following (a toy sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
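To make the 80/10/10 split concrete, here is a toy sketch of the scheme on whitespace tokens (illustrative only; real pretraining operates on subword ids and skips special tokens):
```python
# A toy sketch of the masking scheme above, assuming a plain token list.
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:
            labels.append(tok)               # target the model must predict
            r = random.random()
            if r < 0.8:                      # 80%: replace with [MASK]
                masked.append(mask_token)
            elif r < 0.9:                    # 10%: replace with a random token
                masked.append(random.choice(vocab))
            else:                            # 10%: leave the token as is
                masked.append(tok)
        else:
            masked.append(tok)
            labels.append(None)              # ignored by the MLM loss
    return masked, labels

print(mask_tokens("the quick brown fox".split(), vocab=["dog", "cat", "run"]))
```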
## Evaluation results
When fine-tuned on downstream tasks, the ALBERT models achieve the following results:
| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE |
|----------------|----------|----------|----------|----------|----------|----------|
|V2 | | | | | | |
|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 |
|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 |
|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 |
|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 |
|V1 | | | | | | |
|ALBERT-base |80.1 |89.3/82.3 |80.0/77.1 |81.6 |90.3 |64.0 |
|ALBERT-large |82.4 |90.6/83.9 |82.3/79.4 |83.5 |91.7 |68.5 |
|ALBERT-xlarge |85.5 |92.5/86.1 |86.1/83.1 |86.4 |92.4 |74.8 |
|ALBERT-xxlarge |91.0 |94.8/89.3 |90.2/87.4 |90.8 |96.9 |86.5 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1909-11942,
author = {Zhenzhong Lan and
Mingda Chen and
Sebastian Goodman and
Kevin Gimpel and
Piyush Sharma and
Radu Soricut},
title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
Representations},
journal = {CoRR},
volume = {abs/1909.11942},
year = {2019},
url = {http://arxiv.org/abs/1909.11942},
archivePrefix = {arXiv},
eprint = {1909.11942},
timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=albert-xxlarge-v2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
70197b4b31403a4f3171b694e708a897
|
sd-concepts-library/herge-style
|
sd-concepts-library
| null | 9 | 0 | null | 6 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,182 | false |
### Herge_style on Stable Diffusion
This is the `<herge>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
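If you prefer plain diffusers over the notebooks, here is a minimal sketch (assumes a recent diffusers version with `load_textual_inversion` and a Stable Diffusion v1.x base checkpoint; the base model id is an assumption):
```python
# A minimal sketch, assuming diffusers >= 0.14 and an SD v1.x base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/herge-style")  # registers <herge>

image = pipe("a portrait of a sailor in <herge> style").images[0]
image.save("herge_sailor.png")
```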
Here is the new concept you will be able to use as a `style`:




Here is the result for "Scarlett Johansson in <herge> style":

|
04a55e1e81622222a125caba0771b3aa
|
gopalkalpande/t5-small-finetuned-xsum
|
gopalkalpande
|
t5
| 18 | 2 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,529 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gopalkalpande/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0422
- Validation Loss: 0.4407
- Train Rouge1: 19.5311
- Train Rouge2: 14.2402
- Train Rougel: 17.9781
- Train Rougelsum: 18.1546
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
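In the absence of documented usage, a minimal summarization sketch (the task prefix and generation lengths are assumptions based on typical T5 fine-tunes):
```python
# A hedged usage sketch; the "summarize: " prefix and length limits
# are assumptions, not taken from the original training setup.
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="gopalkalpande/t5-small-finetuned-xsum")
text = ("summarize: The tower is 324 metres tall, about the same height "
        "as an 81-storey building, and the tallest structure in Paris.")
print(summarizer(text, max_length=30, min_length=5))
```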
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 1.0422 | 0.4407 | 19.5311 | 14.2402 | 17.9781 | 18.1546 | 19.0 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
516d886d8ed879f5d8fa0d4a73de9bf3
|
gotutiyan/gec-bart-large
|
gotutiyan
|
bart
| 9 | 18 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,062 | false |
This is a reproduction of the following paper:
```
@inproceedings{katsumata-komachi-2020-stronger,
title = "Stronger Baselines for Grammatical Error Correction Using a Pretrained Encoder-Decoder Model",
author = "Katsumata, Satoru and
Komachi, Mamoru",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-main.83",
pages = "827--832",
}
```
This model achieves the following results:
|Data|Metric|gotutiyan/gec-bart-large|Paper (bart-large)|
|:--|:--|:--|:--|
|CoNLL-2014|M2 (P/R/F0.5)|71.01 / 43.3 / 62.9|69.3 / 45.0 / 62.6|
|BEA19-test|ERRANT (P/R/F0.5)|70.4 / 55.0 / 66.6|68.3 / 57.1 / 65.6|
|JFLEG-test|GLEU|57.8|57.3|
The details can be found in the [GitHub repository](https://github.com/gotutiyan/GEC-BART).
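A minimal correction sketch with the standard seq2seq interface (beam size and length are assumptions; see the GitHub repository linked above for the exact inference settings used in the reproduction):
```python
# A hedged sketch using the generic BART seq2seq interface.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gotutiyan/gec-bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("gotutiyan/gec-bart-large")

src = "She see Tom is caught by policeman in park at last night."
inputs = tokenizer(src, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```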
|
a7ab07610defa4fc87e7ea42bd8d2cba
|
tomekkorbak/jolly_saha
|
tomekkorbak
| null | 2 | 0 | null | 0 | null | false | false | false |
mit
|
['en']
|
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,068 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jolly_saha
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3147
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'every_n_steps': 32,
'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 4096}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 32,
'force_call_on': [25177],
'gpt3_kwargs': {'model_name': 'davinci'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90',
'value_head_config': {'is_detached': False}},
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'alpha': 0.5, 'beta': 0.1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 512,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'jolly_saha',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 3346,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/3q8ux7mw
|
f0c5e22495396e6de696f5c9e522b768
|
sentence-transformers/quora-distilbert-multilingual
|
sentence-transformers
|
distilbert
| 13 | 18,870 |
sentence-transformers
| 0 |
sentence-similarity
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,579 | false |
# sentence-transformers/quora-distilbert-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/quora-distilbert-multilingual')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/quora-distilbert-multilingual')
model = AutoModel.from_pretrained('sentence-transformers/quora-distilbert-multilingual')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/quora-distilbert-multilingual)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
```
|
90cc48f229dcebc789de5b2b7e98caa7
|
jonatasgrosman/exp_w2v2r_fr_vp-100k_gender_male-5_female-5_s474
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 498 | false |
# exp_w2v2r_fr_vp-100k_gender_male-5_female-5_s474
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
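A minimal transcription sketch with HuggingSound (the audio path is a placeholder; the file must be sampled at 16kHz as noted above):
```python
# A minimal sketch; replace the path with a real 16kHz audio file.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_fr_vp-100k_gender_male-5_female-5_s474"
)
transcriptions = model.transcribe(["/path/to/sample_16khz.wav"])
print(transcriptions[0]["transcription"])
```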
|
6839d3a22d2c0929ddbc14544a692e8f
|
yuhuizhang/finetuned_gpt2_sst2_negation0.05
|
yuhuizhang
|
gpt2
| 11 | 1 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null |
['sst2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,231 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2_sst2_negation0.05
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
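A rough mapping of these values onto transformers' TrainingArguments (illustrative; the actual training script and data pipeline are not part of this card):
```python
# A hedged sketch mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="finetuned_gpt2_sst2_negation0.05",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```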
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1134 | 1.0 | 1062 | 3.5060 |
| 2.926 | 2.0 | 2124 | 3.5158 |
| 2.8331 | 3.0 | 3186 | 3.5271 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
1e67e28afcb1ad701ba4b1c4793bd4cd
|
anuragshas/wav2vec2-xls-r-300m-mr-cv9-with-lm
|
anuragshas
|
wav2vec2
| 24 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['mr']
|
['mozilla-foundation/common_voice_9_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_9_0', 'generated_from_trainer']
| true | true | true | 2,402 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mr-cv9-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_9_0 - MR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Wer: 0.4190
- Cer: 0.0946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 6124
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 3.5184 | 12.9 | 400 | 3.4210 | 1.0 | 1.0 |
| 2.3797 | 25.81 | 800 | 1.1068 | 0.8389 | 0.2584 |
| 1.5022 | 38.71 | 1200 | 0.5278 | 0.6280 | 0.1517 |
| 1.3181 | 51.61 | 1600 | 0.4254 | 0.5587 | 0.1297 |
| 1.2037 | 64.52 | 2000 | 0.3836 | 0.5143 | 0.1176 |
| 1.1245 | 77.42 | 2400 | 0.3643 | 0.4871 | 0.1111 |
| 1.0582 | 90.32 | 2800 | 0.3562 | 0.4676 | 0.1062 |
| 1.0027 | 103.23 | 3200 | 0.3530 | 0.4625 | 0.1058 |
| 0.9382 | 116.13 | 3600 | 0.3388 | 0.4442 | 0.1002 |
| 0.8915 | 129.03 | 4000 | 0.3430 | 0.4427 | 0.1000 |
| 0.853 | 141.94 | 4400 | 0.3536 | 0.4375 | 0.1000 |
| 0.8127 | 154.84 | 4800 | 0.3511 | 0.4344 | 0.0986 |
| 0.7861 | 167.74 | 5200 | 0.3595 | 0.4372 | 0.0993 |
| 0.7619 | 180.65 | 5600 | 0.3628 | 0.4316 | 0.0985 |
| 0.7537 | 193.55 | 6000 | 0.3633 | 0.4174 | 0.0943 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.1.1.dev0
- Tokenizers 0.12.1
|
be72f4dac958e604e4ab37ab5d3952b0
|
fathyshalab/all-roberta-large-v1-banking-16-16-5-oos
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,517 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-16-16-5-oos
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2920
- Accuracy: 0.3982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 |
| 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 |
| 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 |
| 1.667 | 4.0 | 4 | 2.2920 | 0.3982 |
| 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
4362f8b84a06291977b2b8511e6adf26
|
DarwinAnim8or/GPT-DMV-125m
|
DarwinAnim8or
|
gpt_neo
| 10 | 11 |
transformers
| 0 |
text-generation
| true | false | false |
mit
|
['en']
|
['DarwinAnim8or/DMV-Plate-Review']
|
{'emissions': 20, 'source': 'https://mlco2.github.io/impact/#compute', 'training_type': 'fine-tuning', 'geographical_location': 'Oregon, USA', 'hardware_used': '1 T4, Google Colab'}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['dmv', 'fun']
| false | true | true | 1,236 | false |
# GPT-DMV-125m
A finetuned version of [GPT-Neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the 'DMV' dataset. (Linked above)
A demo is available [here](https://huggingface.co/spaces/DarwinAnim8or/GPT-DMV-Playground)
(I recommend using the demo playground rather than the Inference window on the right here)
# Training Procedure
This was trained on the 'DMV' dataset, using the "HappyTransformer" library on Google Colab.
This model was trained for 5 epochs with learning rate 1e-2.
# Biases & Limitations
This likely contains the same biases and limitations as the original GPT-Neo-125M that it is based on, plus additional heavy biases from the DMV dataset.
# Intended Use
This model is meant for fun, nothing else.
# Sample Use
```python
#Import model:
from happytransformer import HappyGeneration
happy_gen = HappyGeneration("GPT-NEO", "DarwinAnim8or/GPT-DMV-125m")
#Set generation settings:
from happytransformer import GENSettings
args_top_k = GENSettings(no_repeat_ngram_size=3, do_sample=True, top_k=80, temperature=0.4, max_length=50, early_stopping=False)
#Generate a response:
result = happy_gen.generate_text("""PLATE: LUCH
REVIEW REASON CODE: """, args=args_top_k)
print(result)
print(result.text)
```
|
1d80bc2a5fe7b49349fb7d463e4aea19
|
espnet/YushiUeda_swbd_sentiment_asr_train_asr_conformer_wav2vec2
|
espnet
| null | 20 | 8 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['swbd_sentiment']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 180,335 | false |
## ESPnet2 ASR model
### `espnet/YushiUeda_swbd_sentiment_asr_train_asr_conformer_wav2vec2`
This model was trained by YushiUeda using swbd_sentiment recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e5c0e0dbdab7e56ea9bf0a852bac10a1d99acf64
pip install -e .
cd egs2/swbd_sentiment/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_swbd_sentiment_asr_train_asr_conformer_wav2vec2
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Mar 4 07:57:13 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `3b53aedc654fd30a828689c2139a1e130adac077`
- Commit date: `Fri Feb 25 00:13:16 2022 -0500`
## Using a Conformer-based encoder, a Transformer-based decoder, and self-supervised learning features (wav2vec 2.0) with spectral augmentation, predicting the transcript along with sentiment
- ASR config: [conf/tuning/train_asr_conformer_wav2vec2.yaml](conf/tuning/train_asr_conformer_wav2vec2.yaml)
- token_type: word
- labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/valid|2415|64.5|67.5|67.4|
|decode_asr_asr_model_valid.acc.ave_10best/test|2438|64.1|66.5|66.3|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_wav2vec2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_wav2vec2_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 57795
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 3
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 40000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- i
- and
- the
- you
- that
- it
- a
- Neutral
- to
- uh
- '''s'
- of
- know
- Positive
- they
- in
- we
- '''t'
- have
- but
- so
- was
- like
- Negative
- yeah
- is
- just
- um
- well
- do
- for
- think
- don
- there
- or
- 'on'
- '''re'
- my
- what
- really
- be
- with
- not
- if
- are
- one
- he
- '''ve'
- because
- '''m'
- about
- all
- get
- can
- had
- out
- at
- them
- when
- this
- as
- oh
- lot
- up
- people
- some
- then
- would
- go
- right
- mean
- now
- time
- kind
- got
- going
- good
- she
- things
- more
- were
- from
- something
- been
- 'no'
- see
- me
- too
- an
- your
- much
- little
- guess
- how
- where
- our
- very
- here
- their
- thing
- two
- '''ll'
- other
- did
- years
- work
- even
- has
- any
- way
- probably
- those
- could
- say
- real
- back
- '''d'
- year
- down
- home
- than
- want
- didn
- into
- pretty
- okay
- who
- take
- huh
- school
- said
- make
- over
- kids
- never
- always
- put
- by
- her
- stuff
- went
- doing
- three
- these
- 'yes'
- which
- around
- only
- big
- maybe
- 'off'
- anything
- day
- t
- sure
- actually
- come
- money
- him
- different
- everything
- still
- used
- many
- five
- will
- sort
- nice
- us
- last
- his
- thought
- every
- most
- getting
- first
- feel
- bit
- need
- children
- same
- course
- also
- new
- care
- family
- hum
- long
- through
- before
- use
- done
- should
- house
- old
- let
- does
- car
- being
- problem
- doesn
- four
- seems
- though
- pay
- look
- whole
- great
- husband
- haven
- try
- live
- trying
- ever
- why
- read
- better
- find
- far
- keep
- ago
- sometimes
- watch
- interesting
- quite
- area
- hard
- talking
- else
- another
- part
- bad
- having
- twenty
- whatever
- place
- couple
- usually
- 'true'
- high
- texas
- seen
- fact
- s
- enough
- after
- own
- college
- while
- country
- hundred
- somebody
- few
- either
- times
- week
- away
- gonna
- type
- job
- six
- dollars
- tell
- might
- remember
- again
- came
- give
- started
- start
- ten
- made
- play
- able
- dallas
- enjoy
- working
- once
- c
- someone
- life
- least
- v
- everybody
- since
- fun
- both
- talk
- wouldn
- ones
- news
- anyway
- wasn
- person
- heard
- believe
- am
- th
- buy
- may
- point
- call
- night
- y
- almost
- bye
- isn
- system
- wanted
- called
- took
- state
- wife
- child
- half
- women
- goes
- next
- yet
- especially
- love
- looking
- parents
- gone
- such
- gets
- understand
- together
- movie
- until
- w
- days
- end
- saying
- idea
- saw
- music
- mother
- thirty
- couldn
- makes
- stay
- change
- m
- basically
- wonderful
- problems
- guy
- worked
- spend
- help
- lived
- credit
- whether
- seem
- eight
- n
- best
- world
- run
- hear
- bought
- young
- each
- months
- seven
- places
- supposed
- city
- matter
- coming
- exactly
- d
- small
- summer
- comes
- certain
- company
- less
- thinking
- won
- during
- b
- thousand
- agree
- show
- daughter
- sounds
- myself
- funny
- water
- o
- month
- dog
- fifty
- paper
- gotten
- found
- taking
- today
- certainly
- boy
- friends
- number
- mine
- program
- food
- son
- p
- older
- name
- air
- movies
- government
- moved
- schools
- outside
- deal
- close
- tried
- paying
- eat
- drive
- hours
- nine
- rather
- cars
- crime
- important
- war
- living
- between
- business
- anymore
- reason
- weeks
- public
- vote
- situation
- recently
- nothing
- easy
- sit
- pick
- taxes
- turn
- full
- percent
- making
- friend
- book
- happen
- minutes
- middle
- town
- watching
- paid
- eighty
- tax
- several
- listen
- set
- talked
- north
- takes
- reading
- definitely
- law
- jury
- kinds
- married
- u
- enjoyed
- says
- without
- works
- learn
- everyone
- drug
- major
- side
- cost
- room
- education
- morning
- computer
- involved
- mostly
- aren
- health
- l
- anybody
- along
- amount
- man
- against
- weather
- often
- under
- age
- forty
- insurance
- favorite
- hope
- card
- must
- happened
- lives
- left
- drugs
- expensive
- american
- miles
- yourself
- hour
- already
- plano
- cards
- decided
- large
- difference
- ahead
- fifteen
- camping
- told
- although
- second
- r
- woman
- twelve
- knew
- guys
- cut
- neat
- fish
- mind
- wrong
- unless
- sense
- instead
- leave
- wear
- class
- hand
- top
- walk
- bring
- past
- f
- running
- e
- absolutely
- weekend
- line
- books
- question
- team
- wish
- exercise
- interested
- areas
- baby
- states
- liked
- somewhere
- father
- experience
- phone
- case
- men
- lots
- cat
- society
- taken
- changed
- game
- worth
- seventy
- gun
- h
- wonder
- hit
- group
- service
- kept
- shows
- gosh
- early
- interest
- trouble
- control
- themselves
- ha
- finally
- using
- god
- dad
- cook
- hot
- difficult
- nursing
- front
- terms
- growing
- late
- kid
- looked
- felt
- rain
- teach
- tend
- realize
- weren
- sixty
- except
- needs
- social
- budget
- figure
- recycling
- lake
- wanna
- looks
- wh
- forth
- mom
- concerned
- south
- grew
- topic
- ways
- death
- christmas
- regular
- wait
- imagine
- television
- east
- trees
- check
- fairly
- hate
- general
- catch
- dinner
- built
- ready
- fine
- sister
- story
- playing
- starting
- homes
- office
- awful
- radio
- needed
- companies
- changes
- programs
- fishing
- nineteen
- ask
- tough
- cans
- easier
- yard
- cold
- ought
- street
- later
- door
- wants
- students
- national
- space
- across
- brother
- free
- local
- tha
- level
- happens
- sitting
- newspaper
- move
- countries
- store
- subject
- girl
- beautiful
- turned
- soon
- income
- putting
- church
- university
- dress
- information
- lately
- degree
- york
- vacation
- pollution
- totally
- winter
- america
- ah
- ours
- cats
- spent
- happy
- played
- consider
- cases
- spring
- california
- longer
- teacher
- oil
- send
- lost
- sports
- garden
- teachers
- families
- particular
- buying
- amazing
- likes
- football
- united
- teaching
- hey
- benefits
- brought
- gave
- party
- worry
- throw
- testing
- given
- bunch
- near
- nobody
- community
- driving
- open
- personal
- sell
- force
- chance
- wow
- test
- baseball
- within
- biggest
- quality
- building
- example
- seeing
- power
- afford
- support
- caught
- inside
- plan
- seemed
- ninety
- younger
- learned
- generation
- charge
- punishment
- rest
- dogs
- become
- clean
- short
- privacy
- g
- calls
- plus
- particularly
- decide
- terrible
- twice
- fall
- extra
- period
- choice
- hold
- ended
- hadn
- main
- guilty
- depends
- save
- excellent
- price
- strange
- feeling
- size
- trial
- military
- boys
- per
- bet
- judge
- parts
- noticed
- anywhere
- fan
- head
- center
- glad
- clothes
- rate
- stop
- eleven
- white
- stand
- suppose
- guns
- grade
- watched
- bigger
- scary
- issue
- special
- dollar
- green
- its
- jobs
- means
- black
- worse
- knows
- plastic
- low
- spending
- picked
- golf
- gas
- single
- neighborhood
- necessarily
- alone
- cooking
- newspapers
- pull
- fast
- completely
- road
- student
- crimes
- houses
- paint
- medical
- learning
- fair
- restaurant
- miss
- lawn
- giving
- washington
- doctor
- word
- killed
- recycle
- light
- cash
- visit
- familiar
- grass
- itself
- season
- chicken
- rid
- president
- stayed
- normally
- whenever
- machine
- graduate
- eighteen
- capital
- shouldn
- virginia
- private
- field
- magazines
- kill
- market
- apartment
- anyone
- waiting
- asked
- classes
- break
- crazy
- helps
- aware
- sunday
- hm
- speak
- term
- sound
- property
- sad
- comfortable
- waste
- channel
- evening
- cover
- heavy
- carry
- everyday
- systems
- gives
- wa
- answer
- higher
- unfortunately
- minute
- future
- serious
- snow
- available
- smaller
- handle
- ground
- behind
- huge
- west
- plant
- allowed
- wind
- peace
- costs
- cause
- serve
- rent
- lucky
- gee
- build
- english
- telling
- lose
- individual
- gardening
- busy
- order
- raised
- basic
- basis
- rock
- training
- happening
- opinion
- heart
- follow
- mainly
- history
- walking
- ye
- average
- towards
- houston
- games
- travel
- decision
- environment
- respect
- list
- hopefully
- grow
- others
- sorry
- san
- taught
- weight
- bags
- hurt
- finding
- attention
- hasn
- computers
- raise
- aerobics
- quick
- shot
- personally
- bedroom
- similar
- loved
- sixties
- park
- helping
- feet
- industry
- write
- generally
- weird
- record
- benefit
- pool
- mail
- pennsylvania
- glass
- notice
- calling
- process
- land
- originally
- richardson
- cities
- afraid
- utah
- entire
- colorado
- ball
- boat
- grandmother
- possible
- folks
- helped
- strong
- keeping
- bill
- keeps
- thank
- camp
- third
- types
- eventually
- obviously
- yesterday
- apparently
- instance
- pet
- central
- club
- flowers
- trash
- trip
- classical
- europe
- changing
- perhaps
- self
- color
- foot
- video
- based
- station
- saturday
- french
- normal
- fire
- '''clock'
- issues
- starts
- piece
- hobby
- quit
- prison
- parent
- oldest
- bush
- coverage
- police
- forget
- girls
- occasionally
- bank
- shape
- beginning
- moving
- sent
- vietnam
- nights
- current
- salary
- himself
- stories
- mountains
- aluminum
- luck
- invasion
- tape
- florida
- bed
- laws
- research
- mess
- hoping
- players
- tired
- thirteen
- magazine
- expect
- sleep
- words
- language
- push
- position
- hobbies
- background
- plants
- inches
- easily
- stopped
- murder
- shoot
- maryland
- hardly
- bills
- attitude
- pro
- civil
- sometime
- human
- wanting
- goodness
- security
- doctors
- kitchen
- somehow
- penalty
- county
- eating
- simply
- die
- bike
- reunion
- project
- typical
- j
- however
- total
- mexico
- base
- economy
- restaurants
- responsibility
- jail
- lower
- died
- tested
- safe
- voting
- elderly
- sh
- listening
- sudden
- numbers
- career
- stick
- born
- wondering
- poor
- painting
- active
- professional
- supposedly
- li
- lady
- reasons
- cool
- sixteen
- yep
- excuse
- horrible
- political
- red
- science
- federal
- besides
- shop
- opportunity
- ride
- planning
- degrees
- writing
- mexican
- engineering
- surprised
- bother
- share
- graduated
- account
- financial
- hands
- activities
- seventies
- step
- thanks
- bag
- role
- england
- limit
- willing
- hospital
- view
- band
- teams
- tonight
- groups
- advantage
- heat
- department
- turns
- tree
- telephone
- became
- brand
- criminal
- blue
- dry
- warm
- weekends
- grown
- stores
- rights
- garbage
- junior
- everywhere
- prices
- metric
- ran
- equipment
- till
- cross
- considered
- track
- moment
- figured
- americans
- met
- worst
- ridiculous
- grocery
- yours
- neighbor
- piano
- sold
- cowboys
- selling
- savings
- grandchildren
- nowadays
- add
- plays
- conversation
- lunch
- straight
- sentence
- floor
- dead
- fourteen
- meet
- ideas
- foods
- israel
- fix
- ourselves
- swimming
- upset
- sign
- sewing
- wood
- recipe
- van
- upon
- standard
- box
- win
- wall
- offer
- products
- otherwise
- pounds
- stations
- ex
- staying
- drop
- body
- carolina
- sales
- meal
- ice
- basketball
- mixed
- careful
- possibly
- sick
- farm
- retired
- compared
- western
- hearing
- finished
- separate
- mentioned
- soviet
- truck
- river
- defense
- oklahoma
- harder
- k
- re
- stuck
- cable
- trade
- favor
- positive
- related
- smoke
- effect
- various
- bottom
- awhile
- kindergarten
- beat
- court
- beach
- baltimore
- choose
- allow
- brown
- hang
- known
- sorts
- bathroom
- scared
- popular
- extremely
- politics
- hair
- policy
- wha
- saint
- covered
- ca
- sisters
- boston
- lakes
- forever
- fight
- downtown
- visa
- sauce
- garage
- lines
- suit
- whereas
- speech
- direction
- animals
- corps
- fit
- majority
- chinese
- dark
- painted
- milk
- concern
- dump
- nature
- safety
- shoes
- star
- questions
- switch
- clear
- trips
- management
- beyond
- depending
- sing
- iraq
- pressure
- cute
- runs
- windows
- salad
- board
- chicago
- population
- legal
- super
- '''all'
- puts
- slow
- pets
- forward
- thousands
- style
- debt
- becoming
- mo
- pop
- violent
- italian
- earlier
- cheap
- weapons
- coast
- austin
- traveling
- passed
- x
- speaking
- points
- prefer
- threat
- further
- master
- table
- broken
- random
- row
- northern
- simple
- appreciate
- district
- train
- continue
- rangers
- pittsburgh
- truth
- value
- quickly
- raising
- pass
- tennis
- flower
- bass
- engine
- becomes
- variety
- jeans
- exciting
- organization
- spread
- sat
- incredible
- somewhat
- loan
- engineer
- doubt
- southern
- monday
- backyard
- forced
- papers
- express
- saving
- owned
- recent
- toward
- fortunate
- liberal
- shopping
- rough
- brothers
- worried
- meals
- scouts
- vacations
- hunting
- lawyers
- wisconsin
- bucks
- act
- voice
- helpful
- wide
- retirement
- cannot
- picture
- picking
- suspect
- spare
- held
- election
- study
- report
- begin
- antonio
- drove
- opposed
- league
- ju
- se
- solution
- closer
- character
- finish
- knowing
- million
- common
- services
- thinks
- player
- violence
- wrote
- highway
- reasonable
- afternoon
- series
- developed
- effort
- christian
- fantastic
- saved
- seventeen
- barbecue
- sun
- conditioning
- ohio
- babies
- arlington
- hole
- visited
- rural
- herself
- knowledge
- kn
- plans
- instruments
- above
- border
- bible
- losing
- china
- events
- leaving
- written
- taste
- friday
- schedule
- anytime
- showed
- aspect
- range
- earth
- rice
- broke
- tent
- excited
- roles
- situations
- rooms
- spot
- laid
- duty
- bottles
- russia
- fighting
- pound
- letter
- convenient
- thi
- storm
- original
- wild
- showing
- percentage
- required
- grandparents
- extent
- economic
- voted
- canada
- trust
- healthy
- dealing
- face
- hired
- discuss
- larger
- pleased
- eye
- constantly
- perfect
- stupid
- square
- mix
- meat
- semester
- necessary
- mandatory
- burning
- fly
- mothers
- aids
- checked
- bedrooms
- fresh
- advice
- tomatoes
- treat
- sale
- ford
- japanese
- burn
- correct
- limited
- sleeping
- actual
- ends
- female
- hundreds
- feelings
- impact
- leaves
- section
- lay
- provide
- planted
- factor
- fill
- rich
- deep
- someplace
- drives
- circumstances
- honda
- jersey
- smoking
- feels
- fifties
- access
- doors
- pattern
- names
- payment
- facilities
- automatic
- boxes
- hi
- pictures
- versus
- ability
- edge
- politicians
- amazed
- boss
- union
- neighbors
- distance
- prime
- article
- mistake
- grades
- bread
- bothers
- jeez
- rented
- fourth
- alcohol
- gulf
- catfish
- license
- shooting
- touch
- asking
- realized
- require
- natural
- expenses
- purchase
- energy
- talks
- colors
- smart
- considering
- lessons
- tremendous
- participate
- ages
- missed
- quiet
- cheaper
- cents
- payments
- iron
- frightening
- forgot
- cheese
- daughters
- lawyer
- creek
- dental
- seat
- humid
- belt
- michigan
- extended
- flat
- driver
- foreign
- stays
- adults
- songs
- due
- wet
- double
- stress
- desert
- drink
- material
- equal
- deterrent
- machines
- eastern
- boring
- apart
- vegetables
- recipes
- unusual
- responsible
- hire
- garland
- ho
- dangerous
- loans
- colleges
- served
- prisons
- recycled
- cousins
- gorgeous
- member
- values
- fell
- fund
- metal
- wolves
- technology
- form
- enjoyable
- entertainment
- successful
- juries
- brings
- likely
- convicted
- appeal
- minimum
- opposite
- sport
- complete
- smell
- gallon
- lord
- employees
- centers
- alive
- blow
- meant
- cutting
- relatives
- bus
- commit
- none
- jus
- holding
- sand
- swing
- courses
- ski
- breed
- heck
- casual
- blood
- admit
- join
- fi
- draw
- upper
- bell
- youngest
- traffic
- protect
- tends
- medicine
- strongly
- committed
- opinions
- brick
- sides
- congress
- gasoline
- regularly
- plenty
- collect
- williams
- tickets
- perspective
- damage
- present
- bowl
- kidding
- employee
- tests
- loves
- round
- nations
- german
- roof
- august
- october
- disney
- pieces
- solid
- knock
- facts
- concept
- specific
- option
- jump
- stage
- block
- items
- murders
- breaks
- dirty
- shirts
- package
- pair
- pants
- data
- opera
- standing
- roll
- count
- action
- physical
- differently
- teenagers
- checks
- replace
- independent
- neither
- tuition
- eyes
- theater
- educational
- bins
- animal
- reports
- senior
- window
- curious
- de
- argument
- june
- date
- extreme
- innocent
- december
- germany
- salt
- et
- cetera
- tomorrow
- educated
- clubs
- bird
- sons
- journal
- visiting
- pulled
- letting
- tech
- fixed
- el
- shorts
- assume
- message
- primarily
- signs
- cuts
- john
- jazz
- balance
- un
- walked
- shirt
- dropped
- latin
- feed
- influence
- wondered
- adult
- aid
- inner
- elementary
- negative
- swim
- projects
- raleigh
- practically
- grand
- nearly
- turning
- cleaning
- fort
- recommend
- ate
- skiing
- rules
- yellow
- cruise
- impressed
- address
- labor
- dish
- highly
- repair
- prior
- fee
- terribly
- experiences
- lead
- accept
- mart
- immediately
- portion
- nicer
- seafood
- fault
- disease
- truly
- wearing
- male
- dances
- closed
- product
- expected
- caused
- tapes
- relaxing
- culture
- technical
- criminals
- sentencing
- summertime
- indiana
- killing
- encourage
- housing
- practice
- ups
- stitch
- compare
- sentenced
- freedom
- belong
- purpose
- throwing
- crafts
- pushing
- sweet
- decent
- sew
- campus
- carpet
- channels
- repairs
- preschool
- please
- minnesota
- activity
- naturally
- cooked
- quarterback
- wise
- satisfied
- cadillac
- streets
- businesses
- honest
- automatically
- routine
- coach
- arm
- driven
- dishes
- mornings
- contact
- mall
- deficit
- humidity
- location
- fortunately
- atmosphere
- corporate
- meeting
- improvement
- engineers
- network
- dressed
- mcdonald
- spanish
- catholic
- organizations
- hill
- model
- fifth
- elected
- articles
- expecting
- seriously
- volunteer
- handy
- riding
- threw
- ooh
- trend
- ba
- arts
- thursday
- uncle
- relationship
- members
- throughout
- buffalo
- solve
- pain
- auto
- cholesterol
- planned
- prepared
- presented
- staff
- choices
- march
- filled
- overall
- discipline
- justice
- weights
- mile
- unit
- bringing
- beef
- camped
- wal
- mow
- microwave
- weapon
- inch
- rule
- traveled
- subscribe
- proper
- di
- classic
- software
- pays
- complex
- missing
- shepherd
- pleasure
- st
- cream
- expense
- automobile
- hers
- orleans
- king
- philosophy
- singing
- eighties
- enjoys
- democratic
- significant
- chore
- ev
- combination
- patterns
- disappointed
- republican
- media
- pre
- sesame
- fixing
- seconds
- passing
- daily
- trek
- signed
- raining
- accident
- scale
- interests
- route
- ma
- whoever
- reach
- judges
- evidence
- european
- seasons
- supporting
- dirt
- loose
- france
- cancer
- planting
- iowa
- increase
- hospitals
- maintain
- odd
- pregnant
- math
- press
- agency
- shrimp
- beer
- key
- puppy
- sending
- hardest
- tr
- wi
- return
- corner
- suits
- dakota
- al
- immediate
- possibility
- hooked
- song
- stadium
- frame
- dig
- navy
- comedy
- annual
- fear
- island
- exercising
- fancy
- fat
- enjoying
- motivated
- design
- affect
- investment
- recall
- co
- luxury
- trim
- flexible
- international
- furniture
- potatoes
- wou
- fellow
- breakfast
- bath
- trucks
- uses
- onto
- beans
- apple
- alabama
- records
- musical
- tie
- setting
- offs
- michael
- bugs
- freeze
- anyhow
- properly
- underneath
- dining
- aside
- quarter
- kentucky
- skills
- parole
- parks
- nation
- complain
- wine
- summers
- fans
- golden
- unanimous
- shift
- warranty
- plastics
- rates
- rains
- charged
- lincoln
- decisions
- checking
- gray
- laugh
- hills
- commercial
- recognize
- quote
- receive
- recording
- illegal
- generations
- advance
- motor
- outdoor
- lab
- honestly
- rap
- oriented
- match
- art
- fiction
- manage
- flip
- appropriate
- strict
- mad
- mental
- hung
- adds
- mileage
- bicycle
- thoroughly
- elections
- deserve
- indian
- according
- latest
- bu
- ta
- vehicle
- holidays
- july
- junk
- emergency
- convinced
- graduating
- kick
- including
- teenage
- ceiling
- valley
- victim
- ocean
- hell
- steel
- rainy
- noise
- marvelous
- drunk
- studying
- mountain
- hood
- greatest
- facility
- generate
- desk
- improve
- tells
- sex
- results
- si
- manager
- goal
- teenager
- concert
- copy
- africa
- paycheck
- woods
- lubbock
- sentences
- prevent
- impossible
- split
- faster
- speed
- thin
- chose
- monthly
- stands
- turkey
- repeat
- japan
- financially
- lights
- page
- pulling
- explain
- potential
- rape
- wash
- minor
- thrown
- professor
- pan
- vegetable
- fried
- onions
- roommate
- effects
- wire
- shame
- individuals
- sweat
- scene
- yards
- whose
- thoughts
- draft
- useful
- welfare
- organized
- communities
- realistic
- directly
- print
- printer
- purchased
- aunt
- prepare
- millions
- challenge
- twins
- badly
- thick
- pure
- bar
- roads
- missouri
- tall
- library
- added
- sam
- marriage
- gardens
- lesser
- views
- understanding
- prove
- deer
- delicious
- containers
- depend
- denver
- favorites
- tear
- site
- code
- winds
- parties
- relatively
- opened
- falling
- fascinating
- forties
- options
- sharing
- attached
- owner
- version
- modern
- standpoint
- eaten
- fully
- neck
- trials
- knee
- uncomfortable
- temperature
- chemical
- processing
- fruit
- lovely
- bothered
- pot
- causes
- rea
- diet
- theory
- conflict
- earn
- disagree
- exposed
- administration
- breaking
- buildings
- fence
- shocked
- retire
- wedding
- ch
- dust
- acid
- pushed
- blame
- contract
- carried
- nurse
- overseas
- texan
- fuel
- whe
- vehicles
- increased
- necessity
- plate
- hitting
- reduce
- blocks
- hide
- silly
- length
- writer
- film
- development
- refrigerator
- engines
- louis
- relate
- citizens
- dorm
- began
- hawaii
- january
- wheel
- gourmet
- shots
- bushes
- theirs
- outrageous
- sea
- hook
- conscious
- videos
- mastercard
- suburb
- chevy
- tiny
- mowing
- bulbs
- flag
- detroit
- brakes
- charges
- retriever
- towns
- contribute
- arms
- slacks
- definite
- difficulty
- produce
- cultures
- cou
- discovered
- whatnot
- philadelphia
- ou
- electronic
- strictly
- tendency
- mister
- regard
- con
- approach
- friendly
- handled
- governor
- louisiana
- urban
- develop
- pardon
- construction
- classroom
- personality
- currently
- tour
- apply
- memory
- francisco
- affected
- complicated
- risk
- shock
- roses
- movement
- tied
- teaches
- nuts
- halfway
- softball
- masters
- causing
- cake
- unbelievable
- cast
- characters
- actor
- association
- wallpaper
- habit
- blowing
- expert
- screen
- bake
- dessert
- tents
- minneapolis
- tin
- wars
- steps
- structure
- motivation
- buddy
- minds
- wound
- coat
- holes
- covers
- shell
- tries
- undergraduate
- springs
- banks
- kuwait
- kansas
- established
- dozen
- steak
- following
- massachusetts
- jewish
- affects
- hotel
- sight
- tight
- birthday
- statement
- weeds
- consumer
- understood
- tastes
- cartoons
- apartments
- cares
- settled
- september
- letters
- atlanta
- newer
- guarantee
- citizen
- occasion
- attorneys
- tom
- levels
- sweaters
- tires
- direct
- wagon
- remarkable
- result
- shower
- hello
- commercials
- cassette
- forms
- standards
- james
- native
- falls
- comment
- peers
- wore
- pleasant
- mid
- region
- essentially
- differences
- fitness
- symphony
- finger
- ad
- sounded
- joined
- trained
- toyota
- motors
- aspects
- candidate
- votes
- hunt
- electronics
- charging
- registered
- ed
- electric
- bite
- gifts
- manufacturing
- farmers
- participating
- legislation
- los
- angeles
- ticket
- survive
- catching
- eliminate
- ryan
- luckily
- teeth
- ill
- hated
- offices
- file
- hassle
- universal
- entertain
- roast
- traditional
- entertaining
- crisis
- officer
- saudi
- participated
- profession
- gue
- soap
- johnson
- task
- dumb
- gain
- broad
- surgery
- dressing
- condition
- tex
- grill
- camper
- note
- managed
- increasing
- rained
- parking
- wake
- mistakes
- pitch
- cucumbers
- prescription
- shut
- forgotten
- conditions
- rehabilitation
- gold
- waited
- substitute
- lift
- crowd
- gym
- tools
- divorced
- practical
- avoid
- spray
- seats
- severe
- litter
- trunk
- programming
- soft
- discover
- cs
- zero
- firm
- army
- post
- rarely
- virtually
- suddenly
- relative
- technically
- frustrating
- nursery
- checkbook
- rolls
- colored
- division
- jack
- districts
- guitar
- leaders
- permanent
- puerto
- su
- ultimately
- race
- biking
- statistics
- accepted
- hussein
- steal
- shown
- menu
- pension
- youth
- pride
- create
- knit
- walks
- guide
- fry
- til
- requirements
- reporting
- networks
- chain
- soil
- jumped
- hysterical
- target
- wasting
- horse
- buses
- dear
- butter
- thanksgiving
- instrument
- cared
- unemployment
- switchboard
- vice
- morals
- focus
- beds
- wednesday
- george
- principal
- non
- scores
- grandfather
- qualified
- burned
- courts
- cousin
- proud
- ham
- hits
- literally
- transferred
- institution
- debts
- collection
- weed
- cigarettes
- homework
- corruption
- clarion
- purposes
- improved
- applied
- closet
- corn
- tomato
- lasagna
- pickup
- collecting
- immigration
- sooner
- resources
- largest
- hurting
- soccer
- treated
- shore
- bored
- abuse
- mayor
- continental
- professionals
- verdict
- carrying
- button
- drinking
- dying
- reliable
- transportation
- subjects
- fees
- unfortunate
- evenings
- craft
- scout
- languages
- scratch
- sears
- thirties
- solutions
- sherman
- stack
- funds
- skirt
- fed
- correctly
- listened
- clothing
- serving
- supervisor
- mark
- materials
- lewisville
- below
- chemicals
- era
- incentive
- coffee
- offered
- interior
- determine
- sets
- alternative
- instructor
- dance
- saddam
- discussion
- joke
- boating
- fabulous
- ship
- funding
- groceries
- entirely
- sitter
- communications
- democrat
- cafeteria
- corporation
- squash
- peppers
- nor
- pour
- flour
- waco
- controls
- argentina
- flying
- coal
- nuclear
- february
- saturdays
- phoenix
- electrical
- wage
- laying
- effective
- robin
- wealthy
- hampshire
- concerns
- hall
- figures
- rochester
- agreement
- pages
- bitty
- cowboy
- dealers
- features
- argue
- commitment
- hanging
- policeman
- critical
- user
- dried
- strip
- pie
- balls
- eggs
- among
- lifting
- phase
- desire
- final
- jogging
- bless
- attack
- taxed
- acres
- april
- oven
- pack
- claim
- gorbachev
- wherever
- troops
- illinois
- industries
- trailer
- grab
- pitching
- nineties
- ranch
- ti
- mortgage
- mill
- sue
- register
- attorney
- alike
- adopted
- tournament
- involvement
- silver
- perfectly
- slightly
- meetings
- primary
- sixth
- employer
- survey
- indoor
- partly
- addition
- nervous
- georgia
- recreation
- internal
- rise
- schooling
- previous
- mood
- stolen
- birds
- director
- named
- mustang
- mystery
- upstairs
- goods
- reunions
- perform
- reality
- hurry
- scattered
- environmental
- limits
- cleaned
- tons
- concrete
- belts
- cabin
- rolling
- review
- invaded
- invade
- obvious
- requires
- typically
- religious
- religion
- opportunities
- intelligent
- peter
- album
- drawing
- trumpet
- stock
- household
- customer
- kay
- cotton
- tennessee
- specifically
- lowest
- moon
- reputation
- honor
- secretary
- rico
- assumed
- realizing
- attitudes
- rat
- vegetarian
- occurred
- practicing
- promote
- adding
- designed
- delivered
- nah
- category
- disk
- exact
- pilot
- costing
- brake
- mercedes
- pr
- abortion
- texans
- moral
- capable
- applications
- beneficial
- flavor
- drain
- reporter
- clock
- aggravating
- politically
- governments
- clearly
- designing
- burden
- laughed
- topics
- chunk
- spots
- streams
- efficient
- slowly
- arkansas
- discussed
- conservative
- flute
- choir
- sugar
- answering
- lists
- babysitter
- impression
- lets
- david
- forces
- thumb
- cop
- creative
- dip
- switched
- pine
- content
- aerobic
- conversations
- touched
- candidates
- legitimate
- assistant
- annoying
- finance
- vietnamese
- husbands
- storms
- pump
- lawns
- patio
- roots
- russian
- plot
- mouth
- amounts
- suffering
- headlines
- hunter
- acre
- ties
- measure
- la
- trout
- guidelines
- bonus
- emotional
- cow
- unique
- providing
- encouraged
- positions
- barely
- criteria
- olds
- tradition
- scares
- workers
- iran
- toys
- tornado
- moves
- ton
- recyclable
- crowded
- ladies
- melt
- crack
- finances
- score
- crawfish
- transmission
- purple
- mavericks
- eve
- babysitting
- committing
- maintenance
- exposure
- cassettes
- socially
- reagan
- soup
- hiking
- athlete
- cheesecake
- grandson
- skunk
- addison
- skied
- realistically
- profit
- emissions
- skirts
- heels
- awards
- silence
- lambs
- whatsoever
- lotus
- offering
- unquote
- forest
- phones
- miniature
- medium
- grandma
- goo
- finishing
- judicial
- penalties
- ki
- hose
- hungry
- success
- monitor
- application
- pink
- depressing
- supper
- bureaucracy
- status
- territory
- mississippi
- exercises
- preference
- peo
- packages
- broadcast
- doctorate
- scholarship
- grows
- lean
- anxious
- core
- voluntary
- minority
- couples
- ears
- crochet
- selected
- voters
- democrats
- authority
- airport
- horror
- fox
- sub
- professors
- legs
- stir
- celery
- eats
- chocolate
- cup
- asleep
- studies
- afterwards
- slip
- lap
- connection
- individually
- dependent
- foundation
- worthwhile
- fields
- freedoms
- giants
- stars
- kittens
- vet
- balanced
- homeless
- birth
- mu
- campaign
- empty
- scenes
- heads
- kicked
- messed
- arabia
- greatly
- bob
- talent
- nurses
- strike
- reached
- dedicated
- suggested
- guard
- basement
- laughing
- communication
- ghost
- abused
- token
- plane
- beating
- former
- films
- fought
- failed
- lesson
- lo
- walls
- sink
- girlfriend
- accused
- hurts
- loud
- gang
- consistent
- stereo
- fa
- struggling
- interview
- employment
- borrowed
- spoiled
- tub
- tea
- mex
- lemon
- bin
- evidently
- grant
- tremendously
- cartons
- opening
- mi
- skin
- seed
- acceptable
- filter
- golly
- sits
- coke
- followed
- basics
- psychology
- operate
- owns
- freezing
- nissan
- te
- accidents
- settle
- leader
- poverty
- dr
- masking
- fiancee
- jugs
- landfill
- heavily
- lie
- trends
- interstate
- competitive
- arguments
- weigh
- competition
- surprising
- temporary
- inclined
- overnight
- priority
- darn
- honey
- roy
- accurate
- rocks
- babysit
- priced
- twin
- le
- ban
- athletes
- lack
- pond
- muscles
- connecticut
- anyways
- pacific
- owners
- freon
- responsibilities
- toxic
- permit
- closely
- pitched
- dresses
- scenery
- kevin
- costner
- greater
- enemy
- granted
- welcome
- define
- advertising
- salesman
- reverse
- ideal
- locked
- directions
- object
- figuring
- frequently
- boot
- therefore
- jails
- murdered
- purdue
- received
- led
- picks
- include
- democracy
- studied
- fond
- climate
- alaska
- sake
- avid
- healthier
- fired
- connected
- stealing
- chances
- humane
- supported
- enjoyment
- penny
- turtles
- encouraging
- ea
- marketing
- garlic
- broccoli
- potato
- suburbs
- formal
- rush
- concentrate
- woodworking
- leaf
- cent
- automobiles
- ozone
- devices
- source
- comedies
- landing
- semi
- agent
- string
- precious
- ugly
- phenomenal
- hilarious
- winning
- doe
- mobile
- farther
- chili
- landscape
- path
- someday
- complaining
- sky
- load
- baked
- stove
- bend
- en
- command
- decides
- attacks
- wished
- ac
- yearly
- weekly
- indeed
- brief
- mike
- dealer
- emergencies
- event
- charlotte
- slapstick
- purely
- included
- unfair
- meaning
- injuries
- vermont
- cornstarch
- egg
- worrying
- wrap
- buff
- advertisements
- plain
- chores
- mention
- allows
- novels
- bases
- billion
- protected
- workout
- cancel
- daddy
- outdoors
- novel
- bruce
- awfully
- constant
- spends
- accent
- deductions
- dealt
- informed
- tournaments
- snake
- penn
- sox
- tho
- root
- rip
- combat
- polls
- sundays
- blank
- frozen
- assistance
- ads
- hiring
- drivers
- recession
- convert
- alternate
- dryer
- lightning
- gr
- chair
- emotionally
- angry
- mature
- treatment
- lousy
- seventh
- ninth
- deck
- printed
- answers
- jumping
- mentality
- popcorn
- shade
- oaks
- reasonably
- budgeting
- controlled
- british
- unreal
- mini
- performance
- tip
- ge
- handgun
- toy
- skip
- armed
- fleas
- redo
- deposit
- goldfish
- childhood
- removed
- surprises
- dodge
- consulting
- sacrifice
- placed
- sailing
- classics
- bottle
- secretaries
- diesel
- liter
- chosen
- boats
- returned
- item
- november
- adoption
- fewer
- pizza
- feature
- nebraska
- cafe
- alzheimer
- agreed
- choosing
- council
- bermuda
- suspense
- satisfaction
- winters
- headed
- murphy
- customers
- habits
- norm
- loss
- bec
- crawl
- exist
- attractive
- wor
- leg
- selection
- prob
- sources
- audience
- styles
- davis
- borrow
- goals
- determined
- accounts
- pat
- vs
- whi
- advantages
- diapers
- pin
- models
- queen
- sticks
- mesquite
- canal
- incredibly
- feeding
- importance
- salvador
- fathers
- regardless
- translation
- frustrated
- bond
- structured
- counting
- factors
- economical
- involves
- radical
- depressed
- universities
- shall
- tank
- jesus
- counselor
- proposal
- allowing
- pocket
- airplane
- gangs
- saints
- consideration
- dolls
- horses
- spouse
- midwest
- fashioned
- screw
- curriculum
- oakland
- candy
- blanket
- backpack
- industrial
- smog
- canyon
- elect
- backed
- bear
- comfort
- economically
- warmer
- sunny
- exhausted
- afternoons
- ranger
- worries
- orange
- physically
- experiment
- famous
- copies
- cardboard
- pa
- demand
- polluted
- tail
- compatible
- wordperfect
- drag
- float
- carter
- presidential
- dug
- israelis
- relations
- arab
- rings
- estate
- salaries
- recognition
- headline
- nowhere
- ratings
- asia
- ei
- lifestyle
- tenth
- preparing
- cookies
- fifteenth
- bait
- experienced
- defendant
- surprise
- cocaine
- reminds
- liquid
- destroy
- century
- admire
- rare
- tuned
- schwartzkopf
- reduced
- cruel
- cheers
- picnic
- accounting
- pace
- jane
- tune
- knees
- holy
- owe
- pepper
- worms
- bricks
- mound
- additional
- flow
- tended
- refuse
- landfills
- stance
- cry
- dumping
- memories
- anyplace
- geared
- arrangements
- depth
- tuesday
- raw
- neighborhoods
- policemen
- net
- located
- trail
- edition
- purchases
- injury
- beliefs
- statements
- sin
- cultural
- shorter
- guilt
- 'false'
- economics
- enormous
- lifetime
- advanced
- adopt
- mechanical
- liters
- dream
- bachelor
- nasty
- scare
- laundry
- strikes
- quilt
- chlorine
- shed
- whom
- ds
- convince
- courtroom
- volleyball
- domestic
- stomach
- concerts
- stepfather
- typewriter
- clouds
- rating
- gifted
- generals
- clip
- screwed
- australia
- maine
- quarters
- chrysler
- oldsmobile
- pistol
- membership
- seldom
- supply
- tornadoes
- hu
- oth
- porch
- persian
- lakers
- tarpley
- seattle
- thrilled
- boards
- brian
- roughly
- paints
- attic
- ceilings
- baths
- pig
- killer
- pros
- paris
- brooks
- dealership
- developing
- islands
- kennedy
- ending
- ratio
- created
- separated
- lasts
- wives
- jean
- spaghetti
- village
- biased
- operating
- enid
- crappie
- employers
- conference
- tuna
- tole
- pollutants
- jones
- handling
- emission
- vary
- initially
- finds
- obligation
- select
- carefully
- barrier
- strangest
- spaniel
- blues
- comparison
- attend
- focused
- ver
- blacks
- jurors
- floors
- spell
- wears
- heel
- wooden
- assistants
- accustomed
- mild
- bands
- bang
- alrighty
- campbell
- tours
- panama
- believes
- corrupt
- cocoa
- interestingly
- makeup
- communism
- etcetera
- historical
- heating
- hispanic
- bilingual
- ultimate
- bicycling
- elsewhere
- scientific
- combine
- ar
- consequences
- gal
- cure
- grader
- corporations
- stitching
- grief
- leading
- graphics
- regards
- rank
- personalities
- mission
- whiz
- voter
- controlling
- believed
- minded
- kyle
- author
- certified
- shelter
- historically
- protecting
- fits
- carrots
- knitting
- professionally
- specialty
- jars
- needlework
- robert
- regarding
- billions
- rental
- nolan
- ruined
- searching
- taco
- mama
- relationships
- exchange
- highways
- handicapped
- scouting
- discouraging
- dropping
- electricity
- stacks
- catalytic
- muffler
- pipe
- error
- compete
- cajun
- haul
- discussing
- kurds
- anti
- orchestra
- needle
- ireland
- investments
- dramatically
- drawback
- raises
- growth
- definition
- guatemala
- receiving
- reported
- aikman
- shoulder
- banking
- highest
- jimmy
- jim
- cardinals
- jamaica
- magic
- convictions
- usage
- hamburgers
- sporting
- muscle
- sophisticated
- element
- occur
- designated
- depression
- covering
- tooth
- filling
- sharp
- strawberry
- relax
- advise
- enter
- throat
- instances
- allowance
- stronger
- debate
- literature
- shelves
- remove
- advertised
- progress
- smith
- richard
- raped
- offense
- detail
- christians
- tore
- accomplish
- released
- loaning
- bright
- intense
- dies
- peas
- steaks
- spicy
- conditioner
- convenience
- drought
- cups
- nee
- russians
- yeltsin
- thirds
- acting
- northwest
- freeway
- curbside
- corpus
- publicized
- mets
- memorial
- onion
- garages
- employed
- lazy
- wrestling
- crab
- loaded
- stationary
- coupons
- ripped
- balances
- convict
- loving
- represent
- judgment
- pork
- wasted
- selecting
- recover
- divide
- civic
- builds
- quicker
- translate
- churches
- slice
- discount
- swear
- nap
- centered
- vitamins
- planes
- contractor
- drastically
- elaborate
- continued
- decline
- uncles
- utilities
- camera
- musicians
- musician
- condominium
- augustine
- tolerant
- southwest
- counselors
- mirrors
- communicate
- worker
- medication
- powerful
- manure
- replaced
- redone
- shotgun
- memphis
- turtle
- supreme
- owning
- cycle
- jay
- airline
- sir
- method
- mayonnaise
- execution
- plea
- mower
- buttons
- campaigns
- log
- quarterbacks
- hamburger
- arizona
- ignore
- bred
- indianapolis
- envelope
- conversion
- hail
- flooding
- spanked
- fluid
- bay
- leather
- italy
- locations
- blew
- extensive
- traded
- transition
- kilometers
- robbing
- kills
- cadillacs
- randomly
- institute
- triangle
- mercury
- volvo
- dan
- leads
- pe
- rome
- attraction
- aunts
- latex
- texoma
- rabbit
- audi
- methodist
- basements
- tee
- clarinet
- walker
- massive
- stroke
- leak
- sites
- deals
- lined
- embarrassed
- slab
- officially
- behavior
- examples
- witness
- wishes
- unlisted
- terminal
- modem
- poodle
- weighs
- paul
- subscription
- chapter
- likewise
- documents
- shoe
- miserable
- jacket
- lax
- varies
- peach
- blows
- disco
- suicide
- bo
- downhill
- profitable
- twenties
- official
- pressures
- image
- monies
- absentee
- senate
- ethnic
- involve
- proven
- offenders
- afghans
- borders
- peaceful
- ab
- blown
- lock
- adequate
- scholarships
- offers
- bat
- injection
- useless
- revolution
- mormon
- enforce
- cosby
- preapproved
- fortune
- messing
- promised
- sum
- frankly
- damn
- gravy
- boil
- remembered
- consuming
- metropolitan
- gift
- seeds
- factories
- layer
- costly
- usual
- cooler
- daytime
- appearance
- sufficient
- balcony
- chasing
- chest
- las
- plumbing
- farming
- becau
- cleaner
- packed
- cried
- lover
- indians
- racial
- occasional
- rivers
- pollute
- locally
- contribution
- presentations
- laser
- represented
- guests
- apples
- hank
- closest
- oak
- missionaries
- rob
- mailing
- ring
- bias
- newsweek
- nicely
- tables
- zone
- faith
- cheapest
- excuses
- fail
- administrator
- baylor
- sued
- emotions
- appeared
- notes
- tying
- nail
- shake
- comp
- entry
- peer
- sore
- sticky
- pudding
- knowledgeable
- haze
- mass
- stressed
- academy
- considerably
- rowlett
- shortly
- nose
- ordered
- crying
- handed
- wages
- input
- praying
- warfare
- accomplished
- woke
- regulation
- equivalent
- bankrupt
- jog
- ell
- ri
- appeals
- extraordinary
- metroplex
- absolute
- conclusion
- accountable
- glory
- pray
- prisoners
- bomb
- destroyed
- testament
- pu
- suggest
- polish
- principle
- gardener
- beets
- behave
- periods
- shrubs
- sprinkler
- fajitas
- describe
- release
- motorcycle
- bound
- styrofoam
- valuable
- tolerate
- attempt
- jordan
- exists
- screaming
- stump
- breathing
- selfish
- dick
- blonde
- maximum
- max
- secret
- holds
- landscaping
- reads
- prevalent
- galveston
- weirdest
- joy
- nationwide
- soda
- coin
- dukakis
- steam
- embarrassing
- plates
- incorporate
- deductible
- machinery
- categories
- funded
- chairs
- recommended
- handicap
- bowling
- meantime
- accord
- tyler
- mosquitoes
- booklet
- coaches
- syria
- dinners
- holiday
- baltic
- priorities
- recognized
- wipe
- longest
- suburban
- delayed
- backgrounds
- varied
- eighth
- den
- coats
- theme
- nicest
- penney
- adjust
- hou
- toilet
- bullet
- rapidly
- capabilities
- hilly
- container
- layoff
- watches
- jewelry
- maker
- infant
- resent
- blade
- watering
- wildlife
- decorating
- fabric
- leadership
- privilege
- exotic
- loop
- seasoning
- chopped
- retiring
- backseat
- par
- leukemia
- ammunition
- barrel
- pontiac
- mazda
- expressway
- administer
- unions
- function
- stopping
- organize
- parenting
- schedules
- slept
- wheels
- resource
- competing
- sees
- careers
- pits
- carpeting
- legislature
- functional
- divorce
- bridge
- transfer
- needlepoint
- cookbook
- breast
- published
- portland
- throws
- counts
- larry
- louisville
- com
- glued
- tube
- slide
- protective
- felony
- dursban
- renting
- rebuild
- london
- shingles
- lea
- stink
- puppies
- schnauzer
- steering
- plugs
- mechanic
- worn
- inflation
- diving
- stretch
- purse
- introduced
- stripped
- occupied
- siamese
- controversy
- buick
- religiously
- allergic
- edges
- sail
- nancy
- biographies
- nonfiction
- thunderstorms
- intend
- educate
- nerve
- recordings
- concentration
- steve
- academic
- freshman
- sophomore
- neutered
- ponds
- disgusting
- narrow
- comparing
- associate
- adjusted
- cottage
- foster
- rake
- outstanding
- appreciated
- malpractice
- thankful
- personnel
- selective
- administrative
- comparable
- pier
- contributing
- cart
- explore
- commits
- affair
- cleveland
- glasses
- downstairs
- details
- backpacking
- blackberries
- alternator
- antilock
- peeves
- chris
- billy
- henry
- smooth
- polluting
- sweats
- fever
- sweater
- wyoming
- filmed
- guts
- respond
- theories
- database
- culturally
- threatened
- tears
- messages
- ear
- bark
- grandpa
- versions
- lee
- wave
- analysis
- gear
- comments
- colorful
- photography
- victims
- resolution
- stiff
- brazil
- minister
- interpret
- hero
- lebanon
- declare
- heritage
- escape
- columbia
- prescriptions
- assumption
- berkeley
- combined
- traditionally
- relaxation
- entering
- regulate
- consciousness
- react
- sexual
- proved
- booze
- cloth
- herald
- instructors
- vested
- consultant
- taxpayer
- lethal
- restricted
- pub
- directed
- frequent
- tempted
- hat
- treadmill
- abilene
- hates
- skinny
- turnout
- bouncing
- wayne
- beforehand
- deserves
- ninja
- expand
- probation
- eliminated
- yogurt
- powder
- boyfriend
- blankets
- alarm
- vacuum
- chop
- strips
- ruin
- knots
- bits
- rogers
- guessing
- addicted
- pitcher
- fingers
- rascal
- whip
- ag
- vegas
- response
- advocate
- donate
- proposed
- emphasis
- transit
- carpool
- map
- sheets
- punch
- calories
- strenuous
- laboratory
- resolve
- serves
- drum
- compact
- tigon
- initial
- moms
- identify
- respected
- vision
- visits
- eagle
- summary
- illustrated
- dial
- extraordinarily
- intelligence
- stages
- troy
- injured
- increases
- joints
- dayton
- mary
- deduct
- administrators
- pressing
- contest
- arguing
- marked
- seek
- gross
- roberts
- mentally
- session
- failing
- occasions
- videotape
- clever
- jerry
- mutant
- warning
- intellectual
- approve
- declared
- hallway
- edging
- pressed
- strawberries
- nieces
- sour
- homemade
- trick
- mixture
- solar
- inspection
- global
- winner
- drawn
- trace
- sympathetic
- managing
- anchors
- sulphur
- chuck
- overcrowded
- stole
- dean
- steven
- bi
- thursdays
- appear
- collapse
- dome
- flex
- stressful
- ok
- paroled
- apt
- patient
- injustice
- farmer
- socialized
- snap
- clay
- wintertime
- beaches
- touching
- curb
- clippings
- flowerbeds
- toes
- buffer
- hardware
- republic
- battle
- heading
- units
- shadow
- yankees
- rounded
- immigrant
- diseases
- caesar
- saves
- nephews
- slowed
- grounds
- snakes
- abilities
- missiles
- nova
- pen
- digging
- drew
- pools
- strung
- port
- sticking
- orioles
- hopes
- ov
- fertilizer
- railroad
- rub
- robberies
- theft
- tourist
- sta
- stood
- eligible
- freshwater
- saltwater
- shark
- fool
- commute
- deciding
- fam
- terrific
- catalogs
- froze
- ethic
- controversial
- crossed
- georgetown
- soy
- hoi
- pasta
- dreams
- painful
- filthy
- innocence
- leaning
- cleared
- feasible
- perception
- lottery
- parochial
- announced
- ll
- gallons
- kindercare
- behavioral
- classrooms
- merchandise
- washer
- refrigerators
- tinker
- supplies
- stimulation
- alert
- furthest
- cease
- reward
- biology
- starter
- prairie
- drill
- johnny
- experiments
- exercised
- paneling
- tougher
- strain
- noisy
- instill
- housework
- gap
- auditor
- dot
- maternity
- butler
- amarillo
- mulch
- actions
- lawsuits
- senators
- anniversary
- bonding
- leisure
- fertilize
- dragging
- decorated
- statewide
- format
- skeptical
- pad
- mode
- justify
- budgets
- seniors
- chief
- efforts
- hispanics
- drastic
- frost
- layoffs
- temperatures
- airlines
- hoses
- safer
- nails
- salads
- clients
- vans
- surely
- pulls
- operation
- sells
- bikes
- unable
- permanently
- slight
- rifle
- impulse
- manual
- handguns
- gauge
- someth
- youngsters
- karate
- hotels
- demanding
- wool
- warnings
- sanctions
- attract
- mysteries
- tenths
- pots
- neglected
- sliced
- leagues
- bulls
- celtics
- struggle
- qualify
- bars
- lucked
- cliff
- cabins
- relaxed
- gates
- oregon
- loads
- crystal
- fumes
- previews
- floating
- reviews
- peaks
- poorer
- matters
- continues
- costa
- geographic
- earthquake
- intrigued
- ain
- albums
- singapore
- proof
- bulb
- spayed
- fr
- skating
- robbery
- sector
- horn
- drafting
- premeditated
- frustration
- radiator
- boundaries
- bureau
- belonged
- nephew
- officers
- serger
- seam
- choral
- dating
- genuine
- requirement
- gradually
- asians
- establish
- effectively
- reel
- ra
- steady
- produces
- switzerland
- calm
- anthony
- suzuki
- plymouth
- sized
- thread
- centimeters
- recorder
- signal
- brands
- resolved
- converted
- dumped
- spur
- trap
- yell
- smarter
- humanities
- amherst
- sheriff
- safely
- completed
- equally
- labs
- foam
- sociology
- entertained
- lobster
- title
- recommendation
- residential
- vicious
- lease
- outer
- honesty
- switching
- freezer
- tollway
- heavier
- bahamas
- sperry
- rollers
- mowed
- cougar
- chi
- crooks
- lips
- remodeled
- cocker
- eigh
- syndrome
- overweight
- titles
- lettuce
- gather
- span
- greenville
- drip
- senator
- dam
- zip
- lexus
- peninsula
- counseling
- grapevine
- parental
- branch
- travels
- atlantic
- screening
- thr
- veterans
- substance
- golfers
- golfer
- manually
- carbon
- disposition
- harrison
- putt
- disability
- marry
- infants
- engaged
- braves
- mums
- provo
- boots
- commercialized
- replacing
- moisture
- assign
- router
- saws
- translators
- alleviate
- acquainted
- caring
- incinerator
- receipt
- scrub
- setup
- hazardous
- wardrobe
- jackets
- blouses
- suspenseful
- graphic
- gary
- monitoring
- hacker
- india
- desirable
- invite
- reaction
- fantasy
- shocking
- recorded
- addresses
- rig
- instructions
- faced
- advances
- paperwork
- tongue
- cha
- accommodate
- motion
- performed
- composer
- horrendous
- beatles
- crop
- applying
- budgeted
- coda
- seminars
- challenging
- righty
- cave
- dragged
- conscientious
- lenient
- warehouse
- managers
- windy
- allergies
- flu
- inordinately
- cinderella
- shoulders
- progressive
- cam
- colonial
- nicaragua
- exception
- translations
- scream
- independence
- cope
- economies
- tropical
- consequently
- difficulties
- plead
- disturbed
- correlation
- movements
- athletic
- stoned
- invested
- coincidence
- analyze
- chip
- miracle
- fif
- kee
- inmates
- external
- civilian
- trapped
- ghetto
- amenities
- clutch
- disposable
- makers
- pursue
- organ
- blast
- pluses
- racquetball
- lobbyists
- republicans
- outskirts
- carpenter
- buck
- predict
- backwards
- wok
- sweets
- ugh
- tablespoon
- singer
- shops
- singers
- stockings
- mirror
- crocheting
- zucchini
- voices
- pockets
- exhaust
- oxides
- victimized
- cynical
- colder
- castle
- listed
- deliberately
- spoken
- adventure
- repeats
- imagination
- viewing
- bench
- catcher
- bull
- corners
- dustin
- hoffman
- kmart
- concerning
- bulk
- accepting
- eerie
- na
- properties
- lying
- sturdy
- logic
- dated
- slick
- separating
- talented
- raiders
- device
- macintosh
- statistical
- sausage
- italians
- canoe
- thrill
- honeymoon
- arabs
- defending
- stability
- pops
- musicals
- sends
- asks
- ringing
- versa
- opens
- offhand
- dana
- envision
- philosophical
- charity
- volunteering
- commentaries
- informal
- commentary
- viewpoint
- independently
- sections
- nope
- firmly
- forcing
- flags
- gathered
- gett
- neil
- jagged
- awakening
- julia
- beside
- initiated
- pole
- kidnapping
- witnesses
- handles
- panel
- refined
- portions
- moments
- accessible
- hollywood
- norman
- assets
- tire
- pursued
- factory
- au
- romance
- fuels
- presentation
- closets
- hips
- rated
- publish
- protestant
- females
- crowds
- poorly
- identified
- buys
- stuffed
- chamber
- brass
- arrest
- productive
- ticks
- earned
- prisoner
- reimbursement
- spiritual
- z
- pronounce
- riskier
- protection
- consistently
- endless
- charles
- rebellion
- pacifist
- curse
- unto
- spirit
- barbara
- bombs
- tearing
- struck
- heaven
- theaters
- northeast
- licensed
- reducing
- peoples
- lithuania
- damaged
- bacon
- worm
- bug
- sprays
- bloom
- rye
- leasing
- nightmare
- beautifully
- washing
- nurseries
- neglect
- mixes
- frying
- guacamole
- disc
- populated
- cooperation
- bundle
- nickel
- rely
- insulation
- powers
- soldiers
- leery
- iraqi
- germans
- safest
- appears
- whoa
- republics
- participation
- reference
- disgusted
- hauling
- permitted
- orientals
- excluded
- stone
- sack
- crush
- fills
- crap
- fisher
- leap
- interact
- publicity
- brooklyn
- idiot
- easter
- vines
- extensively
- fou
- extras
- shootings
- knife
- outcome
- pensacola
- fished
- interviews
- disappointing
- overworked
- speedy
- apathy
- juror
- ann
- appointed
- spite
- ballot
- counter
- appetite
- technician
- complaints
- begins
- reaching
- referred
- influences
- swayed
- award
- slips
- stranded
- bankruptcy
- users
- socialize
- boom
- secondary
- captured
- backward
- intellectually
- bean
- measured
- remind
- bolt
- swung
- dryers
- extension
- hooks
- trinity
- lasting
- hatred
- snack
- altogether
- heal
- restore
- restored
- deeper
- strength
- link
- graders
- noticeable
- lowering
- preferred
- remarkably
- baroque
- barry
- townhouse
- fertilizing
- decade
- slower
- pl
- hop
- creates
- alternatives
- gains
- operated
- forgetting
- detector
- deliberate
- cycling
- legally
- bridges
- prize
- adolescents
- gamut
- slant
- fascinated
- baskets
- glue
- collector
- accountant
- rides
- def
- remote
- professions
- suggesting
- crafty
- remembers
- bears
- identical
- burns
- basket
- believer
- document
- korea
- lasted
- meatballs
- waist
- rear
- stretching
- fold
- kroger
- linoleum
- angle
- wo
- diverse
- buyer
- bullets
- banning
- bargain
- breeding
- humor
- evil
- q
- illness
- peop
- oldsmobiles
- fiance
- bodied
- educating
- showers
- mud
- connect
- bothering
- rebuilding
- kuwaiti
- possibilities
- overcast
- cloudy
- hurricanes
- forecast
- ru
- therapist
- scott
- rugs
- angel
- wheat
- editor
- caretaker
- liking
- kiss
- inevitably
- chat
- unhappy
- comfortably
- litt
- variation
- protest
- fences
- samples
- messy
- affectionate
- disabled
- barking
- production
- kelly
- corvette
- fanatic
- towel
- firing
- coaching
- presents
- burglar
- overcrowding
- lane
- imprisonment
- arrested
- asian
- wrecked
- beauty
- olympics
- conviction
- playground
- garth
- rs
- jam
- literary
- cre
- execute
- cartoon
- nearby
- fundamental
- ribbon
- bobby
- montessori
- sofa
- fetched
- rolled
- sewed
- starters
- crocheted
- liberties
- nintendo
- majoring
- associated
- threatening
- freezes
- traction
- perspectives
- southeast
- carp
- advertise
- pint
- merit
- durham
- meryl
- snowed
- advisors
- terrorism
- sectors
- joint
- terrain
- citizenship
- melted
- ounces
- ounce
- keys
- races
- smokers
- sensible
- bradshaw
- hip
- af
- richmond
- sen
- readily
- consistency
- canned
- enforcement
- contracts
- cons
- differ
- suffer
- tool
- specialist
- flies
- confidence
- esteem
- ironing
- inexpensive
- slots
- buffet
- cuisine
- congressman
- persuaded
- minorities
- stranger
- brush
- coastline
- blind
- cape
- dow
- partially
- calcium
- vast
- abroad
- museum
- physician
- physicians
- redid
- erie
- cooperative
- survival
- har
- exac
- intentionally
- affecting
- urine
- grandkids
- agricultural
- beam
- display
- constitution
- capitol
- ordinary
- babysat
- aggressive
- journalism
- grad
- tia
- olive
- collin
- casserole
- cakes
- operas
- accents
- almo
- oprah
- tiles
- tile
- trillions
- struggled
- tips
- tulsa
- museums
- sailboat
- perch
- styling
- seville
- rotten
- ken
- dentist
- maverick
- medicare
- douglas
- leased
- insane
- madison
- dock
- subdivision
- pouring
- wooded
- departments
- airplanes
- pilots
- premium
- ol
- liberty
- malls
- fossil
- produced
- bumper
- purchasing
- gentleman
- tribe
- wordstar
- rinse
- santa
- broth
- thomas
- addressed
- unconsciously
- enchiladas
- slickers
- rib
- lawry
- housekeeping
- opener
- doll
- sierra
- nuskin
- legend
- ruben
- batteries
- drywall
- disturbing
- relief
- devastating
- confined
- strides
- incineration
- drums
- cement
- leaked
- presently
- semiconductor
- firms
- foremost
- hoods
- sample
- client
- update
- predominantly
- gory
- dancing
- inherent
- harmed
- sneak
- invisible
- obligated
- invariably
- supervisors
- dentists
- chew
- randy
- understandable
- springer
- artist
- stardom
- taylor
- synthesis
- adapt
- pla
- labeled
- label
- attended
- manuals
- stephen
- stimulating
- improvements
- veterinarian
- serial
- wrongly
- preschoolers
- conditioned
- detailed
- unload
- highs
- collar
- identification
- stones
- zoo
- owens
- sandinistas
- greedy
- kings
- roosevelt
- bananas
- tempting
- lessened
- performances
- greek
- plots
- sean
- statehood
- quo
- assuming
- significantly
- woul
- ve
- occurring
- stringent
- troubled
- resistance
- regional
- disastrous
- practices
- alternates
- approved
- believing
- joe
- iraqis
- habitual
- bone
- dope
- threaten
- inventory
- bibs
- tasted
- afghan
- quilts
- riot
- earning
- backup
- christ
- begun
- guaranteed
- beats
- monetary
- ne
- involving
- punishable
- instantly
- hog
- logistics
- joining
- tutor
- doggone
- hats
- remodeling
- allen
- cabinets
- motivate
- inspired
- computerized
- pers
- extremes
- willingness
- excitement
- jacobs
- architect
- lump
- shared
- evaluate
- exclusive
- expanded
- tablespoons
- ginger
- peanuts
- sang
- choirs
- finals
- aggravated
- okra
- ruled
- landmark
- restrictions
- smack
- investing
- drier
- hotter
- orlando
- adventures
- scrap
- battery
- timing
- boeing
- alcoholic
- sullivan
- continuing
- ukraine
- adjustments
- astros
- claws
- declawed
- rushed
- stray
- void
- chase
- messes
- procedures
- underwear
- skill
- politician
- mitch
- caddo
- prizes
- lids
- files
- tra
- questioned
- wolf
- thunder
- howl
- buffaloes
- honduras
- wealth
- contributes
- wider
- soak
- installed
- converter
- authorities
- visible
- ash
- suspected
- agencies
- mouse
- printout
- producing
- unix
- blueberry
- hike
- overly
- baker
- assault
- restraint
- enj
- danny
- couch
- arnold
- ridge
- gene
- clo
- unemployed
- ahold
- dislike
- equality
- mistaken
- aged
- quoted
- harsh
- realizes
- upstate
- expend
- brinkley
- complaint
- slanted
- restricting
- halls
- wheelchair
- supervised
- terry
- monstrous
- drawbacks
- fights
- learns
- fallen
- challenged
- rewarding
- mailed
- snowing
- ni
- wreck
- amongst
- misery
- schwarzenegger
- goofy
- entered
- rationale
- prosecutor
- excused
- bare
- lawsuit
- audio
- teti
- eh
- lacking
- memorable
- wisdom
- succeed
- jokes
- frenchman
- liability
- workmen
- executives
- marijuana
- surface
- lengths
- fondue
- cheddar
- watermelon
- saucepan
- lukewarm
- cookbooks
- collected
- saran
- hollow
- warming
- spa
- bathing
- incur
- institutions
- freshmen
- sinking
- description
- graduates
- nelson
- commerce
- recruiting
- homemaker
- cri
- ankle
- install
- sympathy
- burnt
- episode
- awesome
- scandal
- grasp
- multiple
- fonda
- tolerance
- enforced
- lighter
- enemies
- gentle
- avoided
- approaches
- sheep
- grace
- reserve
- claimed
- abusing
- borrowing
- servants
- stops
- moist
- ass
- kin
- trimmed
- varieties
- experimenting
- mashed
- foo
- barbecued
- barbecues
- marinate
- manages
- sacks
- giant
- pact
- confused
- stepping
- seams
- michener
- blooming
- stewart
- tim
- rebel
- grammar
- yankee
- restriction
- biblical
- paychecks
- request
- stable
- diego
- lush
- ga
- limb
- flooded
- strokes
- animated
- muddy
- sharks
- quantum
- partners
- deedee
- formula
- subtle
- solved
- tow
- bounds
- rooting
- championship
- toronto
- ontario
- cabbage
- cantaloupe
- siding
- twist
- sirens
- reminded
- affluent
- bee
- captain
- tackle
- advancement
- isolated
- destroying
- foggy
- regulating
- cigarette
- linguistics
- canadian
- payless
- cashways
- bucket
- cereal
- maxed
- rally
- richards
- convention
- everytime
- mar
- dairy
- doubts
- pursuing
- flight
- crew
- oops
- misses
- amazingly
- punished
- suited
- flexibility
- rehabilitate
- deduction
- debit
- executive
- requested
- implemented
- disadvantage
- shoddy
- naive
- moscow
- marcos
- shoots
- blessed
- cad
- noon
- formed
- bargains
- circuit
- dissertation
- serviceable
- roughing
- cots
- condo
- poles
- locks
- ob
- hearts
- passover
- seder
- catholics
- attacking
- syrian
- bagels
- affairs
- iranian
- ideals
- dividend
- voluntarily
- devote
- performing
- pipes
- arteriosclerosis
- nonexistent
- torn
- outfits
- prejudice
- invited
- remembering
- remedial
- certification
- textured
- insides
- tone
- tornados
- exxon
- brain
- photographer
- audit
- mainframe
- jet
- upgraded
- baghdad
- scheduled
- receptacles
- continual
- potentially
- prestige
- perceived
- trivial
- broader
- sided
- claims
- adjustment
- tread
- richland
- discouraged
- stepdaughter
- sacrificed
- possession
- castroville
- timer
- shady
- lehrer
- editorial
- embroidery
- envelopes
- continuous
- typing
- claude
- aging
- attending
- trainable
- watered
- composition
- dis
- disabilities
- intentions
- inter
- gay
- facing
- interviewed
- seasonal
- patch
- peculiar
- rec
- brilliant
- invest
- payday
- buddies
- wiped
- indoors
- fiddle
- inspect
- peel
- hors
- impress
- ridden
- objects
- surprisingly
- servicemen
- teeny
- equitable
- tier
- stair
- targets
- knocked
- accuracy
- impressive
- cycles
- writers
- rehabilitated
- fleet
- drops
- quarts
- peeve
- sa
- pregnancy
- meets
- campsite
- specialized
- indicated
- beings
- obnoxious
- stereotype
- communist
- sway
- soviets
- monetarily
- circle
- blah
- carnival
- outs
- indication
- gigantic
- ownership
- feeds
- latch
- pansies
- cau
- screened
- references
- tabs
- steamed
- blueberries
- desserts
- sandwich
- slices
- mba
- describing
- duke
- mechanics
- secorski
- financing
- punishments
- whack
- addiction
- '7'
- specials
- climbing
- shells
- spectrum
- ins
- ants
- painter
- painters
- noises
- rats
- sequel
- rocky
- stallone
- pai
- exterior
- afterward
- greasy
- builders
- intervention
- solving
- appliances
- fu
- hesitant
- incorrectly
- lizards
- bats
- evils
- refugees
- permission
- dive
- instituted
- parked
- landry
- scope
- eagles
- cows
- orders
- tokyo
- subway
- remorse
- heinous
- manufacturer
- occupation
- neal
- brushes
- manhattan
- stud
- leftover
- coll
- rifles
- shelf
- robbed
- temporarily
- inconvenient
- limitations
- spelling
- precise
- commodore
- specifications
- belief
- aggravates
- nev
- bites
- knox
- overheard
- rows
- frederick
- pointed
- stu
- rusty
- reelected
- loses
- pretend
- symptoms
- biography
- destroys
- delicate
- speakers
- happier
- grub
- raiser
- petroleum
- menial
- jeff
- blink
- recommending
- diner
- streep
- copper
- explosives
- disappear
- cosmopolitan
- swimmer
- vogue
- felon
- converting
- bolts
- ross
- ro
- reject
- outfit
- automotive
- mexicans
- envious
- risking
- shifts
- cylinder
- gaining
- tragic
- expressing
- expression
- chilly
- yorker
- dall
- deny
- bonuses
- lucrative
- congressmen
- portray
- needing
- scallops
- susan
- protein
- gained
- baking
- academically
- kenyon
- admissions
- sciences
- provides
- preparation
- logical
- cage
- owed
- devastated
- despite
- pillsbury
- surrounding
- prosecution
- liable
- limitation
- writes
- follows
- nash
- paso
- juice
- reusable
- procedure
- vegetation
- bach
- delivery
- rapes
- thou
- contemporary
- brookhaven
- heater
- curiosity
- fuse
- assembly
- limestone
- danger
- ferry
- ducks
- pilgrimage
- annoyance
- seniority
- ben
- partner
- executed
- healing
- darker
- diff
- routes
- touring
- footage
- abandoned
- retain
- warped
- leslie
- mockingbird
- tricky
- steep
- overwhelming
- killers
- calendar
- faculty
- bingo
- fog
- rationing
- visas
- awareness
- howard
- repairing
- bathrooms
- upside
- symbol
- conception
- veteran
- daylight
- babysitters
- valentine
- ideally
- driveway
- digest
- danielle
- severely
- confident
- idaho
- searched
- appointment
- givers
- pappasito
- dillard
- expertise
- tasty
- publisher
- reruns
- soaps
- repaired
- theatre
- cedar
- mainstream
- refer
- tina
- secure
- rockets
- loo
- contacts
- carpooling
- appalachian
- adventurous
- hostages
- fatal
- patients
- '2'
- sunfish
- donated
- shepherds
- joey
- treats
- researcher
- unnecessary
- stucco
- payroll
- scan
- conductors
- versed
- midway
- beard
- princess
- naked
- custom
- mount
- marshmallows
- mommy
- committee
- allegedly
- tap
- woodstock
- routinely
- rod
- tuesdays
- patterned
- czar
- donald
- booked
- intent
- granddaughter
- chips
- sedan
- discounts
- inn
- dent
- crib
- deliver
- schutzhund
- alsatian
- refused
- nola
- grapes
- marinated
- maxima
- oahu
- conferences
- newly
- kauai
- maui
- hunters
- concentrated
- bakery
- hay
- sleeve
- niro
- builder
- curtain
- spain
- crust
- intriguing
- reimbursed
- licenses
- physics
- reaches
- donahue
- cruises
- nassau
- olives
- lodge
- grandsons
- acoustics
- waves
- uniforms
- fancier
- mesa
- dalmatians
- soapdish
- mushroom
- milwaukee
- violin
- harpsichord
- rumor
- disneyworld
- thinner
- carolyn
- risque
- saxophone
- jodie
- hopkins
- credibility
- barbies
- motel
- wendy
- broncos
- chico
- troop
- warranties
- picky
- aberdeen
- solicitors
- autumn
- nevada
- marlin
- operations
- exhibit
- shuttle
- wycliffe
- sheltie
- particulates
- colombo
- duties
- burner
- hometown
- permits
- contributions
- astronomical
- attire
- blazer
- critics
- omaha
- disturbs
- politeness
- polite
- presumably
- conscience
- canceled
- respects
- norms
- rang
- solicitations
- gossipy
- obtained
- frequency
- turf
- soliciting
- medications
- chow
- smiling
- leash
- acts
- gin
- dispute
- reactions
- intimidated
- alm
- inundated
- switches
- influenced
- rhythm
- sim
- mus
- jimi
- hendrix
- pitiful
- promise
- simon
- qualities
- achieve
- unexpected
- alw
- loaned
- quota
- holler
- leeway
- pains
- wing
- coordinated
- spelled
- skid
- counsel
- violation
- actu
- modeling
- lyrics
- oldies
- phil
- collins
- criticize
- suggestions
- petting
- farms
- exit
- determination
- preservation
- ted
- teddy
- underclass
- considerable
- watcher
- gathering
- sexually
- justified
- territories
- capita
- carefree
- taxing
- weak
- territorial
- resist
- attempts
- craze
- uni
- subscribed
- tractors
- regulated
- cal
- organic
- weaponry
- tanks
- offender
- cured
- slave
- foul
- flipping
- shades
- acclimated
- squares
- tapped
- jerusalem
- fearful
- interrupt
- interrupted
- erase
- monterey
- jose
- ram
- supplement
- standardized
- overtime
- amazes
- circumstance
- summons
- conservation
- indestructible
- littlest
- missionary
- wrapped
- ellen
- toyotas
- preferences
- rag
- straw
- wallpapering
- hoe
- vo
- tubes
- dulles
- incoming
- eldorado
- coun
- tenure
- evaluation
- assigned
- flatter
- chickens
- curry
- overextended
- compl
- housewife
- simmer
- yarn
- demo
- ensemble
- bas
- transmissions
- frivolous
- sessions
- grind
- ranges
- quits
- disconnected
- substances
- etched
- notion
- redeeming
- grabbing
- scrape
- por
- funniest
- rotted
- harvest
- adaptations
- mining
- incaviglia
- excess
- exhibition
- da
- nightmares
- biscuits
- echoes
- actress
- believable
- drafted
- truman
- snider
- extend
- planet
- packing
- dumpsters
- awakenings
- deniro
- actors
- ser
- garp
- attacked
- ralph
- rapid
- agreements
- forests
- polluters
- penalize
- undergrad
- output
- sensational
- failure
- fattening
- catered
- brownies
- crock
- downy
- delta
- cooled
- duplicate
- clearing
- pheasant
- genuinely
- capability
- shield
- agenda
- coup
- briefly
- context
- governors
- irish
- reserved
- collectors
- ole
- antique
- eights
- irate
- noticing
- solo
- shipped
- dramatic
- grateful
- segments
- updates
- trite
- platter
- inc
- incidences
- estimate
- walter
- cronkite
- mold
- efficiency
- spouses
- widely
- redskins
- lynn
- deaths
- observe
- educators
- nother
- visual
- graded
- objectives
- principals
- passes
- poli
- interaction
- prescribed
- breakthrough
- fake
- fears
- web
- housewives
- awake
- reservations
- suggestion
- genre
- innovative
- umbrella
- annoyed
- myth
- proportion
- generational
- exams
- gung
- essential
- pushers
- cathy
- sassafras
- dye
- barn
- outlets
- hollering
- dents
- scratches
- layers
- swiss
- cauliflower
- trays
- pans
- boiling
- vanilla
- custard
- unsweetened
- spoon
- freons
- officials
- disaster
- contributor
- analyzing
- respiratory
- powered
- desired
- trainer
- butt
- psychological
- majors
- staggering
- hamilton
- tracy
- protesting
- prejudices
- dale
- willie
- summoned
- questionnaire
- skipped
- bail
- hebert
- mangione
- breeze
- fairer
- regulations
- seriousness
- darkness
- remem
- judith
- dedicate
- owes
- domino
- insured
- backing
- risks
- devalued
- magnitude
- taped
- breakdown
- beep
- murderers
- murderer
- insanity
- slap
- wrist
- merry
- reinstated
- atrocities
- prayer
- premature
- pushes
- offend
- ridiculously
- bind
- identity
- bombed
- keepers
- deducted
- offset
- owing
- giveaway
- immigrants
- seeking
- insects
- daffodils
- bud
- dandelions
- plagued
- tiller
- trie
- plum
- fescue
- dries
- greenbelt
- cracks
- smokey
- megahertz
- samna
- proficient
- poison
- reused
- mash
- heights
- lone
- vicksburg
- handful
- futuristic
- patrick
- foggiest
- soldier
- buckets
- tot
- immigrate
- render
- fab
- principles
- payoff
- incinerators
- smelled
- ozarks
- disappeared
- tad
- tiers
- glance
- enlightening
- nashville
- fellows
- communicated
- catalog
- insight
- spoke
- flounder
- padre
- aransas
- dingy
- marriages
- becky
- squeezed
- triple
- caribbean
- bees
- lilac
- overhead
- static
- lumber
- juan
- irresponsible
- bold
- carmel
- smarts
- surf
- snappers
- snapper
- described
- aetna
- medi
- irving
- provided
- wells
- romania
- resort
- affords
- printing
- seminar
- thaw
- payoffs
- persuade
- judeo
- litigious
- opponent
- underdog
- equate
- fred
- divided
- separately
- turnover
- descent
- filet
- sole
- jerk
- therapy
- companions
- dresser
- explained
- hush
- agrees
- aff
- drama
- at&t
- modest
- bef
- prep
- vocational
- col
- inevitable
- atomic
- disadvantages
- distracted
- measurement
- arrogant
- clientele
- jelly
- biting
- acceptance
- fir
- overdue
- optima
- suckers
- honored
- chevrolet
- taurus
- recreational
- campers
- shines
- holly
- mattresses
- elastic
- hectic
- volunteered
- heartbreaking
- bargaining
- forgive
- adamant
- moderates
- egypt
- muslims
- palestinians
- poem
- naps
- demonstrations
- restless
- underlying
- dissatisfied
- proposing
- upbringing
- outlook
- quilting
- amish
- acreage
- eyed
- motivates
- vitamin
- drilled
- extensions
- quantities
- carson
- doses
- experimented
- chlorinated
- rode
- nationalities
- exam
- memorize
- readers
- scales
- grain
- matching
- explains
- semigloss
- marks
- experiencing
- upbeat
- connections
- dah
- seated
- alley
- uncertainty
- hoot
- itemize
- processors
- portable
- hewlett
- rival
- rugged
- decks
- printers
- obsolete
- quitting
- approximately
- martin
- achieved
- tact
- disappointment
- trusting
- corrected
- opted
- perjured
- barred
- script
- ironic
- witnessed
- answered
- dependents
- mobility
- preventative
- lung
- carrier
- filed
- pissed
- offensive
- opinionated
- textbooks
- forbid
- advertisement
- cordless
- porcelain
- sandy
- tracks
- amateur
- sings
- contraceptives
- luxuries
- continually
- perennials
- arriving
- bows
- ribbons
- designs
- bunny
- ink
- canvas
- crewel
- decorations
- victorian
- stiffen
- uncommon
- compensate
- typed
- correcting
- frustrations
- acted
- rumors
- lebanese
- newsmen
- chemistry
- tw
- literacy
- jackson
- macho
- hint
- cer
- cutbacks
- slogan
- preserving
- trigger
- greenhouse
- plattsburgh
- digital
- sane
- boost
- vacationing
- stationed
- slope
- attach
- starving
- distant
- mideast
- bureaucratic
- bearing
- nightline
- eng
- centuries
- decking
- crawling
- buds
- vine
- chops
- guest
- sucks
- tails
- '''oeuvres'
- cooks
- elegant
- crumbs
- crunchy
- bouillon
- 20/20
- cord
- irritated
- luggage
- climates
- richer
- civilized
- israeli
- jazzercise
- ego
- exer
- leaned
- firearm
- firearms
- twirling
- edited
- dribble
- accidental
- resale
- trading
- strangely
- cutlass
- semesters
- recipients
- recipient
- pathetic
- import
- partnership
- ambition
- disciplined
- prenatal
- peru
- thir
- filters
- tourists
- canadians
- panamanians
- initiate
- concentrating
- cellular
- awkward
- aw
- sanitation
- kuwaitis
- accomplishment
- defend
- amy
- sunshine
- hurricane
- flood
- muggy
- royals
- pitchers
- nat
- indicator
- lineup
- knives
- publishing
- laptop
- search
- significance
- chains
- jonathan
- petunias
- blooms
- stitches
- fruits
- righ
- opportune
- tang
- inspiring
- incomes
- ferraro
- isaiah
- alma
- mater
- dominant
- greed
- hud
- pit
- bounced
- installation
- stinking
- forgets
- morally
- millionaire
- observer
- restrict
- ancestors
- kitchenette
- neatest
- miniskirts
- grandmothers
- feminine
- marching
- bizarre
- overboard
- gu
- neon
- tints
- condominiums
- walt
- crummy
- flake
- woodwork
- widespread
- worldwide
- bow
- contrast
- vocal
- removing
- passive
- colonies
- bury
- presence
- quietly
- whichever
- vacant
- equity
- litters
- fin
- aquarium
- commands
- anticipate
- resulted
- ranches
- repentance
- mas
- olympic
- wicked
- climbed
- stretched
- explaining
- wayside
- combinations
- carpets
- str
- tickled
- tinted
- carmakers
- sporty
- miata
- authentic
- demands
- parkway
- gabriel
- shannon
- patriot
- mansion
- alan
- blessing
- catnip
- bombay
- himmy
- champion
- gloves
- devon
- curly
- mice
- associations
- haired
- qualifications
- attracted
- irritating
- cops
- irks
- ron
- relation
- germantown
- hondas
- skins
- errands
- pigs
- substituting
- spoil
- butts
- experts
- markets
- hong
- kong
- tens
- conflicts
- bangladesh
- prevention
- barrels
- lily
- humongous
- azaleas
- fielder
- cubs
- pri
- aft
- kinder
- callers
- capone
- arsenio
- flatliners
- scheduling
- threads
- bedspread
- lobby
- mckinney
- spaced
- ethical
- expenditures
- recovery
- sitters
- reader
- authors
- scraping
- backlash
- estes
- sensitive
- taxpayers
- fisherman
- soul
- lures
- hea
- propose
- reinforcement
- exempt
- pendulum
- applies
- flea
- skilled
- petty
- brochures
- bussed
- african
- glen
- godfather
- sooners
- hump
- summit
- strengthen
- meaningful
- steamer
- sprinkle
- skillet
- teflon
- passion
- increasingly
- privileges
- constitutional
- thousandths
- motorcycles
- eighths
- annoys
- horizon
- tooling
- essence
- decimal
- inherited
- fifths
- sweatshirts
- blouse
- programmer
- fashions
- taiwan
- keyboard
- unpopular
- plumber
- sucker
- transporting
- indifferent
- shallow
- undo
- seeming
- kilograms
- dates
- propaganda
- confidently
- badge
- clipper
- steelers
- temperament
- scoring
- warren
- proving
- arthritis
- revenue
- scheme
- os
- wholeheartedly
- unknown
- capacity
- noodles
- instincts
- lecture
- stanford
- unlike
- academics
- cannon
- instinct
- stereotypical
- mac
- firepower
- mug
- antenna
- denton
- psych
- hamsters
- smelling
- expenditure
- dec
- diploma
- radioactive
- packaging
- detect
- stream
- particles
- cattle
- creeks
- alaskan
- roam
- booster
- contagious
- scientist
- wednesdays
- shopper
- species
- tribes
- underpaid
- ambience
- texture
- enthralled
- mel
- presidents
- consultants
- persons
- sweaty
- speaker
- subsidy
- lies
- ano
- offenses
- housekeeper
- hottest
- firewheel
- salisbury
- hams
- locking
- prosecuting
- gettysburg
- arena
- openness
- duplex
- fords
- carburetor
- cap
- notch
- overlap
- dash
- vegetarians
- cleanliness
- vegan
- bodies
- utilize
- coo
- hens
- ballpark
- kicking
- getaway
- des
- vitelle
- a&m
- oriental
- yellowstone
- lion
- rio
- grande
- marble
- jealous
- ruins
- objecting
- fireman
- malicious
- compensation
- executing
- falsely
- statistic
- meanwhile
- storing
- internship
- cooper
- clinic
- cardiovascular
- rotate
- picturesque
- biggie
- killeen
- purebred
- virus
- affection
- caravan
- storage
- libber
- heated
- shrubbery
- supportive
- unacceptable
- appalled
- reimburse
- explorer
- middlekauff
- stiffer
- disneyland
- amusement
- solely
- lafayette
- allies
- liars
- masses
- majored
- discriminated
- valid
- lonely
- smile
- consists
- lisa
- floods
- historian
- societies
- eater
- rewiring
- praised
- openly
- logically
- nest
- pap
- supporter
- runner
- moth
- devastate
- mediocre
- excel
- insist
- halloween
- toning
- dramas
- shakespeare
- multimillionaire
- supervise
- imports
- inferior
- wallet
- dwell
- po
- iguana
- br
- twentieth
- assertive
- chewing
- freelance
- reputable
- avenues
- smoothly
- avenue
- classify
- spices
- tort
- riots
- methods
- textbook
- sprayed
- wiring
- busting
- minimal
- youngster
- manner
- fringe
- beeper
- pill
- spraying
- heavens
- splitting
- maturity
- cues
- nineteenth
- velcro
- cole
- codependency
- losses
- worlds
- representation
- roller
- maternal
- franchise
- bones
- quickie
- resorts
- inept
- tossed
- superior
- enthusiastic
- stripper
- eth
- shotguns
- vital
- mutual
- laura
- lotion
- accumulate
- dime
- unfinished
- toned
- treatments
- rust
- instruction
- productivity
- wherewithal
- indigent
- employ
- medicaid
- desperately
- equipped
- alto
- jerker
- christopher
- reeves
- climb
- mastercards
- beaver
- champions
- pines
- berries
- dutch
- shou
- cathedral
- constructed
- rainfall
- chased
- tossing
- peonies
- hardy
- divorces
- drank
- tan
- sunburn
- interfere
- fo
- custody
- bottoms
- guidance
- flew
- jar
- eisenhower
- bitter
- motivational
- presidency
- leaps
- noriega
- tunnel
- anger
- roger
- mis
- universe
- bargained
- interviewing
- potluck
- trump
- hyacinths
- purply
- mugged
- paroling
- int
- avon
- spectator
- deeply
- amou
- crepe
- pile
- toll
- dependable
- cavalier
- squish
- drinks
- census
- pell
- vienna
- waitresses
- ultra
- regency
- progressing
- retrievers
- prompt
- brisket
- reliability
- graveyard
- submit
- reception
- watercolor
- jan
- shanghai
- effected
- micro
- satisfying
- preston
- broiled
- violated
- appealed
- martha
- melodies
- speaks
- squad
- cutback
- texasville
- breathe
- homemakers
- dreyfuss
- spit
- presumed
- cra
- coordination
- irons
- perry
- stepmother
- ambulance
- deteriorated
- bunk
- flan
- vinegar
- pies
- happiest
- wheeling
- geriatric
- cockapoo
- rabbits
- ignored
- earnings
- pencil
- taller
- glorified
- sch
- eyre
- sung
- madam
- butterfly
- puccini
- canoeing
- receptive
- jackie
- gymnastics
- im
- steadily
- ronald
- brownwood
- temple
- substantial
- les
- broadway
- orthodontic
- verge
- orthopedic
- silverton
- drafter
- drawings
- unbiased
- equals
- secretarial
- overturned
- thelma
- louise
- tacky
- chipped
- sledding
- ambulatory
- reluctantly
- adequately
- cheryl
- hearty
- skim
- thai
- lunches
- molestation
- releasing
- sketch
- subscriptions
- upright
- paddle
- appliance
- tops
- pant
- gail
- centralized
- claus
- earns
- coit
- orchestras
- breasts
- chill
- punk
- '101'
- rebate
- perkins
- fluffy
- parker
- coppell
- bleeding
- pittosporum
- thumper
- carney
- trailers
- eager
- signature
- whoops
- discovery
- macaroni
- golfing
- superbowl
- tease
- includes
- desperate
- entitled
- dill
- suing
- semiautomatic
- cuddle
- legislate
- hubbard
- screams
- competitiveness
- mechanically
- jesuit
- duh
- haiti
- constituents
- ordering
- striped
- bonham
- donna
- du
- nist
- sheet
- sergeant
- rebuilt
- spy
- thorough
- fame
- hydrocarbons
- nitrogen
- ville
- manufacturers
- mats
- algebra
- glossy
- pathology
- towncar
- missions
- mat
- gut
- precaution
- kenosha
- pianos
- commissioners
- exemptions
- daytona
- holder
- gloss
- exploring
- hatchback
- abuses
- royalty
- rehearsals
- meg
- boise
- barbie
- radial
- lathe
- distributor
- parakeets
- chimney
- telecom
- bran
- piedmont
- howse
- duncanville
- admitted
- warriors
- marketplace
- dunn
- bradstreet
- vivaldi
- boutique
- decorative
- volume
- honeywell
- quicken
- strengthened
- quantity
- hinge
- cumbersome
- qua
- transport
- makings
- seal
- entitle
- opacity
- abouts
- forum
- ductwork
- shave
- interchange
- ber
- scruffy
- critic
- trivia
- sharon
- invitation
- astounded
- effectiveness
- insulted
- conspiracy
- paranoia
- surmise
- latches
- invading
- knocking
- ritual
- introducing
- click
- occurrences
- summed
- absenteeism
- errand
- discrimination
- improving
- uncertain
- suspicious
- detectors
- hammer
- royalties
- hideous
- militant
- objections
- absurd
- frampton
- performer
- eclectic
- listener
- ravi
- shankar
- spreadsheet
- dedication
- mardi
- gras
- straps
- convincing
- carl
- casually
- horrifying
- litigation
- retention
- dusty
- regulars
- texteller
- stripe
- tipped
- pastel
- pallet
- patent
- spin
- coul
- southbend
- variable
- intended
- workplace
- inputs
- toured
- reich
- genesis
- bottomed
- shoul
- devoted
- detriment
- manipulating
- softly
- alleged
- accuse
- exploiting
- cuba
- starve
- hun
- ashamed
- connery
- dwarf
- favors
- freer
- imposed
- demanded
- natives
- representative
- undoubtedly
- abou
- melting
- clinging
- quebec
- mountaineering
- implies
- fads
- institutes
- newsletter
- orientation
- meditation
- desks
- laborers
- keyed
- enc
- incorporated
- predominant
- intending
- trafficking
- aghast
- frito
- artistic
- kits
- pinks
- kit
- lilly
- greens
- stocking
- selections
- chapel
- percentile
- stabilized
- illegally
- errors
- nasa
- quaint
- mem
- supplemental
- applaud
- competitors
- generous
- repayment
- celebrated
- negatives
- ind
- privately
- brutal
- hoped
- slim
- administrating
- latter
- nickname
- customs
- defeating
- gadgets
- bluegrass
- pizzas
- anderson
- predominately
- standings
- moore
- pennant
- pirates
- appraised
- overpriced
- longevity
- satisfy
- resell
- editing
- availability
- prohibit
- janitors
- endurance
- mutually
- supervisory
- quotas
- swampers
- laborer
- happ
- mushrooms
- consisted
- terr
- siren
- alarms
- jamaican
- knitted
- granny
- moderate
- carpentry
- candle
- contributors
- ai
- comply
- helicopter
- sting
- nitrous
- chemist
- unseasonable
- ust
- nostalgic
- calligraphy
- tidbits
- mcgyver
- inventing
- baling
- washers
- junkyard
- portraying
- invented
- attempting
- innings
- ke
- weaned
- meows
- docile
- traumatic
- secretive
- daisy
- hype
- mimic
- predicting
- fictional
- swamp
- margin
- teasing
- crosses
- dang
- dumpster
- openings
- recycles
- imaginable
- folded
- straightened
- reminding
- settlement
- beaten
- ramifications
- margaret
- thatcher
- gandhi
- volcanos
- rhode
- residue
- pitted
- comeback
- nader
- volcano
- indicates
- previously
- regulatory
- arrows
- zoom
- calculate
- yugo
- pricing
- dos
- pastor
- sauces
- coleman
- sacramento
- backpacked
- undeveloped
- opposition
- negotiate
- factions
- refreshing
- reveal
- occupy
- responding
- tunes
- jigs
- instrumental
- mickey
- wills
- nickelodeon
- fl
- shenandoah
- flimsy
- programmers
- mentioning
- irritates
- aspen
- contel
- demonstrated
- surrogacy
- crass
- nurturing
- donation
- auction
- shelters
- bedridden
- gals
- '''am'
- factual
- nightly
- chancellor
- gaps
- newscaster
- excerpts
- rises
- choi
- assisted
- deteriorate
- sponsor
- caretakers
- supplemented
- possessions
- signing
- sectioned
- zones
- vikings
- hart
- educator
- beg
- initiative
- administrations
- maj
- sabbatical
- minuscule
- referring
- hourly
- gardened
- remotely
- shack
- broaden
- ivy
- couches
- careless
- anybo
- oreo
- twisted
- actresses
- kenny
- columbus
- disrupted
- mistrial
- chooses
- confession
- placing
- inception
- insure
- burglars
- jacques
- lewis
- chagrin
- ame
- preferably
- loudly
- epileptic
- aftermath
- snob
- broadened
- expectations
- swore
- amphetamines
- endangering
- hassles
- splotches
- scratching
- dread
- hardwood
- toothbrush
- proclaimed
- nicks
- breads
- chunks
- quart
- slender
- blender
- thickens
- thickened
- thicken
- cooling
- leaded
- endorse
- caprice
- converters
- arguable
- lit
- meteorological
- circulation
- lungs
- focal
- volkswagen
- pinned
- fulfilling
- obligations
- belonging
- wealthier
- adulthood
- functioning
- monster
- wandering
- ropes
- appreciation
- confess
- tolerances
- pete
- arnett
- sporadically
- impartial
- diversity
- affiliate
- cutesy
- beeped
- moody
- wonderfully
- vowed
- booklets
- recruit
- courthouse
- strangled
- testify
- neurotic
- crooked
- bracelet
- instructed
- whereabouts
- bracket
- koontz
- bachman
- letterman
- hologram
- pitches
- speculative
- deregulation
- teapot
- vaguely
- hoover
- pennies
- nickels
- investors
- holders
- asphalt
- charts
- kathy
- walkman
- simmons
- rapists
- manson
- repealed
- thousandth
- pac
- kingdoms
- ruler
- scriptural
- elses
- discernment
- walters
- wiley
- communists
- assaulted
- compensated
- medicines
- rude
- returns
- indebted
- deli
- strings
- crabgrass
- slimy
- tempered
- standby
- surgeon
- pruning
- undertaking
- irrigation
- leafy
- remain
- flowering
- chick
- lem
- humus
- barbe
- stoves
- flame
- grease
- tortillas
- turkeys
- smoked
- hickories
- spreadsheets
- specs
- montana
- hazards
- crash
- burlap
- coupon
- subtract
- compost
- branches
- heed
- staunch
- withstand
- buffers
- scuds
- provinces
- merely
- demilitarize
- confusing
- sucked
- incomprehensible
- disarm
- socialism
- boris
- nationality
- nut
- sabine
- consequence
- wade
- camps
- kingsley
- centennial
- canton
- dinky
- proclamation
- mason
- dixon
- seller
- avalon
- chilling
- wits
- characteristics
- tuberculosis
- wafer
- linear
- mismanaged
- outraged
- breyiana
- demos
- boggles
- contaminated
- refineries
- desires
- delaware
- caves
- fading
- anythi
- pantry
- crushers
- hallways
- casualties
- magnified
- tones
- questionable
- andy
- creatures
- extends
- fork
- spills
- degrading
- spark
- probab
- hints
- stereotypes
- romanticize
- thugs
- beaumont
- predictions
- barring
- substantially
- separates
- zealous
- farmhouse
- pumpkins
- planter
- creosote
- landlord
- brushing
- rose
- cantaloupes
- cubic
- wary
- youths
- hostilities
- judging
- burlington
- confronted
- slit
- divisions
- rash
- monterrey
- objective
- hamper
- grouper
- oysters
- tiring
- canals
- grabs
- grabbed
- dogfish
- antibiotics
- commuting
- deprived
- clinics
- infections
- enrolled
- rigid
- fined
- mills
- deceiving
- surroundings
- paths
- motive
- motivations
- upwards
- bundled
- doubling
- financed
- integrity
- benefitted
- perceive
- unfairness
- wiser
- segment
- vengeful
- pitifully
- massively
- respon
- represents
- speeches
- slapped
- inflammatory
- atrocious
- blitz
- zoning
- wholesaler
- turnovers
- argentine
- microwaves
- waxed
- flakes
- purplish
- cubes
- sherry
- argentinean
- sausages
- breaded
- publications
- thesis
- disgruntled
- cries
- replaces
- belongings
- roaches
- overhaul
- uniform
- discretionary
- emotion
- hence
- fines
- documentary
- dealings
- declaring
- dire
- squirrelly
- miscellaneous
- nd
- deposited
- scurried
- skaggs
- endangerment
- assumes
- endanger
- endangered
- accidentally
- suspicion
- continents
- ingrained
- confuse
- trans
- centimeter
- measurements
- peanut
- kindercares
- alphabet
- scold
- inappropriate
- trauma
- weath
- predictable
- inversions
- threesome
- novice
- rut
- yo
- delightful
- ferrari
- resembled
- satellite
- bathed
- jacuzzi
- wings
- fastest
- ant
- kitchens
- dented
- refresher
- kosher
- knishes
- mea
- unstable
- relevant
- americanized
- hugged
- scam
- apologize
- hug
- shiite
- poss
- wheth
- countrymen
- wom
- implementing
- decreasing
- finland
- selfishness
- benefited
- mil
- flunk
- canning
- zinc
- processed
- bogged
- distributed
- moderately
- companion
- organs
- sally
- petite
- isometrics
- ingestation
- plight
- surrounded
- directing
- coed
- subbing
- calculator
- behaved
- versatile
- applicable
- depot
- spackling
- creamy
- similarly
- formative
- contacting
- aptitude
- sounding
- upkeep
- cellar
- rents
- complexes
- nanny
- prefabs
- enou
- scoot
- emulate
- guru
- auditors
- packard
- matrix
- transparencies
- outdated
- advisor
- panhandle
- piling
- shredded
- pessimism
- racism
- destined
- fronts
- hippie
- texaco
- pennzoil
- miscarriage
- rational
- testimony
- testifying
- paralegal
- priors
- aggravate
- enlightened
- niceties
- flop
- horrified
- absence
- taxation
- flabbergasted
- gracious
- flops
- certificate
- explanation
- univer
- dustbuster
- plated
- bowls
- patty
- womb
- soothing
- repetitious
- wilder
- eleventh
- painless
- necessities
- harm
- magnolias
- raking
- underground
- grasses
- blend
- macneil
- jennings
- informative
- bureaus
- comics
- mourning
- lace
- weave
- lacy
- draping
- batting
- anticipating
- splurge
- deci
- typist
- damme
- bland
- widow
- dummies
- caan
- rescuers
- submarine
- studio
- survived
- einstein
- stepson
- literate
- honors
- lifesaver
- framing
- hindsight
- incidents
- outsiders
- jesse
- complains
- threatens
- entrepreneur
- achievement
- clue
- sights
- transplant
- glamorous
- uncontrollable
- constitute
- denial
- champlain
- resume
- technicians
- fad
- timid
- macon
- hous
- espec
- contacted
- liquor
- repairman
- popped
- radishes
- turnips
- loam
- intensive
- attachment
- pickles
- unfairly
- seasonings
- paralyzed
- spinal
- discrete
- seatbelt
- arrow
- reuse
- collects
- dorms
- perimeter
- orthopedist
- freak
- diane
- diver
- limping
- tights
- casts
- nautilus
- cushion
- singled
- tighter
- lonesome
- naw
- everyb
- imitate
- oscars
- booth
- demographic
- judgments
- texins
- crest
- demonstrator
- reps
- partying
- tracking
- perpetuate
- manpower
- coincide
- cl
- soreness
- nighttime
- evacuated
- winnebago
- benefiting
- incidence
- abundance
- creature
- aim
- shah
- felons
- unseasonably
- comparisons
- waning
- surviving
- diplomacy
- eliminating
- processes
- righteous
- filtered
- launch
- unmet
- strife
- ray
- blatant
- fax
- proactive
- buil
- treaty
- bully
- repay
- swallow
- evolve
- tug
- skewed
- intersection
- trampoline
- downs
- cy
- swept
- streak
- averages
- catches
- tigers
- strategy
- bayless
- advised
- brunt
- rooted
- dseg
- documentation
- floppy
- disks
- hus
- touchy
- linda
- rossa
- teen
- boo
- livingston
- seagull
- wro
- midland
- odessa
- practiced
- fur
- contra
- haunt
- resentment
- laughable
- arises
- browns
- topping
- toast
- mustard
- cucumber
- bonanza
- meta
- rearing
- robinson
- cylinders
- akeem
- dominate
- reselling
- jap
- wichita
- galen
- amrein
- snacks
- elephant
- transferring
- fare
- veterinarians
- wonders
- developer
- breathed
- limiting
- cookouts
- individuality
- frills
- fluctuates
- tastefully
- smashed
- organizing
- dare
- reform
- bri
- gate
- felonies
- ima
- racist
- gripe
- gar
- width
- spreader
- lightly
- freshly
- arthur
- waterfront
- movers
- frames
- enamel
- spun
- descendants
- favorable
- intervening
- advancing
- frightened
- revolting
- upsetting
- acquired
- creeps
- kitten
- teacup
- frustrates
- cheaply
- brunch
- crook
- mock
- primaries
- workday
- chows
- guinea
- harming
- bellies
- rubbed
- terrified
- louder
- lid
- collie
- mechanism
- inspected
- cheated
- fingernails
- uninformed
- disinterested
- honduran
- rica
- tourism
- enabled
- policies
- engrossed
- virgo
- elder
- ricans
- rican
- loaner
- revival
- christianity
- revered
- pyramid
- birthdays
- disciplinarian
- nutri
- stairs
- elevator
- powerhouse
- alway
- rehearse
- patriots
- photo
- guards
- congested
- incarcerating
- foreground
- snatched
- astro
- minivan
- subaru
- ticking
- rack
- upgrade
- retail
- campgrounds
- bearable
- dipper
- addict
- sportsmanship
- describes
- strasbourg
- missile
- bounce
- goll
- humiliating
- chauffeur
- valet
- condemning
- airs
- tithe
- blessings
- foley
- croak
- critters
- turkish
- himalayan
- patches
- paws
- lanky
- hillside
- communicating
- swam
- supervision
- stephanie
- keel
- tuba
- nerves
- turntable
- dual
- processor
- edit
- layout
- preventing
- overloaded
- mentions
- sevren
- montgomery
- piddly
- compressor
- prelude
- impractical
- wharf
- colts
- seahawks
- winners
- champs
- expansion
- attendance
- kites
- strangers
- tasting
- arrangement
- rewards
- interfering
- inhumane
- overtaken
- underwater
- intention
- philippines
- tag
- quarterly
- incentives
- justification
- sorting
- insurmountable
- forestry
- trails
- emphasized
- obtain
- cubicles
- advent
- op
- accurately
- orchids
- dodgers
- brat
- petrified
- circular
- terrifies
- niece
- laughs
- exc
- negate
- rejected
- lawlessness
- founded
- crippled
- perpetrators
- breath
- intake
- valleys
- pencils
- abreast
- ethics
- scandalous
- churchill
- dickens
- withstood
- mindless
- pi
- sincerely
- whew
- spreading
- petersburg
- finest
- southwestern
- cincinnati
- roaring
- perpetual
- lhasa
- scuba
- pampered
- dinosaur
- fires
- ventured
- dooming
- plunked
- cooperated
- adjusting
- decades
- valued
- downstream
- lure
- bumble
- wasp
- squirrels
- popularity
- isolation
- disciplining
- spank
- isolate
- handicraft
- dough
- ornaments
- empties
- posted
- ruining
- kurdish
- roseanne
- matthew
- brando
- levinson
- follower
- marino
- keystone
- cunningham
- tactics
- granada
- cuban
- salinas
- terrorist
- buried
- hyundee
- helicopters
- stepper
- pillow
- staring
- aqua
- blisters
- rubber
- trashed
- dwindling
- cooker
- cherry
- blackening
- gumbo
- portuguese
- ribs
- ya
- jumbo
- initiatives
- revolt
- obliged
- argues
- constrained
- fools
- indoctrinated
- millimeters
- fractions
- fittings
- wrench
- header
- screws
- progressively
- pullover
- smokes
- sw
- othe
- designer
- foolish
- puzzled
- warned
- cab
- tractor
- sixes
- diesels
- injector
- asylum
- governmental
- antiwar
- translated
- soapbox
- usable
- antimetric
- sweden
- midnight
- plains
- collapsible
- helper
- motivator
- huff
- phenomena
- temper
- miami
- cyclical
- oilers
- stallworth
- swan
- oppose
- decisive
- wrath
- constituency
- nuggets
- meatless
- ingredients
- hostess
- soybeans
- proteins
- belton
- pennsyl
- lsats
- als
- sev
- abcs
- especiall
- affordable
- carpools
- symbolic
- scenario
- gunfire
- outlaw
- abiding
- restrictive
- concealed
- sp
- deterrence
- weighed
- objection
- misusing
- impose
- crackdown
- dawn
- liners
- gerbils
- mutts
- counted
- eel
- tiniest
- debated
- symptom
- furnish
- nonsense
- handicrafts
- awarding
- topsy
- turvy
- worldly
- sparked
- reg
- flours
- dublin
- bulldozers
- overflow
- posters
- chained
- tabby
- rampant
- girlfriends
- inadequate
- '8088'
- monitors
- respectable
- secondly
- binary
- calibrated
- qualification
- brackets
- rescue
- passport
- mou
- alcoholics
- returning
- laurie
- clout
- grilled
- buffets
- brunches
- woodland
- colo
- prix
- seagal
- starred
- premise
- preoccupation
- belly
- millimeter
- darndest
- assembled
- hauled
- fertilizers
- prohibited
- facets
- denied
- loaf
- dawned
- boulders
- marbles
- duck
- shish
- odor
- boneless
- scrambled
- armenian
- consume
- punishing
- devil
- suffered
- agreeing
- enforcing
- burglaries
- rationalize
- busiest
- airy
- wires
- compartment
- soldered
- restrain
- overeat
- pastas
- minerals
- accepts
- supplements
- toledo
- oriole
- steeper
- moines
- bleachers
- collapsed
- herbs
- sill
- appleseed
- pecans
- wes
- enterprise
- bulletin
- electrician
- terminology
- gaithersburg
- valedictorian
- pushy
- seemingly
- rockies
- carries
- yells
- breezed
- solicit
- coworkers
- alright
- humans
- bust
- holdup
- underst
- convicting
- restoring
- ankles
- landscaped
- sal
- continuance
- pensions
- allergy
- baxter
- ceo
- homa
- rallies
- anaerobic
- improves
- ls
- adverse
- hunk
- pulse
- resting
- mirrored
- fireplace
- tucked
- condos
- abandon
- dennis
- distributing
- refuses
- glove
- pricey
- passenger
- lowered
- questioning
- dummy
- mans
- occupations
- norma
- techniques
- karen
- spotted
- incompetent
- exper
- priest
- kindergartners
- conform
- creativity
- manners
- mannerisms
- establishment
- norfork
- farthest
- charleston
- hairs
- follicles
- rehab
- fro
- weddings
- graduation
- med
- saudis
- thieves
- chaos
- promotion
- unconditional
- offspring
- quotes
- dumps
- bluebonnets
- absorb
- es
- flash
- medina
- salty
- beirut
- penalized
- lining
- faucets
- repainting
- arrange
- tripping
- ingest
- ingesting
- arteries
- reacts
- framers
- framed
- viable
- supports
- viewpoints
- delay
- nevertheless
- allocation
- infrastructure
- expended
- restock
- twen
- spider
- marigolds
- impatiens
- replacement
- teased
- bacillus
- gypsy
- toddlers
- recommendations
- skits
- attachments
- slacked
- contributed
- bombarded
- mrs
- cleaver
- senses
- romantic
- illiterate
- paced
- ridged
- totaled
- hesitate
- technologies
- stacked
- renters
- counties
- citibank
- scams
- swayze
- clyde
- drummer
- scratched
- demographics
- companionship
- dependency
- everyth
- prospective
- pairs
- unsupervised
- morton
- lu
- offended
- drinker
- measures
- lions
- arapaho
- drool
- yuppie
- cheat
- reinforced
- fashion
- defrosting
- pilaf
- mixing
- mushy
- korean
- auxiliary
- curriculums
- kathleen
- accordingly
- residency
- sportswise
- blitzer
- fanny
- treadmills
- cinema
- dripping
- shorted
- enlarge
- valves
- shingle
- fixtures
- detached
- stigma
- pioneers
- households
- beepers
- bulky
- vibrates
- hepatitis
- freed
- expectation
- boyfriends
- homeowners
- existence
- anguish
- charming
- weathered
- leveled
- wallpapered
- conserving
- diagnosed
- inspiration
- alerted
- swimmers
- extracurricular
- loser
- sats
- barber
- verses
- robber
- dachshunds
- spaniels
- anthropology
- presses
- clerical
- forthcoming
- homecoming
- famil
- familiarized
- virgin
- qui
- divine
- skates
- cot
- shove
- nannies
- objectivity
- digressing
- ordinarily
- weirder
- revolved
- hatchery
- intimate
- calendars
- decoration
- passage
- continuity
- percentages
- cavaliers
- ewing
- highlights
- patience
- bethesda
- beijing
- pooling
- restful
- pends
- dells
- starring
- rage
- terminator
- twists
- treble
- mackerel
- pike
- stung
- fleetwood
- displayed
- freaks
- backs
- buicks
- convertible
- vintage
- setter
- feathers
- conducted
- ethically
- patrol
- kidnapped
- pun
- exceedingly
- albany
- syracuse
- rapist
- investigation
- pamper
- waits
- assistantship
- newlyweds
- hopping
- annually
- journals
- figurines
- sanded
- 4h
- refinish
- hormones
- lip
- fender
- sparingly
- lime
- sands
- upscale
- gum
- rips
- shreds
- sponge
- mate
- averaged
- harvard
- successfully
- approaching
- nutrition
- conductor
- cringe
- mcneil
- criticism
- palo
- columns
- candles
- psycho
- deadly
- uneasy
- robocop
- molly
- savage
- resented
- retrospect
- juggling
- density
- crucial
- oft
- lame
- assaulting
- pleading
- psychiatrist
- psychiatrists
- psychotics
- assaults
- sponsors
- rainier
- snowy
- immune
- tawakoni
- cones
- fearless
- enclosed
- roofs
- sizes
- cei
- furnace
- ambitious
- poking
- fountains
- latitude
- underpass
- hiding
- petals
- slows
- oscar
- durant
- alo
- notorious
- settles
- smoker
- sponsored
- educations
- ele
- approached
- proponent
- thus
- endeavor
- wri
- fingerprints
- slipped
- fingerprinted
- astounding
- intervals
- contracted
- dea
- imm
- soaking
- visitors
- rug
- daddies
- conformist
- revolutionary
- kramer
- celebration
- feeder
- nets
- minnow
- burping
- purina
- parade
- compound
- pursuit
- refuted
- refute
- turnouts
- vi
- relates
- regain
- moats
- staubach
- encountered
- unrealistic
- landon
- portrayed
- josey
- clint
- jot
- baptist
- reflection
- damages
- shortage
- clerks
- doubled
- smallest
- pavilion
- fuses
- alter
- sensing
- bandit
- theatres
- ellison
- activist
- photographs
- hyacinth
- hollies
- spike
- perennial
- gomphrena
- repeating
- minimize
- ornamental
- happiness
- acquire
- congratulations
- simpler
- circles
- wham
- forgiving
- detrimental
- immature
- maple
- myrtles
- screwing
- disguise
- formatting
- paragraph
- voyager
- crank
- pepsi
- mcmahon
- racking
- recharged
- seabrook
- nucleus
- billed
- mints
- adaptation
- crown
- lunchtime
- celebrate
- incident
- shreveport
- limbo
- diaper
- chassis
- bent
- soapies
- bichon
- frise
- personable
- rin
- tervurien
- latchkey
- considerations
- sunroom
- rambler
- sandstone
- beltway
- adored
- surrendering
- cooperate
- allah
- sakes
- stirring
- pineapple
- oatmeal
- casseroles
- bronze
- catherine
- nissans
- escort
- trusted
- insurances
- provider
- postal
- recourse
- invades
- complained
- susceptible
- newhart
- comedians
- contrary
- bart
- simpson
- morocco
- continent
- ripping
- photos
- reef
- melbourne
- squirrel
- agents
- hockey
- christi
- diverted
- pea
- fiasco
- liver
- caution
- expediency
- misplaced
- technicalities
- technicality
- ruffle
- conducive
- sandwiches
- vendors
- pins
- ligaments
- beethoven
- mozart
- softer
- banned
- regime
- liberalization
- civics
- dart
- wasteful
- wounded
- mcmurtry
- trashy
- grou
- grouchy
- projectionist
- subtitles
- intuitive
- footnotes
- footnote
- operator
- lands
- appetizers
- premed
- specialize
- matinee
- cocoon
- alien
- maintained
- sharif
- oddly
- exceed
- incapacitated
- images
- dangerfield
- stacking
- leftovers
- catering
- scooped
- amelia
- anyth
- wolfe
- myths
- haggard
- phonetics
- relearning
- wheelers
- transaction
- checkup
- reserves
- cranky
- measuring
- coating
- cognitive
- jour
- austen
- reviewed
- attracts
- grandchild
- congealed
- soprano
- canoed
- cancun
- bummer
- teenaged
- manhood
- ostracized
- liken
- pear
- daytimes
- ransom
- sightseeing
- gubernatorial
- robb
- receipts
- gambling
- sedentary
- tortilla
- picante
- grated
- jell
- timely
- subjected
- athletics
- bathe
- commercially
- accordion
- miserables
- milkman
- travis
- phantom
- lloyd
- listens
- illnesses
- diligent
- invaluable
- scotland
- jaw
- periodically
- durango
- jeep
- destin
- jetty
- draftsman
- roman
- recognizes
- regarded
- mediation
- crises
- bystander
- awe
- prac
- gannan
- valerie
- addicts
- sayings
- possi
- restrooms
- festival
- alpine
- uneven
- sleds
- knob
- mows
- mulched
- presbyterian
- willingly
- littler
- strategies
- rapport
- walnut
- impersonal
- hack
- cheerful
- emily
- dell
- preschools
- pediatrician
- dane
- tangent
- backfire
- ethiopian
- venison
- fries
- waitress
- waiter
- attentive
- adventuresome
- heyday
- bernie
- dra
- assortment
- piled
- veal
- evident
- unleaded
- ambivalent
- clothe
- rehabilitating
- confessed
- amendment
- xeros
- quartet
- technique
- carols
- mechanisms
- decompose
- murray
- sorted
- dimes
- crusher
- renewed
- prostate
- antigen
- fourths
- smells
- spinner
- baits
- fisherwoman
- imitation
- sticker
- sn
- pantsuit
- pantsuits
- enthusiasm
- begging
- fitting
- harold
- taft
- milder
- gimmicks
- hemorrhaging
- mennonite
- sealer
- premier
- landed
- suites
- invalid
- invalids
- labels
- frugal
- substituted
- legacy
- reside
- partial
- yuck
- balloting
- sibling
- colds
- discontinued
- primitive
- tulips
- hazard
- codes
- zenith
- ques
- slides
- purity
- richie
- bushel
- wines
- napa
- ronnie
- whittle
- satire
- monotonous
- menus
- frankenstein
- blazing
- saddles
- grants
- hitler
- paintings
- specimen
- fussing
- presume
- pollu
- decorate
- kindergartner
- arguably
- cradle
- grave
- fluff
- swings
- queens
- beltline
- thrus
- aerosol
- corny
- fridays
- camry
- elway
- moneys
- exponentially
- crawls
- grieve
- greg
- foresee
- uninsured
- noses
- rudman
- accountability
- proportionally
- gruesome
- couscous
- repercussions
- wimpy
- shortened
- befitting
- nece
- asset
- flushed
- dressy
- slack
- sl
- tro
- bidness
- apiece
- smokeys
- sur
- outlawed
- legislating
- creating
- activated
- steinbeck
- grizzly
- encounters
- doubting
- doug
- ranked
- sierras
- rai
- tempe
- yelling
- explored
- bogey
- burgled
- plop
- pee
- ay
- handyman
- tighten
- loopholes
- withhold
- advantageous
- bueno
- librarian
- coma
- seasick
- minnows
- seas
- fore
- calico
- yaupon
- labrador
- wax
- scalp
- salsa
- hidden
- continuously
- hibiscus
- wetter
- mitsubishi
- '90210'
- nicole
- matlock
- charlene
- beverly
- shred
- pierre
- recognizing
- cinematography
- invasions
- premises
- '911'
- sitcoms
- misbehaving
- faces
- censor
- morality
- jumps
- finite
- infinite
- whining
- panels
- resurfaced
- cimarron
- jeopardizing
- retirees
- ladder
- investigative
- catastrophes
- existed
- halogen
- sulfur
- combustion
- hitch
- moynihan
- skillman
- lynch
- chil
- amnesty
- abstinence
- crayon
- detest
- ph
- allante
- peppy
- saddle
- inca
- dub
- regiment
- twisters
- toe
- prone
- adjustable
- conspired
- premiums
- reasonableness
- parkland
- losers
- witt
- greave
- wins
- dilemma
- reallowed
- implement
- unsmashed
- crazies
- fabricating
- sampling
- steele
- youn
- upsets
- magnetic
- resonance
- sober
- molesting
- boar
- constraints
- betcha
- severity
- entitlements
- reductions
- defaults
- blackman
- manned
- dealerships
- purrs
- feeders
- frontier
- jetsons
- nearest
- trough
- sli
- howatch
- birmingham
- disregard
- darned
- greenery
- tahoe
- skidding
- surveyors
- tracer
- '486'
- measles
- crunch
- burger
- cameroon
- scoutmaster
- sitcom
- seato
- colony
- nato
- disbanded
- arrive
- uncooked
- overdone
- yummy
- bendix
- pontiacs
- hattiesburg
- bir
- boa
- constrictor
- parrot
- overspending
- coughing
- julio
- misuse
- sniff
- milan
- anchoring
- tedious
- stragglers
- tobogganing
- baggy
- reduction
- hewett
- scaffolds
- excessive
- rep
- disappoints
- nairobi
- safari
- wesley
- hospice
- theoretically
- mishap
- electoral
- stew
- hardaway
- dioxide
- vapor
- aye
- pickings
- legitimately
- sails
- bisquick
- lopsided
- boarding
- freezers
- genealogy
- stash
- proliferates
- brokers
- patterson
- subsidized
- amway
- nonpolluting
- bicycles
- bullheads
- nikki
- jig
- stroll
- ogden
- puzzles
- combo
- airless
- scroll
- dolphin
- torpedo
- malamute
- trillion
- ludicrous
- payers
- column
- dumbbells
- controllers
- harrisville
- specialties
- virtue
- accrued
- transfusion
- refund
- pup
- patron
- parenthesis
- earmarked
- greatful
- striper
- senegalese
- perks
- parkinson
- industrialized
- truer
- dispose
- mega
- tonnage
- scrubber
- ammonia
- compounds
- acids
- thickness
- pronto
- finalization
- utmost
- cognizitive
- scarves
- uns
- unseasonal
- sleeves
- sweatpants
- corduroy
- compliments
- skorts
- nominated
- dud
- recurring
- fami
- overreact
- terror
- cohill
- cohi
- drivel
- eldon
- housepainter
- extracts
- overtly
- uncontrolled
- pirated
- ominous
- thief
- westerner
- lunatic
- violate
- socia
- jehovah
- mormons
- intrusive
- solicited
- invasive
- soli
- intruded
- defining
- surmised
- incorrect
- unsolicited
- nonsol
- unconscious
- cli
- sequence
- peddling
- harassment
- generated
- lois
- intimidating
- rver
- greeting
- stake
- mitzi
- yip
- ranging
- soaked
- rhyme
- ruckus
- parallels
- cov
- hooker
- absolu
- phenomenon
- brazilian
- listenable
- elec
- acoustic
- interchangeably
- folk
- arranger
- sitar
- muted
- existing
- tally
- slush
- stocks
- expired
- pleasures
- albridge
- slogans
- outlooks
- haggerty
- spookier
- pecially
- airways
- focusing
- taj
- mahals
- prolongs
- whim
- deserved
- prevents
- mopping
- odds
- unair
- facial
- beards
- skids
- repack
- buttoned
- starched
- suspenders
- reorganization
- cruddy
- reall
- notre
- dame
- explosion
- untypically
- accumulation
- flatlands
- zeppelin
- floyd
- brash
- bump
- bohemian
- rhapsody
- pumped
- siskel
- ebert
- thumbs
- travolta
- quee
- tokens
- divi
- showbiz
- admission
- scyene
- inexpensively
- sao
- paulo
- usefulness
- spheres
- spaniards
- rulers
- conquistadors
- socialistic
- horribly
- dishonor
- defenses
- sabotaged
- peasant
- exploitation
- exerts
- export
- broadcasting
- ruddy
- minist
- wr
- ler
- interpretations
- histories
- copes
- indicate
- resident
- fledged
- barefoot
- pejorative
- unrest
- citizenry
- ignorance
- ult
- constitutionally
- creole
- prohibitions
- strengths
- cuisines
- throes
- reassess
- functionally
- fractiousness
- faddish
- wellness
- biweekly
- dispensed
- distinctions
- dev
- fizzled
- acupuncture
- gestalt
- irony
- cert
- vigorous
- carbohydrates
- kinesiology
- calc
- calculated
- calisthenics
- myerson
- frantic
- astonishing
- mortars
- formulated
- sociopathic
- pronounced
- unfit
- mouthed
- transcribing
- customized
- anne
- glenn
- improvise
- concentrates
- password
- verbal
- rowing
- lution
- rower
- transforms
- markov
- naval
- postgraduate
- civilians
- mainline
- respondent
- unders
- allergist
- smorgasbord
- compensatory
- profile
- bonds
- deducting
- disproportionate
- brutally
- commuted
- delays
- electrocution
- determent
- deter
- dubious
- internally
- organiz
- coordinating
- scandals
- kisha
- knight
- pullman
- exacerbate
- clutches
- pads
- benz
- absorbed
- keyboards
- spaghettis
- lasagnas
- hor
- horseback
- dabbled
- banjo
- druther
- stre
- farts
- polly
- followers
- inspir
- booths
- commutiv
- billboards
- bartman
- simpsons
- debbie
- nigh
- appraisers
- onward
- ease
- folds
- performs
- tenured
- microcomputer
- comprehensive
- rigamarole
- teachable
- specially
- spicier
- tofu
- pistachios
- pistachio
- bumped
- curried
- saute
- gigs
- perse
- ow
- conventions
- slippers
- teller
- alterations
- utilitarian
- knickknacks
- sconces
- jalapeno
- almanac
- concluding
- warms
- shutting
- piloting
- spectacle
- lobbyist
- legislators
- individ
- unbelieving
- justifiable
- nucle
- kilowatt
- washes
- stinging
- swelter
- lively
- eureka
- rentals
- inspires
- glider
- welder
- treks
- '747'
- mindlessly
- pacifier
- reme
- destructed
- milton
- berle
- stepchild
- tumultuous
- regions
- siberia
- oppression
- attentions
- hopely
- catchers
- gladly
- unheard
- babe
- ruth
- thru
- lovingest
- cosmo
- pellet
- tod
- lovey
- dovey
- kneading
- trimming
- bonzo
- poindexter
- felix
- tortoise
- possessive
- bedtime
- rendering
- jessica
- tandy
- warmth
- manhunt
- manhunter
- dysfunction
- slay
- toothpicks
- outwardly
- awfulness
- wonderfulness
- lapses
- telecommunications
- profits
- waivers
- earners
- physicals
- subsist
- lodges
- moss
- footing
- alumi
- defrays
- defray
- unfold
- walmart
- discourages
- catatonic
- discovers
- buzzards
- pal
- imagined
- slaughter
- earthquakes
- robby
- graze
- indira
- observed
- attleboro
- freeways
- jets
- swinging
- kerosene
- eah
- boilerhouse
- powerhouses
- belch
- kodak
- smokestack
- phosphorous
- grenades
- photograph
- overstated
- environmentalists
- claiming
- automakers
- soot
- particulate
- meter
- tailpipe
- devise
- mufflers
- resumes
- graph
- erased
- simplified
- anduille
- doughnuts
- cobbler
- fudge
- fiber
- sloughs
- rafting
- potty
- packs
- noth
- outfitter
- headwaters
- damper
- hostage
- rhetoric
- rolm
- engi
- sheer
- estimated
- doctrine
- turks
- cheering
- reconcile
- divisive
- unprecedented
- authorize
- frontal
- sununu
- commend
- scud
- lefty
- frizzell
- galway
- harpist
- bagpipes
- whistle
- violins
- instrumentals
- rooney
- dancer
- entertainer
- eddy
- smiley
- burnette
- raspy
- playboys
- ernest
- tubbs
- rector
- scratchy
- opry
- stadler
- autry
- anymo
- vegetate
- fri
- relly
- complication
- eith
- demolishing
- stereos
- annoy
- troubleshooting
- initials
- conversed
- sexes
- consist
- childbearing
- storly
- var
- biological
- urges
- encumbered
- heirs
- characterized
- acquaintances
- terming
- emerging
- marathon
- idear
- discrepancies
- overview
- encapsulated
- introductory
- glamour
- updated
- airspace
- huntley
- analyst
- paragraphs
- noontime
- dose
- spee
- fastened
- wander
- aides
- debilitated
- arboretum
- maid
- tackles
- spinning
- irvin
- overwork
- reinjuring
- scab
- revamped
- metcalf
- smuggled
- investigated
- rehi
- renamed
- psychologists
- ration
- modalities
- learner
- kinesthetic
- gladewater
- baccalaureate
- unle
- commentator
- golsome
- superintendent
- adminis
- scarce
- overachievers
- overachiever
- beeps
- expre
- phoe
- easiest
- horizons
- hurtling
- brothers'
- clips
- madly
- fetish
- luring
- costuming
- remarked
- thriller
- distinguished
- terrorized
- branching
- vito
- flicks
- bawled
- toughest
- venue
- disrup
- sequestered
- entrapment
- displeasure
- waive
- bungling
- caricature
- bloodless
- comic
- functions
- thrash
- fixes
- climactic
- joseph
- reborn
- targeted
- hypercritical
- fart
- gags
- slapsti
- funniness
- gag
- retreading
- tec
- preemployment
- brazen
- wisened
- ventilated
- motorola
- tack
- orangish
- feat
- brighter
- coloring
- haphazard
- baseboards
- edger
- granary
- stocked
- formulas
- perfectionist
- tasks
- freehand
- gratin
- banana
- dissipate
- thickening
- globs
- rubbery
- blenders
- cools
- favoring
- nestle
- quik
- groedy
- whisk
- beater
- melon
- baler
- cond
- octane
- generating
- volt
- v8s
- repellent
- erupted
- meteorologists
- chernobyl
- tracers
- smoky
- array
- fiero
- undisciplined
- jacuzzis
- abdominals
- thighs
- mattered
- alienated
- suffocating
- choke
- differing
- grads
- quirks
- academies
- cadets
- espouse
- anglo
- saxon
- inveterate
- switcher
- dave
- wylie
- pumping
- weatherman
- hansen
- gordon
- lightfoot
- winston
- headphones
- toweling
- investigator
- tailing
- socialite
- extradited
- levy
- uplifting
- interpreting
- jur
- gui
- overcrowd
- connects
- businessmen
- sente
- penned
- duff
- penal
- beca
- litigating
- respo
- spiritually
- begats
- durn
- kratz
- kranz
- hedges
- nathaniel
- hawthorne
- storybooks
- woe
- glossary
- krantz
- twilight
- bogused
- fuck
- dares
- hangover
- sarcastic
- fishbone
- spirited
- venezuela
- avalanche
- gobs
- inflated
- beneath
- captures
- resulting
- risky
- contain
- vague
- guaranty
- guarantees
- guaranties
- disasters
- vulnerability
- regul
- workup
- incline
- unjust
- revoke
- reverked
- revoked
- vengeance
- sayeth
- mao
- tse
- chung
- temples
- unified
- humbly
- sovereignly
- rebuke
- ager
- preface
- admonition
- agrarian
- commander
- conceal
- napalm
- gro
- clayton
- uproots
- residents
- deba
- servant
- repaid
- granddaddy
- dodger
- militia
- bologna
- alleviating
- afresh
- lifestyles
- cabbages
- broccolis
- insecticides
- dandelion
- roly
- poly
- slug
- dragons
- sockets
- alkaline
- stem
- peaches
- silt
- shrivels
- mes
- cottonwoods
- irr
- smartest
- gardenias
- revitalizing
- mayb
- chopping
- blasted
- hybrid
- editions
- spruce
- dips
- dipping
- arabic
- pita
- eggplant
- marinating
- hickory
- clones
- mach
- databases
- searches
- deleting
- pieced
- bypass
- monochrome
- enthusiasts
- nathan
- swollen
- manuscripts
- composts
- nurserymen
- goop
- doorknob
- compress
- mugs
- expressions
- ungodly
- expansionism
- nationalistic
- succ
- origins
- angolan
- sinai
- warsaw
- militory
- indu
- chan
- clobber
- conquered
- autonomists
- shortages
- bulgaria
- czechoslovakia
- placate
- alienate
- emancipated
- slaves
- emancipate
- supplied
- battleground
- val
- verde
- briefcase
- bookcase
- armageddon
- grove
- imposing
- yoakum
- trilogy
- terrifying
- '''brien'
- crappy
- jakes
- compendium
- lobbying
- emancimation
- afterthought
- luted
- honorary
- isaac
- asimov
- robot
- developmental
- blockbuster
- mist
- dune
- freeman
- debating
- suave
- charac
- egalitarian
- scripture
- disciples
- wafers
- contradict
- buyers
- elma
- sheds
- pasadena
- refinery
- phoenixville
- grumble
- northwestern
- piped
- almetco
- pantr
- deanne
- multipurpose
- vide
- launched
- groupings
- gentlem
- dyke
- griffith
- idn
- brave
- shallows
- gig
- naughty
- murky
- spectrums
- abso
- feldon
- madonna
- lamar
- gators
- sneaky
- buckner
- stadiums
- cornell
- redwings
- peewee
- crude
- tilled
- screeching
- acorn
- scents
- pollinate
- yield
- tiered
- shrub
- locus
- thorns
- pollination
- pollinated
- littleton
- trucked
- shovel
- pressurized
- chainsaw
- dusk
- unfeeling
- spreads
- datsun
- ku
- klux
- klan
- incumbents
- larou
- larouche
- chord
- mayport
- brim
- snagging
- owl
- baiting
- oyster
- cracker
- trophies
- rockport
- netted
- ugliest
- archaic
- dots
- croaking
- croaker
- friendships
- copayment
- seclor
- exemplary
- snatch
- impressions
- inspections
- yellowish
- misty
- emphysema
- isolating
- biker
- vowel
- lint
- phrase
- cub
- smash
- conv
- ding
- dongs
- guathier
- eliminates
- briberies
- sidedness
- lengthy
- judo
- hoc
- deltaing
- disagreement
- wapner
- judean
- vibrant
- undoable
- semitic
- predetermined
- wandered
- defeated
- astaire
- sto
- plank
- poultry
- empenadas
- eu
- scallions
- sesa
- slivers
- overcook
- dashes
- ketchup
- bishu
- meats
- empanadas
- bun
- niokes
- requi
- bah
- humbug
- fives
- phony
- interdisciplinary
- dispelled
- grating
- reputations
- impaired
- institutional
- quiche
- growls
- overrun
- hussy
- settlements
- poll
- tiddlywinks
- volumes
- ignorant
- ironsides
- affixing
- chart
- commingle
- confusion
- issuer
- conven
- shucks
- profitability
- shifted
- itemized
- alpha
- beta
- accusation
- linemen
- rotation
- thereafter
- proves
- encouragement
- chemists
- overinflate
- southward
- nonconventional
- warheads
- parallel
- resolves
- negotiations
- inhabiting
- lith
- neutral
- crazier
- libya
- treaties
- overthrow
- survives
- inhabitants
- dancers
- outweigh
- wayward
- attained
- sharpness
- acuity
- disorient
- decimeter
- superpowers
- toddler
- indoctrinate
- understa
- skipping
- lows
- chillier
- handicappers
- mosey
- twosome
- mellowed
- doubles
- rationalizing
- purged
- goofed
- nastier
- cashed
- burgeoning
- metropolis
- carey
- thes
- intern
- sanger
- harris
- lifelong
- thunderbird
- citation
- mazaratti
- conceive
- degray
- stutters
- antennas
- roadside
- cords
- heaters
- hookups
- sopping
- dialect
- hums
- nuns
- trin
- shun
- hospitalized
- pumps
- stimul
- flipper
- retraining
- stagnant
- sores
- golan
- kishkes
- matzi
- goyim
- pocketful
- heston
- commandments
- grips
- muslim
- religions
- sects
- protestants
- lennon
- zionist
- nosed
- tampa
- scariest
- coincidently
- lox
- generic
- predates
- jihads
- toge
- secretly
- unity
- revert
- baltics
- forcibly
- impossibility
- insightful
- prays
- dissimilar
- forefathers
- esc
- disseminated
- giv
- postpones
- juniors
- disgust
- centeredness
- inability
- multicultural
- multiracial
- psychologist
- refers
- preoccupied
- infor
- cults
- motorbike
- maureen
- solomon
- eastland
- farmed
- millennium
- hopeless
- ideology
- eden
- distributorship
- supplier
- dirkson
- extansion
- dirk
- pearson
- embarked
- isometric
- chlorination
- firsthand
- detectives
- hunky
- dory
- gi
- barbados
- colleagues
- covert
- suburbia
- roasted
- goat
- hating
- stunts
- bending
- alleviates
- indicative
- handcuffed
- elem
- escalated
- bett
- reemphasis
- rote
- spitted
- memorizer
- wiping
- mennonites
- electronically
- determines
- sherwin
- molding
- bled
- spackle
- lighting
- nerdy
- garfunkel
- fascination
- innate
- supp
- manilow
- badness
- behinds
- pajamas
- yardage
- enclose
- fanatically
- subcontract
- ducts
- materialistic
- dwelling
- necess
- branched
- dishwasher
- inventions
- trashing
- diskette
- ordeal
- configured
- prestigious
- innova
- innovation
- audits
- pry
- peripherals
- lance
- restraints
- thermal
- razzle
- dazzle
- flats
- clairon
- rath
- educa
- feast
- waking
- tentatively
- receptacle
- raisers
- distribute
- disposables
- incremental
- fiery
- luther
- galvanized
- bashing
- environmentalist
- respons
- glow
- wartime
- overlook
- affirmative
- junkyards
- testimonies
- defendants
- legalistic
- achieving
- likelihood
- tilted
- sleaze
- protects
- choreographed
- patents
- antic
- repeater
- vendetta
- observing
- proceedings
- weightless
- effortless
- sweatless
- surveys
- adjusters
- expressed
- meningitis
- fetal
- terminated
- termination
- codependents
- goddess
- observations
- firemen
- overtones
- astonished
- phys
- cokes
- sternness
- forbi
- expressways
- patricia
- handlebars
- rewarded
- dubbed
- booger
- diamonds
- numbered
- redeem
- attache
- suitcases
- lamps
- wheelbarrows
- mixer
- toaster
- waffle
- clocks
- candlesticks
- aloud
- fussy
- babbly
- druthers
- rockville
- ballady
- abortions
- pregnancies
- handing
- landscapers
- replant
- alleys
- cultivate
- replenished
- subside
- prune
- hosted
- correspondents
- translating
- masks
- typeface
- piddley
- braunsfel
- unread
- skimming
- imperialism
- reasserting
- hangings
- needlepointed
- outlined
- intricate
- geometric
- upholster
- stiffened
- streamers
- stiffener
- quilted
- stamp
- foresaw
- refrain
- expedite
- franc
- francs
- diem
- consternation
- godfrey
- goodies
- prin
- perforated
- metrics
- typos
- retyping
- retypes
- encyclopedia
- prints
- limi
- clone
- bleep
- lionheart
- singular
- superstar
- norris
- deserts
- bates
- floats
- animation
- retitled
- reshot
- rout
- cosmic
- enlightenment
- dichotomy
- educatable
- prodigies
- precocious
- harks
- schoolwork
- construct
- convey
- verbally
- stressing
- penalizing
- eternity
- bradley
- activists
- demonstrating
- agreeable
- gerrymandered
- lipscomb
- disservice
- pauken
- politicking
- upmanship
- fooled
- nationally
- applicants
- dissolved
- shutdown
- mathematics
- outgo
- kidney
- positives
- spe
- sadder
- anxieties
- detected
- dismissal
- pard
- certainty
- handcraft
- wreaths
- eucalyptus
- dowels
- goofs
- bulch
- straying
- koala
- shapes
- wintered
- transplanting
- leafed
- pasture
- jungles
- rubs
- validity
- disagrees
- guessed
- lux
- accom
- transcontinental
- throats
- coalition
- armaments
- congressional
- fuss
- shiites
- fiddling
- shaped
- topsoil
- herb
- rollback
- spurts
- loppers
- rotor
- dethatch
- heave
- ingredient
- shrip
- fettucini
- straightens
- disconnect
- sucking
- depended
- peeled
- chestnuts
- burgundy
- browned
- bruises
- retires
- swivels
- collisions
- automation
- iaccoca
- airbags
- sc
- spine
- harness
- nifty
- chryslers
- aerodynamic
- conveyor
- magnet
- pennsylvanians
- brownie
- pamphlet
- slicks
- slot
- poundage
- instant
- wisely
- shboom
- befriended
- ironically
- resumed
- gymnasium
- flooring
- chrome
- height
- pounding
- engineered
- curbs
- gravity
- singles
- assorted
- immobilized
- screamed
- climbers
- limp
- matches
- ammn
- amm
- initi
- initiation
- mishandle
- guiding
- deregister
- tumbling
- themself
- banding
- pis
- julie
- tense
- bundles
- childish
- kazoo
- numb
- suffices
- rela
- weakness
- weaknesses
- experi
- temporaries
- retest
- retested
- rx7
- whatso
- seater
- narrowed
- assessment
- thirsty
- stint
- wanderlust
- poker
- admiration
- miners
- roadsides
- harvey
- uneducated
- flaunting
- relinquished
- strikers
- speeded
- aerobically
- calmed
- postnatal
- cise
- birthing
- axle
- windstorm
- overlooking
- embankment
- arkan
- sweeping
- tows
- beavers
- flee
- attitu
- flaunt
- americanism
- slums
- coops
- inoculation
- hungary
- requesting
- rotely
- panamanian
- quieted
- anticommunist
- excesses
- playtex
- flowery
- jaded
- comforts
- thorn
- bureaucratics
- dyed
- pollen
- gah
- blowy
- rebellions
- massacred
- protested
- diminishing
- renegade
- launching
- strifes
- defect
- obtaining
- globally
- demise
- glasnost
- escalate
- reins
- intentioned
- conveniences
- nonfeeling
- uphold
- unpopularity
- geez
- honorable
- massad
- madman
- straddle
- personalties
- rethinking
- gesture
- miscalculated
- liberate
- underestimated
- miscalculation
- huss
- assassinate
- staking
- precedent
- bullies
- powdered
- bombing
- khomeini
- normalized
- sanc
- juggle
- friction
- bookkeeping
- earner
- kite
- idling
- spooky
- lat
- tracing
- hitter
- shorten
- saberhagen
- crain
- craning
- reds
- stri
- fouls
- steinbrenner
- bogus
- workable
- peripheral
- notebook
- modems
- revise
- furnishes
- deadline
- courier
- magee
- peretti
- piercing
- fic
- soun
- illu
- illusions
- quintupled
- flied
- nailed
- gibbons
- exempts
- planters
- shedding
- proj
- beau
- insi
- sunlight
- sulked
- overmilitarization
- disparity
- civilization
- bigge
- trickle
- hemisphere
- kingsport
- masala
- sweeter
- amaretta
- dijon
- basil
- turgeon
- laroute
- gastro
- lamink
- restructured
- hardships
- subcultures
- debates
- patronizing
- demeaning
- midwife
- pater
- paternity
- troit
- misunderstood
- ranks
- aines
- peak
- olajuwon
- dunk
- businessman
- murchison
- bottomless
- leanings
- assholes
- reaganomics
- nonexempt
- visitations
- shuts
- hunts
- wan
- degreed
- jenny
- outdoorsie
- twix
- braniff
- gossip
- hound
- host
- pause
- mic
- '''clo'
- participators
- primal
- kicks
- tabloids
- journalistic
- fondly
- steeped
- repu
- unnecessarily
- glancing
- nod
- tonic
- unhooking
- uncoupling
- rotating
- rotated
- dieting
- ourself
- wrapping
- kip
- centrally
- sickness
- folder
- emphasize
- miniskirt
- evoke
- overdo
- laces
- flounces
- adornment
- unprofessional
- sexist
- tailored
- vulgar
- redford
- lewisburg
- emblems
- grotesque
- imag
- shoo
- padlock
- pawn
- someway
- neatness
- psychiatric
- hinkleys
- accidently
- distinguishable
- barbed
- curi
- prayed
- reestablish
- lengthways
- mounds
- clumps
- southw
- slapping
- formidable
- adcose
- exaggeration
- harmful
- structural
- hankering
- tick
- excalibur
- newmarket
- edmunds
- barnyard
- treacherous
- journey
- climbs
- creation
- touristing
- asbestos
- repaint
- roughed
- energized
- bids
- bleed
- caulk
- masonite
- bid
- varnished
- intervene
- toppling
- descend
- latinos
- mee
- meek
- europeans
- vocalism
- comparably
- bitch
- moan
- compromise
- dependence
- cartels
- mistreating
- slovak
- catacombs
- persecution
- idi
- amin
- oopsy
- pood
- greets
- recouped
- evi
- burial
- countenance
- uncanny
- litterbox
- anointed
- buzzer
- cheerleaders
- courage
- cheerleader
- precincts
- precinct
- harmfulness
- heroin
- forefront
- estimation
- demolish
- cur
- tract
- scaredy
- straits
- quieter
- comfy
- husb
- prance
- paw
- lovable
- lapdogs
- cockatoos
- squawking
- som
- cower
- akita
- aq
- padding
- chewed
- wiper
- blades
- tinkering
- rightly
- punctured
- patched
- restores
- feminist
- amer
- undoing
- stains
- altar
- spooked
- butterflies
- dee
- nicaraguan
- housed
- spiders
- repent
- evangelical
- surpassing
- override
- rejoice
- borrower
- bondage
- squatters
- witchcraft
- mayans
- incas
- worshipped
- pyramids
- sacrifices
- gods
- oppressed
- warehouses
- cumulative
- itemizing
- scrimp
- walkabout
- boonies
- attribute
- eric
- dickerson
- smi
- linebacker
- bickering
- wen
- appropriately
- arcade
- drafts
- archie
- manning
- nobodies
- showi
- furious
- veg
- padded
- opposing
- satin
- bridesmaids
- maids
- accessibility
- harsher
- aerostar
- stealth
- slipping
- celicas
- perfor
- racing
- surreal
- fulfilled
- blair
- reformed
- gambler
- microbiologist
- competitions
- minnea
- dowling
- ren
- entrances
- periphery
- paired
- deacons
- blesses
- fugate
- proverb
- macy
- lowe
- purebreds
- studs
- sweetest
- sweetheart
- breeders
- bree
- inbreeding
- inquisitive
- hindquarters
- predominate
- rex
- rexes
- rodents
- groundhogs
- mesh
- remains
- teetering
- refusal
- presc
- pharmacy
- mens
- absoluteness
- foiled
- mere
- outlawing
- conspicuous
- inconspicuous
- inappropriately
- hunted
- squirted
- novelty
- outdo
- raciness
- calculators
- euphonium
- mellow
- deejays
- grafting
- cough
- graphs
- sponsoring
- enhanced
- bytes
- '128'
- callously
- deterr
- blooded
- midsized
- porting
- attendant
- vessels
- overbuilding
- phe
- phenomenally
- galant
- serviced
- 49ers
- harbor
- niners
- kim
- redskin
- cartoonist
- ellicott
- basicall
- importantly
- devaluated
- goats
- schoolyard
- motherhood
- overcompensate
- destabilize
- vying
- regroup
- standpoints
- easterners
- couched
- proclaim
- weaving
- dike
- plug
- unveiling
- takers
- roomie
- slaughtered
- sudan
- occurrence
- shredding
- bedding
- wrappers
- reviving
- yosemite
- objectors
- assigning
- examined
- idealistic
- pakistan
- algeria
- blinking
- manipulations
- insofar
- clowns
- partition
- dividers
- baloney
- daylilies
- orchid
- closes
- velvety
- multiplied
- weeded
- lilies
- azalea
- glories
- ned
- skeldon
- ojeda
- hubie
- offerman
- prediction
- cecil
- orel
- hershiser
- darrell
- interleague
- introduce
- anoth
- homey
- randi
- dawdle
- steamy
- lawrence
- mae
- rambo
- hogan
- associates
- realist
- garments
- vogues
- knits
- garment
- loopers
- piping
- cording
- twe
- sewn
- exceptional
- bev
- reap
- sow
- establishes
- pardons
- lust
- incest
- swiftly
- integral
- reeks
- expediting
- compunction
- appropr
- sins
- stoning
- clog
- streamlining
- extremism
- bubble
- habitat
- humanity
- inefficient
- preconceived
- notions
- delivering
- spiraling
- conservatism
- hampers
- patchwork
- unflattering
- autobiographies
- randolph
- descriptive
- affluents
- tale
- binge
- bookl
- francis
- momentarily
- connecting
- sigh
- chowperd
- snowbirds
- spawned
- contend
- melts
- kitty
- apso
- panic
- preserve
- campsites
- twang
- pfeiffer
- rim
- glenrose
- latrines
- gemini
- genocide
- hmong
- unsure
- slash
- intercultural
- dissimilated
- conceptualize
- slavery
- linguist
- withholding
- worthless
- cambodians
- graft
- falk
- drugstore
- coils
- mosquito
- crickets
- foamy
- pristine
- froth
- bobber
- reeling
- saturated
- soggy
- damp
- claustrophobia
- terrify
- spanking
- revamping
- lev
- plaques
- stenciling
- cushions
- impeme
- interface
- janitor
- reams
- dalmarva
- deinking
- contaminate
- wastebaskets
- publicly
- yucky
- interven
- occupying
- schwartz
- iranians
- egyptians
- kane
- matinees
- burton
- batman
- glover
- kline
- dennehe
- goldblum
- clease
- arquett
- untouchables
- graffiti
- broderick
- marlon
- parody
- tinman
- humphrey
- bogart
- maltese
- falcon
- quinn
- rainman
- okie
- homeboys
- optimism
- reconstruction
- redefining
- trait
- longhorns
- randal
- streaky
- touted
- sentimental
- instability
- indoctrination
- marines
- ak
- 47s
- cubans
- capturing
- nicaraguans
- crate
- patrice
- lamumba
- teachings
- extremist
- gen
- irregardless
- albania
- revolts
- psychos
- chiefs
- staffs
- uprisings
- squadrons
- afghanistan
- boils
- cen
- berlin
- wat
- steppers
- soles
- reword
- indi
- environmentalism
- ruther
- environmentally
- blasphemy
- acutely
- bureaucracies
- relegated
- heartache
- grudge
- succeeding
- parish
- policed
- comforting
- reminders
- pyrex
- teaspoon
- blackened
- skewers
- basin
- chefs
- clams
- instinctual
- demographically
- democratically
- proposition
- proposals
- revolted
- obligatory
- considers
- australians
- looses
- leas
- denies
- hamilt
- passionate
- democ
- candi
- antigovernment
- misspending
- bastards
- inte
- hundredths
- sixteenths
- mismatch
- clamps
- meters
- drams
- perfume
- machinist
- indic
- indicators
- micrometer
- finders
- nondecimal
- halves
- listing
- beverages
- whiskey
- ploy
- conversant
- milling
- measu
- calipers
- pliers
- milliliter
- drilling
- hundre
- lawy
- strangle
- neiman
- marcus
- outgrowing
- necked
- embellished
- dre
- presentable
- outrageously
- busters
- campinas
- oursel
- asses
- orient
- optimist
- jungle
- resonates
- profound
- bullying
- dreamed
- wildest
- semantics
- transcribes
- onl
- guzzlers
- fours
- threes
- transverse
- mounted
- shoved
- serpentine
- stickers
- reinstalled
- nozzle
- stroking
- groves
- surinam
- natio
- internationally
- amaco
- mobil
- rectified
- inward
- hateful
- kilom
- thumbnail
- kilogram
- britain
- adopting
- precisely
- grams
- sync
- orchestrate
- unfamiliar
- toting
- stroganoff
- allendale
- waldwick
- adirondacks
- pancakes
- outgrew
- beth
- knowl
- roanoke
- randall
- duplicated
- gamble
- ditka
- nate
- newton
- branded
- outlaws
- webster
- cocky
- lambert
- bloopers
- receivers
- tackled
- necks
- fav
- entities
- overburdened
- fairness
- pondsy
- invu
- invulnerable
- belongs
- electing
- politic
- floored
- maryl
- nurture
- credits
- ukrainian
- scallop
- buns
- batter
- bourguignonne
- grudgingly
- pinch
- reversal
- beck
- subsidize
- bennington
- liber
- refinement
- etiquette
- advises
- renaissance
- bowdoin
- bucknell
- lectures
- confirm
- guitarist
- yale
- minoring
- irrevocable
- irrespective
- clinical
- pathologist
- kayla
- bachelors
- profess
- traced
- rung
- maladjusted
- compelling
- distaste
- resp
- beret
- uzis
- disorderly
- unc
- unconcealed
- matched
- vibes
- clearest
- confi
- junkins
- mandated
- prompted
- tobacco
- bandwagon
- cour
- tricked
- syst
- maintenances
- scoop
- fetch
- pooper
- scooper
- colombia
- reek
- kindhearted
- nixed
- asthma
- outgrown
- misclass
- stately
- sunk
- furnished
- swoop
- situational
- punches
- momentum
- lockheed
- arose
- courageous
- accredita
- accreditation
- keying
- adjacent
- refine
- classified
- chemicalwise
- refining
- strean
- stillwater
- stephenville
- toxins
- bacterial
- bleaching
- sinked
- australian
- dominique
- neek
- wimp
- feline
- unconditionally
- feisty
- snuggle
- investigate
- beaner
- wadded
- fixture
- decor
- panty
- garb
- polyesters
- wools
- neatly
- layerings
- eyesore
- mended
- ironed
- compose
- upgrading
- plummeted
- acro
- daltons
- wholly
- understands
- disadvantaged
- winnowed
- structures
- casing
- connectors
- workmanship
- hal
- fluke
- highlands
- patronage
- cranberry
- pou
- lobsters
- billboard
- steams
- culinary
- adventurer
- franchised
- shacks
- shoney
- reliably
- communercation
- compe
- renditions
- organizer
- defeat
- registration
- dragginess
- headache
- draggy
- locker
- sauna
- motiv
- agony
- dictatorship
- uganda
- mils
- distances
- centigrade
- celsius
- metropolitans
- heeley
- wentworth
- differential
- microns
- whatev
- responded
- favorably
- bagged
- ecological
- prod
- additives
- pickups
- hangers
- cupboards
- fountain
- faucet
- exceeding
- decomposed
- shocker
- bizmart
- upseted
- taxwise
- toilets
- smashing
- soaker
- sheltered
- disapp
- rankled
- cheerfully
- outermost
- inland
- curving
- ventura
- buildi
- overflows
- anaheim
- simi
- meanings
- rhymed
- balti
- strayed
- kabob
- breakfasts
- galunkies
- marsh
- pierogies
- grandparent
- newarth
- cholest
- margarine
- margarines
- kebabs
- utensils
- goulashes
- juices
- sealed
- galore
- finer
- drains
- shakers
- journalist
- crux
- remo
- appease
- pob
- patr
- paro
- paroles
- partake
- traumatizing
- viaducts
- ceremonies
- dozens
- pageants
- riveted
- confuses
- thrilling
- producers
- tony
- dorsett
- hershel
- rationalized
- cinemax
- correspondence
- '30'
- cod
- reso
- repossessed
- 635's
- looper
- ramblers
- brook
- dealie
- diversion
- chevys
- nex
- v8
- carburetors
- gingerly
- yanked
- tinkerer
- evaporator
- rubbing
- testers
- diagnostic
- tester
- diagnostics
- carriage
- chilton
- multiplying
- lincolns
- tremend
- leaking
- condenser
- busted
- haas
- ovolacto
- lard
- nutrient
- lactose
- synthesize
- slough
- utilizing
- rids
- utili
- paperback
- novelization
- lucas
- freder
- brink
- feinstein
- fairfax
- deaf
- insulate
- scrubby
- pecan
- paralegals
- clears
- interference
- surplus
- tariffs
- mon
- apprentices
- advisable
- journeyman
- exporting
- imminent
- oodles
- salutatorian
- prided
- welcom
- welcoming
- tol
- resentful
- zales
- spiegel
- hurried
- circulating
- walrus
- porpoises
- mainland
- sanctuary
- whooping
- cranes
- pelicans
- antone
- alamo
- brewery
- caverns
- uncourteous
- actua
- irritant
- hullabaloo
- stockholders
- inebriated
- unsafe
- surgeries
- subsidizing
- quack
- waiveable
- refresh
- somewh
- willy
- horton
- consolation
- microscopic
- kneecap
- curtailed
- forming
- bison
- weakening
- strengthening
- '401'
- continuation
- telephones
- handbook
- badger
- showering
- physiological
- advan
- fledgling
- bikers
- bicyclist
- knocks
- coronary
- artery
- decreases
- embark
- motivating
- disevered
- knobby
- vaulted
- woodhollow
- villa
- secluded
- joking
- sellers
- coworker
- doorstep
- housebroken
- playful
- gastrointestinal
- beagle
- romping
- waters
- retrieve
- paddled
- unrequir
- degenerating
- rosebud
- sociable
- smu
- synopsis
- furrier
- judgement
- distribution
- wrongfully
- penitentiary
- sitt
- caravans
- lending
- simulation
- resemble
- adroit
- oddity
- moonlighting
- strengthwise
- divulging
- tarnished
- faye
- socialist
- undone
- inefficiency
- platform
- lieu
- mamma
- disruptive
- brow
- browbeat
- wist
- mugging
- faceless
- persuadable
- thunderbirds
- topaz
- camaro
- reim
- dominated
- wrenches
- eas
- champ
- premeditate
- premeditatively
- stiffening
- lessening
- retarded
- pleaded
- phrased
- dayers
- correctness
- promoting
- niceness
- vouch
- waterfall
- busch
- blacksburg
- portsmith
- williamsburg
- epcot
- temp
- buccaneers
- assessing
- opp
- benef
- wadley
- milestone
- tainted
- snickered
- examine
- aircraft
- astound
- pusher
- circularly
- chairman
- judy
- perturbed
- promotions
- programmed
- brightens
- hallmark
- servi
- seizures
- brighten
- tonya
- sneaks
- rainstorm
- breezes
- temperate
- promises
- westernize
- intact
- extensly
- vely
- woodward
- projected
- commanders
- colin
- powell
- embargo
- misread
- earliest
- disarray
- hopeful
- prosecute
- stature
- statesman
- foreseeable
- selves
- volatile
- retile
- bathtubs
- scouter
- drippy
- panes
- putty
- gazoo
- pes
- pesticides
- bulging
- chlorinating
- coronarys
- diets
- quadrupled
- ingestion
- clogging
- primates
- regimen
- kenneth
- innovator
- inactivity
- neurosurgeon
- strictest
- idiots
- stan
- destruction
- symbolism
- evokes
- lynched
- modified
- possess
- condone
- adamantly
- symbolizes
- circum
- satisfactory
- budg
- spartan
- frugally
- jordache
- nonessential
- victory
- cliche
- enactment
- adjourned
- mot
- expending
- reasoning
- allege
- myriad
- departure
- restocked
- guided
- unconstitutional
- reforms
- gard
- arranging
- orig
- florist
- slowdown
- runners
- geraniums
- coleus
- vinca
- thuringiansis
- caterpillars
- expands
- unlicensed
- brittle
- excelled
- wei
- denotes
- tension
- bicep
- tricep
- instructing
- grindstone
- hovering
- configuration
- blended
- muscular
- dystrophy
- documentaries
- paroe
- planner
- uruguay
- concepts
- yuppies
- legislated
- dynamics
- auditing
- rev
- revenues
- millspec
- operates
- elevens
- hammers
- federalized
- ci
- emphas
- identi
- americard
- adios
- commu
- demeanor
- announcement
- calcutta
- foreigner
- worldliness
- attributed
- chuckle
- pogo
- mourn
- tolerated
- drumming
- scrunch
- glamor
- sprigs
- ricksun
- tender
- lamp
- ashes
- overcame
- nondescript
- damned
- hierarchy
- restructuring
- feminism
- boomer
- creep
- rapidity
- electroni
- luncheon
- existent
- consulted
- alters
- stamina
- goi
- denying
- revolve
- entrusting
- omniscious
- omniscipotent
- alec
- precedes
- daders
- shrinking
- worthy
- whate
- responses
- spoils
- flashbacks
- flashback
- fidgety
- discriminate
- pertaining
- distraction
- males
- ital
- entree
- sagar
- presby
- kimonos
- grishman
- bavarian
- constricted
- putrid
- folley
- tableclo
- crayons
- disintegration
- flickers
- prevalence
- excusing
- signals
- mechanized
- requiring
- antipasta
- stuffing
- poached
- kernel
- spinach
- wilson
- beeping
- bakes
- frosting
- frostings
- chatting
- mentor
- adversaries
- manuscript
- harried
- interruptions
- feedback
- videotaping
- adopts
- twelfth
- tangible
- overseen
- alternately
- ilk
- phonic
- pistons
- snooty
- telev
- leno
- carvey
- deduce
- cros
- wheeled
- porked
- termites
- chess
- rearrange
- hisself
- bathtub
- prettier
- rewired
- shorting
- surges
- famili
- rearranging
- shuffle
- pane
- breakers
- valve
- drips
- walkway
- splash
- vein
- downfall
- yuppiedom
- restructure
- biologically
- physiologically
- wonderment
- swooshed
- viva
- talents
- mongst
- jealousy
- computerizing
- pecking
- punched
- slightest
- epidemiological
- guesswork
- transmitted
- semen
- illegitimate
- exploded
- stepchildren
- socio
- radios
- faxes
- sensors
- stalk
- jurisdiction
- outnumber
- solicitation
- prostitution
- unlocked
- fallout
- probability
- indentured
- servitude
- vigilantes
- victimless
- ridicul
- auctioning
- bidding
- patios
- insecticide
- diazinon
- carefu
- deb
- wallpa
- stagger
- renovator
- sheeting
- resilient
- stairway
- sworn
- rud
- veto
- bout
- yea
- dams
- droughts
- reservoirs
- poole
- reflected
- counteract
- learners
- genius
- perspiration
- diagnose
- predisposition
- flashing
- drowsy
- facilitators
- manipulated
- burdening
- toot
- weekdays
- racket
- drawer
- dennison
- derby
- siphon
- cu
- uba
- tailgate
- deterrents
- publishers
- poisons
- ergotisms
- fungus
- gender
- confidential
- tide
- vatted
- archeology
- shoelace
- promising
- upcoming
- reprinting
- thurber
- hundredth
- riveting
- viorst
- sci
- revol
- revolves
- shoelaces
- binds
- melody
- workbooks
- workbook
- geometry
- cypress
- greece
- irrelevant
- tortola
- gorda
- infusion
- ethnicity
- familial
- acclimate
- retaining
- latino
- continentals
- roberto
- unprepared
- vociferous
- attain
- imported
- territorialism
- horns
- encompass
- handcrafts
- wreath
- phillips
- ranching
- contemplating
- stabilize
- occupies
- baseline
- flextime
- grading
- scribble
- sensitivities
- akin
- minimized
- prematurely
- dumper
- geria
- empathize
- tandem
- providers
- prohibitive
- fantastically
- moslem
- surro
- surrogate
- regretful
- arou
- swims
- nationals
- quarries
- tumbled
- avail
- denmark
- appliqued
- eraser
- maturing
- rite
- unmarried
- aquariums
- zoos
- paternal
- traditions
- disintegrated
- trinket
- sociologist
- multigeneration
- eightch
- scorer
- rebounders
- assists
- thown
- laker
- marriott
- spittering
- sputtering
- swimsuit
- mavs
- favored
- endorsements
- prospects
- stanley
- underclassmen
- myrna
- curfew
- fiscally
- jockey
- catton
- dives
- cayman
- itinerary
- viet
- doves
- abnormal
- puppet
- heartbeats
- reviewing
- bocket
- hannibal
- lector
- fascin
- luster
- attractiveness
- originality
- pinpoint
- lavon
- upstream
- sever
- benders
- grea
- musky
- perches
- salami
- sonar
- maneuver
- charter
- suntan
- hobbyist
- styled
- convertibles
- sevi
- welded
- welding
- sunroof
- soured
- contention
- jags
- contractors
- bends
- enthused
- enthusi
- ap
- vending
- cartilage
- glanced
- fenced
- econ
- repeatable
- bundy
- exe
- strauss
- punish
- electrocute
- problematic
- candid
- fraud
- intangible
- reinstate
- mario
- cuomo
- legislatures
- molested
- incarcerate
- sylvan
- reenacted
- paltry
- polishing
- lotions
- meniar
- cringes
- thrifty
- flier
- psycholinguistics
- ivory
- godsend
- pathe
- willow
- cana
- bacally
- obese
- reimburses
- collared
- widget
- bramalea
- 401k
- weeny
- nonex
- censored
- bombarding
- dramatize
- statues
- weld
- epoxy
- resin
- shattered
- statue
- cricket
- thatches
- thatched
- vapors
- stained
- lacquered
- tung
- fanatical
- pills
- hem
- sweating
- bulge
- wrinkles
- vices
- sha
- germ
- ecru
- undercoat
- peachy
- steamers
- mottled
- grey
- maroon
- vivid
- turquoise
- coral
- renovating
- hallucinations
- cloths
- slop
- soluble
- tricks
- skimp
- tediously
- rewallpaper
- racks
- metlife
- worki
- workm
- inconsistencies
- amateurs
- footballs
- fencing
- earl
- princeton
- pacers
- subminimum
- administered
- reluctant
- poured
- chiropractor
- cautious
- janitorial
- rafael
- septien
- applicant
- eduardo
- mana
- sai
- mafia
- newcomers
- ellis
- redoing
- comm
- elitist
- concise
- rathers
- yous
- segregate
- wretched
- horrid
- shortchanged
- brokaw
- demi
- ringwald
- sixteenth
- doogie
- howser
- freckly
- ferris
- moustache
- reeve
- dreaming
- ooze
- bride
- pretended
- occupational
- exemption
- judiciously
- incidental
- figuratively
- westport
- bradford
- indirectly
- clair
- dayt
- baldwin
- bebble
- foreclosed
- rider
- homestead
- creeping
- livable
- retrial
- retry
- wond
- seeded
- raping
- choking
- shotcross
- televised
- vendettas
- trialed
- revoted
- annihilated
- enterprises
- misgivings
- quiz
- sprint
- capture
- extending
- endowment
- joes
- alumni
- splits
- governme
- faired
- undertaken
- deficiency
- dilly
- sangre
- cristos
- wichitas
- lakefront
- pinon
- naturalist
- stools
- binding
- component
- carol
- playroom
- realtors
- dominantly
- alleyways
- shifting
- popping
- bangla
- hugo
- bedroo
- barometric
- borger
- funnel
- pillowy
- radar
- veer
- swirl
- junes
- budding
- crimp
- scorch
- distracting
- heats
- therapeutic
- northe
- mayer
- denison
- purify
- purifying
- philodendron
- acc
- divert
- blurred
- fluoro
- fluorocarbons
- provoking
- brandeis
- fift
- readings
- iliad
- mythology
- choo
- scientifically
- grumbled
- unpleasant
- imparting
- cluster
- vicarious
- compromised
- profiles
- telemarketeers
- outcry
- cited
- crashes
- eroded
- erosion
- lockers
- latitudes
- motorists
- liens
- representing
- landlo
- dakotas
- alarmed
- exclusion
- parameters
- interpreted
- adoptive
- carting
- arresting
- interval
- orwell
- tay
- unusually
- leathery
- venture
- wea
- pebbles
- drainage
- deceptive
- fiend
- wrinkled
- oils
- fishermen
- tricycles
- kiddie
- wilds
- calves
- heifer
- jea
- flared
- hep
- themsel
- continuum
- astute
- propagate
- raccoon
- filleted
- livestock
- whiskers
- growling
- widen
- weaker
- ticker
- pentagon
- whomever
- nutrisweet
- bitterness
- ancient
- vets
- complicate
- preregister
- registrations
- eligibility
- preceded
- theodore
- upward
- rascals
- stinks
- precluded
- gullibility
- democracies
- redistricting
- subsidizes
- lineman
- spilled
- camouflage
- booby
- traps
- apocalypse
- influx
- surge
- buckle
- overcome
- castaways
- depicting
- dudley
- bloody
- olden
- realism
- pioneer
- worship
- chri
- videotapes
- shrunk
- eastwood
- showy
- westerns
- cursed
- pointy
- melissa
- gilbert
- idol
- verse
- shep
- immemorial
- misdemeanor
- waving
- prevail
- appoint
- bailiffs
- clerk
- verbalize
- tripled
- cameras
- reporters
- prosecutors
- outweighs
- prosecuted
- sump
- sewage
- towed
- aut
- trad
- marina
- hears
- acclaim
- sequels
- earle
- recluse
- essays
- qu
- conclusions
- photographers
- arro
- gorillas
- sloth
- fascinates
- bottoming
- landers
- tycoon
- bloomed
- fade
- spiky
- bl
- hya
- colossians
- thistles
- landscaper
- junipers
- puny
- foliage
- iris
- fuzzies
- wildflower
- insists
- camcorder
- pastime
- muggings
- grates
- claustrophobic
- tendencies
- deviant
- anguished
- cleaners
- meridian
- inlaws
- sneakers
- jordans
- brains
- caps
- videoed
- repeated
- repetition
- termed
- allowable
- purs
- discretion
- freely
- altering
- preparations
- namely
- minuses
- factored
- competitor
- trevino
- influencing
- wholesome
- exclamations
- sportsman
- phooey
- applicator
- nurseryman
- elm
- circumference
- stubs
- propelled
- pest
- sawed
- rot
- rotter
- autobiography
- liquidating
- emulating
- compu
- ause
- accomplishing
- spacings
- formattings
- insert
- reset
- rewrite
- typesetting
- typeset
- spaces
- compatibles
- adhere
- brochco
- hillstreet
- finale
- nudity
- delight
- shudder
- flabby
- telemarketing
- classification
- lotteries
- kalamazoo
- sinus
- carton
- stakes
- mounts
- hub
- airports
- altitudes
- intermediate
- simp
- fluorides
- guerrilla
- marched
- lied
- expire
- xerox
- modify
- soo
- terminals
- insur
- breakable
- hangouts
- haunts
- southerners
- rudest
- bartenders
- wee
- ferrings
- taiwanese
- jambalaya
- wowed
- univerisity
- arias
- casks
- hospitalization
- hos
- crowns
- fluctuate
- celebr
- inordinate
- axe
- newscast
- js
- recap
- sensationalize
- sensationalized
- asinine
- puzzle
- precede
- preclu
- preclude
- stretches
- wakes
- depreciate
- tru
- unibody
- granddaughters
- gol
- wagging
- trainers
- airheaded
- yappy
- dignified
- culling
- tamper
- innately
- tractable
- selectively
- culled
- belgian
- distinct
- breeds
- kennel
- translates
- shit
- unreliable
- handlers
- indiscriminate
- breeder
- handler
- bab
- doorbell
- stipulation
- laundromat
- grasslands
- surrounds
- betty
- parades
- palestine
- id
- peg
- catalyst
- palestinian
- kindest
- abounding
- kindness
- godly
- compassion
- humanness
- mandarin
- oranges
- grape
- fridge
- gelatin
- carrot
- eggo
- waffles
- adolph
- breakfa
- craftsmanship
- opt
- stanza
- glitters
- oasis
- warp
- clearinghouse
- consolidating
- salespers
- tel
- compan
- announcing
- telepho
- discard
- episodes
- cramp
- vela
- someb
- thirtysomething
- mclaughlin
- yogi
- loner
- comedian
- cantankerous
- echoed
- withdrawal
- grumpy
- stooges
- mouthiest
- kiddos
- mouthy
- touristy
- besieged
- defini
- badgering
- galapagos
- sidney
- adelaide
- chengdu
- quingdao
- retreat
- flights
- rita
- oah
- destitute
- ree
- snorkeling
- prawns
- milli
- arsenal
- traffi
- bennett
- gangsters
- corp
- arr
- pris
- crowding
- statutory
- verbalizing
- stints
- citing
- intensity
- limbaugh
- lamenting
- microwaved
- healthiest
- teases
- accuses
- deprivation
- nourishing
- evaporated
- broil
- marinara
- grapefruit
- starch
- pleasurable
- kalli
- cater
- rodolfo
- royal
- maitre
- pilgrim
- unnatural
- lookout
- arby
- wastes
- reduces
- speedup
- healthily
- sup
- quoting
- disputes
- commas
- reevaluated
- inma
- blinded
- restitution
- willfully
- contradictory
- caveman
- coleslaw
- tablecloths
- bakeries
- regretted
- purch
- pastrami
- '''oeuvre'
- complicat
- sustain
- addressing
- fellowship
- prefers
- troublesome
- camels
- beatle
- orchestration
- okeydoke
- statler
- stated
- debut
- investigating
- bootstraps
- baptisms
- clergy
- imprisoned
- confiscated
- bourgeoisie
- commonality
- recanting
- courtyard
- motions
- commandant
- escaped
- perseverance
- bureauc
- persecuted
- dab
- chorus
- mothering
- rerate
- precluding
- analogy
- spade
- marketeer
- warring
- peacefully
- trampling
- fantas
- crabby
- coated
- willis
- sarandon
- gena
- vatican
- paradeso
- befriends
- friendship
- califor
- drying
- nippy
- mucky
- thunderstormed
- shoveling
- michelle
- lan
- footnoting
- retype
- appetizer
- criterion
- alumnae
- heavyset
- poignant
- subtleties
- gore
- warlock
- omelet
- characterizing
- conceited
- portay
- goer
- prosecu
- cutor
- struggles
- flowing
- ir
- slicing
- locust
- omar
- swallowed
- redwood
- brownstone
- caulking
- myneer
- spacious
- inhaled
- revived
- airway
- revive
- sol
- dignity
- luxurious
- blossoming
- brazos
- sleeps
- purdis
- sandlin
- quake
- mak
- caramelized
- customary
- orchard
- accor
- ply
- crier
- waistline
- jewels
- earhart
- thurow
- perceptive
- pinpointing
- flimflam
- hughes
- assis
- plod
- rereading
- ditched
- findings
- bonfire
- vanities
- temporally
- burdened
- cafeterias
- linen
- napkins
- duplexes
- hodgkin
- undergoing
- interim
- constancy
- sufficiently
- farfetched
- wheeler
- cock
- slowing
- pals
- unjudgmental
- homy
- reprimand
- secrets
- brooksville
- campuses
- eyesight
- enrichment
- schooled
- rejection
- proceed
- herman
- foreigners
- polluter
- rigs
- busses
- incinerate
- pollutant
- untold
- cockroach
- accelerated
- nutrients
- sponges
- tending
- newark
- vividly
- entrance
- biggies
- consumable
- calculation
- physiology
- snowball
- dieters
- robbers
- trendsetters
- correspond
- circulates
- centralize
- descendancy
- closeness
- caliber
- differentiate
- stevens
- shippensburg
- specializes
- novelist
- intricately
- johann
- sebastian
- copyright
- compile
- poems
- baudelaire
- jennie
- abridged
- reunited
- rituals
- equated
- communion
- repetitively
- vernon
- salmonella
- silverware
- caterer
- biographer
- obituaries
- succeeded
- vigor
- bulletins
- chorals
- beginner
- violinist
- percussion
- accompany
- choruses
- audition
- verdi
- hermit
- vacationed
- anonymous
- whirlwinded
- effortlessly
- elicited
- unwound
- guadalupe
- penetrates
- alda
- burt
- reynolds
- vignettes
- dinosaurs
- robots
- satur
- sniping
- howling
- gleason
- snippets
- idle
- workshop
- gra
- dividing
- moses
- hab
- scavenge
- conserve
- indulgent
- exceptions
- contemplate
- permitting
- calming
- aboard
- docks
- cozumel
- ocho
- rios
- jurisdictions
- tapping
- lynda
- slandered
- landslide
- thornburg
- landslided
- characteristically
- savory
- petition
- resisted
- dirtier
- muddier
- sensibilities
- transpired
- nixon
- edible
- accumulating
- elbow
- cho
- grandes
- refried
- katy
- avocados
- avocado
- coolwhip
- horseshoes
- auctions
- sidelines
- loosely
- socioeconomic
- tracked
- pressured
- vandalism
- outward
- custodial
- skyline
- irritable
- unattended
- environments
- dunked
- compaq
- honk
- prodigy
- mush
- shareware
- paradox
- shooter
- crawford
- andrew
- webber
- paranoid
- unlucky
- anonymously
- competency
- wholesale
- lon
- exa
- beginnings
- kuenzer
- rebelled
- debtor
- angela
- eyeglasses
- indiv
- staffing
- examines
- optometrist
- ophthalmologist
- extractions
- publication
- unfeasible
- bettle
- orthodontal
- outsor
- roo
- suite
- scattering
- leniency
- underhanded
- perpetrator
- injustices
- wherein
- dist
- unsavory
- elimi
- rarity
- chairmen
- ministers
- congregations
- catholicism
- forthright
- disorders
- soothe
- exertion
- characteristic
- cram
- guarded
- sacrificing
- mediators
- interpersonal
- mediator
- doable
- devised
- stimulations
- goof
- whipping
- nickie
- snail
- hards
- futuristically
- subjective
- harmony
- impregnated
- challenges
- motherly
- competent
- militaristic
- colonel
- infantry
- embrey
- reynold
- riddle
- aeronautical
- pratt
- whitney
- daphne
- dictated
- qualifying
- rhodes
- scholars
- homogeneous
- realities
- socialization
- insular
- sheriffs
- evict
- continuances
- abundantly
- appealing
- retried
- lowers
- percep
- gypped
- slicker
- bruno
- kirby
- chauvinistic
- punching
- correlations
- opium
- dens
- weakened
- duress
- drunken
- induced
- legalized
- quantify
- deg
- safeguards
- fraction
- oath
- sensings
- sentencings
- pertains
- introduction
- accordance
- clark
- parachute
- presiding
- reorganizing
- sweeper
- univerty
- versity
- lakeway
- expose
- jun
- bethany
- unfocused
- midst
- instigated
- marrie
- remained
- tomorr
- whitmore
- arbor
- slushy
- sled
- icy
- lingering
- exodus
- eternally
- snowfall
- grassy
- sachse
- goddard
- stickler
- mulcher
- seni
- antisocial
- adapting
- deteriorates
- glimpse
- unwilling
- appalachia
- stopgap
- rougher
- strategic
- fails
- worded
- peoria
- dropouts
- insecure
- scaring
- stylish
- interpretive
- fathom
- expanding
- wean
- referrals
- advisory
- myrtle
- barricaded
- blackberry
- defeats
- enchila
- boiled
- toasted
- calorie
- hereditary
- headstart
- preschooler
- tacos
- tamales
- romanian
- backfires
- waiters
- batty
- momo
- colter
- pas
- campari
- adventured
- souper
- prey
- backlogged
- patrolled
- frus
- imme
- dialogue
- aisles
- cornball
- overacted
- applauding
- waterskiing
- ashley
- jamie
- warner
- deanna
- cheeks
- backdraft
- berry
- raspberries
- shaved
- entrees
- accompaniments
- gershwin
- puree
- antipollution
- gases
- accumulates
- groundwater
- fusion
- optimistic
- pessimistic
- reconvicted
- sicko
- merciful
- cannibalism
- hunch
- coordinate
- communicable
- memos
- orchestral
- fiddler
- oboe
- classy
- corresponds
- christening
- elijah
- marches
- poinsettias
- bouncy
- haunting
- conventional
- disposal
- odors
- throwaway
- ditches
- drinkers
- churn
- shipwrecked
- explodes
- maims
- sylvester
- mermaid
- outfitted
- crushing
- hobnail
- phobia
- bifocers
- trifocals
- mccalls
- byte
- afflicted
- exceeded
- antibody
- realm
- telethons
- doling
- receives
- ociety
- aesthetic
- enhancing
- frightens
- dahmer
- burglary
- enquirer
- cranks
- fuzz
- repala
- sil
- shiny
- heartbeat
- spins
- rainbow
- packaged
- trespass
- tidbit
- refrozen
- cheesecakes
- refreeze
- liabilities
- wrecks
- tattoos
- speedboats
- chambers
- afloat
- maneuvers
- stormy
- nibble
- rope
- entice
- sneaking
- paged
- favo
- flyer
- shaky
- iffy
- sentra
- subdued
- urinalysis
- bums
- overdress
- overkill
- businesslike
- nylons
- nutrisystem
- dreaded
- toppers
- ceramics
- seamstress
- cramped
- negligent
- initiates
- squeegees
- newscasters
- postponed
- a1
- alfredo
- clowning
- circuits
- sfuzzi
- copeland
- transported
- thirteenth
- wobbly
- bookends
- jug
- viscosity
- saver
- brushed
- tooken
- turpentine
- towels
- shi
- jul
- shindig
- boulevard
- maizeland
- skier
- minnie
- canaveral
- reschedule
- hilton
- eighteenth
- raton
- '287'
- '70'
- broadmoor
- breckenridge
- trinidad
- '25'
- hexpired
- disheartening
- elders
- albertson
- limbs
- sodas
- arranged
- brookshires
- pickle
- piles
- emporium
- cinch
- consolidate
- alluring
- cupcake
- henpecked
- instilled
- gatherings
- subtracts
- debits
- incidentals
- scotch
- igloos
- strateg
- strategically
- incurred
- cashes
- reunio
- entryway
- roaming
- ris
- risen
- appraisal
- disoriented
- blissful
- unexpectedly
- cockroaches
- complacent
- bitterly
- polling
- campaigning
- napping
- structuring
- digested
- perfumes
- geese
- peaked
- balloon
- canyons
- weatherwise
- sleet
- maps
- sy
- pearls
- loafers
- distinguishes
- '1200'
- whereby
- extract
- generates
- bursts
- navc
- blazey
- obscure
- promotes
- goe
- refrigerate
- tartness
- raspberry
- connoisseur
- tastings
- mesina
- exorbitant
- kaiser
- mccullum
- catastrophic
- implants
- transplants
- howe
- dislikes
- chopin
- expresses
- discussions
- chords
- panicking
- kielbasa
- bak
- ravioli
- reggae
- twangy
- agr
- cackle
- atteck
- scholar
- adolf
- imaginative
- sty
- antiques
- winnie
- pooh
- grimm
- fairy
- tales
- gentlest
- jewel
- restroom
- spitz
- extravagant
- overpass
- littering
- timers
- tans
- mauve
- distantly
- swap
- bichons
- barks
- hind
- origina
- bernards
- lega
- belittling
- liberals
- suppos
- tcat
- examination
- clicker
- screens
- carpooled
- bolivia
- sundresses
- polyester
- overheat
- sweltering
- newborn
- pleats
- absent
- strep
- bookkeeper
- partitions
- duality
- extenuating
- newsworthy
- leafing
- mccall
- subscribing
- gott
- newsy
- putterer
- caladiums
- hardened
- semitropical
- carrollton
- architecture
- hairless
- coon
- manx
- tame
- ships
- folklore
- faint
- chincoteague
- burgers
- teriyaki
- shakes
- grandy
- fend
- snowballed
- inconveniences
- woozy
- sys
- squirt
- flicking
- whales
- showtime
- adder
- dragon
- rosa
- sorrento
- dine
- mah
- jongg
- yearbook
- imprinted
- depreciated
- cribs
- bestes
- giver
- enables
- ly
- confining
- bronco
- moder
- cowb
- cheer
- schnauzers
- dachshund
- starved
- curled
- skittish
- spaying
- belon
- severing
- sr
- suicidal
- craziness
- mistrust
- lacks
- poland
- weeding
- mankind
- uninsurable
- medcenter
- hearings
- overstaffed
- mortgages
- outlaid
- intergovernmental
- plugging
- indepth
- capsize
- sensationalism
- blase
- sel
- sadist
- oleo
- oregano
- ight
- semolina
- absorbs
- vulnerable
- align
- bombings
- aligned
- tensions
- forceful
- cr
- expedited
- deserving
- mandate
- grassroots
- introspective
- schoo
- visitation
- advantaged
- energies
- tiananmen
- custodians
- immigrated
- brightest
- burst
- lanes
- winterized
- yourselfer
- representatives
- homemaking
- accessed
- uzi
- flyswatter
- utilized
- acquiring
- illicit
- gatlinburg
- cosa
- hiked
- ardmore
- cloud
- ledges
- hyatt
- gully
- trench
- tenkiller
- enlisting
- seductive
- pinion
- totality
- revealed
- legislat
- abrupt
- ruder
- arrives
- '1'
- microcomputers
- gateway
- apollo
- faulkner
- emblem
- candice
- bergen
- ghosts
- haunted
- dianetics
- gibberish
- broudigan
- journeys
- mailman
- karl
- malone
- hacking
- fillmont
- generically
- cyclist
- techy
- hackers
- davy
- crockett
- sailor
- sailed
- mck
- equalize
- semiretired
- dementia
- insisted
- rejuvenating
- coldest
- cus
- celltrex
- jeri
- maceo
- rampages
- cocoons
- occa
- uniqueness
- winfrey
- prebuilt
- workbench
- subcontracted
- subbed
- scramble
- championships
- peacefulness
- birdie
- quadruple
- whizzing
- spectators
- scrambles
- kerr
- mcgee
- infrared
- suffice
- notifies
- supplying
- angles
- anticrime
- outings
- sec
- arlene
- lister
- poked
- togethers
- dearly
- swoosh
- skate
- begonias
- destruct
- concessions
- drizzly
- huddled
- cages
- fanatics
- straightforward
- piston
- oiling
- altog
- reelection
- provisional
- locate
- incomewise
- ifs
- ands
- buts
- '4'
- hel
- discontinue
- narrowing
- nitty
- gritty
- faithful
- shoppers
- yourselves
- straighten
- stems
- relating
- supporters
- antisupporters
- contras
- dictator
- fascist
- siesta
- mouths
- reflecting
- dabble
- chalk
- chesapeake
- suspended
- ath
- tutored
- goofing
- piney
- diameter
- calmness
- outwitting
- shiners
- infla
- inflatable
- raft
- cottonmouth
- coves
- walkie
- talkies
- handcrafted
- semifixed
- automated
- crafted
- stateside
- adage
- advising
- embarrassment
- jessie
- helms
- intelligently
- mistreated
- papa
- doc
- tyrant
- puberty
- tibby
- perfumed
- legendary
- brookies
- rainbows
- accommodated
- specialists
- replanted
- rods
- norfolk
- portsmouth
- hikes
- pests
- chaperon
- calloway
- variegated
- beetles
- borderline
- zaps
- ligustrum
- apron
- gourds
- bolton
- symphonies
- caller
- sax
- houseful
- crabs
- sensation
- tingling
- oddball
- waitressing
- crunches
- relevance
- federally
- hogs
- barns
- revealing
- horticultural
- groundskeepers
- dormant
- centipede
- crops
- behold
- cuttings
- mit
- diamante
- boozier
- passengers
- shining
- becca
- nina
- palmer
- remarrying
- griffins
- crackers
- burritos
- debone
- notoriety
- jurisprudence
- thoroughfare
- sleeper
- herd
- cima
- savages
- plywood
- beams
- migrate
- undercover
- barbiturates
- codeine
- drixoral
- unsolved
- mcgillis
- weeknights
- physicist
- facet
- hurst
- greensboro
- celebrities
- repeaters
- zealand
- statistically
- outbound
- astronomy
- gallagher
- pictured
- betters
- hubble
- telescope
- planets
- habitable
- backers
- zippers
- snaps
- dull
- pretechnology
- shelled
- duplicates
- regulat
- regulators
- regulator
- lever
- pulley
- chev
- oi
- resur
- ourse
- hesitating
- russ
- noons
- flaw
- gasket
- fury
- exceptionally
- surfaced
- repeatedly
- escapes
- pragmatic
- consti
- opponents
- laural
- squeaked
- andrews
- clou
- crept
- firewood
- maples
- dogwoods
- lowell
- unu
- periodicals
- historic
- interes
- lawful
- scanners
- attempted
- thoroughness
- mag
- announcers
- tele
- ivan
- rodriguez
- ballplayers
- routing
- enthusiast
- ducted
- gettin
- brussels
- sprouts
- kale
- pony
- grazing
- pears
- extinguishers
- depleter
- extinguisher
- timed
- contaminants
- probe
- ionization
- miller
- temptation
- squareness
- buckles
- fea
- lettering
- vin
- vinyl
- balloons
- recy
- commented
- nudge
- decomposable
- flips
- emptying
- regressive
- defen
- kate
- curves
- raphael
- atchafalaya
- sausa
- alvarez
- applebee
- nonstructured
- torture
- nur
- fai
- glorious
- esoteric
- producer
- hairspray
- batch
- partic
- preteen
- unlikely
- dynamic
- raunchy
- horrifyingly
- poppins
- differed
- eclipses
- belie
- lebaron
- peeling
- gears
- oklahoman
- beatings
- proy
- condoms
- stupidity
- truthful
- faded
- marker
- reflective
- adheres
- sealing
- dings
- variance
- prop
- pressuring
- primed
- bragging
- sickening
- shitty
- drags
- burners
- putts
- teeing
- lodging
- dialers
- provision
- specify
- dialing
- prised
- weir
- overloads
- hoosiers
- crossing
- delancey
- thrillers
- backless
- ani
- nick
- nite
- dragnet
- bald
- marlo
- collier
- brigham
- estonia
- agriculture
- foodwise
- rioting
- secede
- proportionately
- hinders
- tubs
- brougham
- trunks
- shy
- gadgetry
- '6'
- interiors
- veered
- revolving
- reverting
- envy
- exhausts
- hairy
- gettingest
- daught
- bertinelli
- dysfunctional
- childfaring
- miracles
- bette
- midler
- redbook
- previewing
- postage
- unauthorized
- mayors
- discredit
- ps
- productions
- chariots
- gladiator
- fluent
- batches
- subtitle
- subtitled
- gems
- supernatural
- accusing
- migh
- mondays
- thrust
- lifters
- drills
- rocking
- referee
- abrasive
- maintaining
- posed
- refusing
- coins
- conversions
- dormitory
- unused
- ramp
- hydraulic
- disposer
- escapement
- incorporating
- leonard
- nimoy
- trekkie
- luke
- spock
- mccoy
- admiral
- hobbled
- vulcans
- doohan
- scotty
- addams
- averaging
- decrease
- munich
- snows
- chattanooga
- lori
- coldness
- membered
- unemp
- fetus
- complications
- slobs
- equation
- nameless
- malformed
- sincere
- deliberations
- dismissed
- indicted
- revenge
- subsequent
- provoked
- provocation
- qualifies
- mitigating
- contender
- linguini
- hawaiian
- luau
- angie
- shellfish
- clam
- cheeses
- nachos
- resurrection
- lutheran
- scanned
- cooperating
- toss
- inmate
- interpretation
- blanks
- executioner
- bamorghini
- skyhawk
- dominican
- nantes
- castles
- vineyard
- consignment
- goodwill
- crushes
- sewer
- res
- unoccupied
- assassinated
- menace
- perspec
- relativity
- vantage
- weighted
- reflect
- subservient
- integration
- ith
- frien
- drudgery
- montpe
- mont
- monteplier
- montpelier
- everett
- yack
- tromping
- unlimited
- wedge
- fairway
- flus
- startling
- '286'
- turret
- scien
- simulators
- plugged
- upgrades
- custer
- '386'
- trenches
- trencher
- stunt
- cul
- sac
- rearranged
- clancy
- novell
- netware
- ark
- ladonna
- peck
- bourne
- ultimatum
- enveloped
- amsterdam
- holland
- harpsichordist
- forte
- warrington
- cheating
- harry
- heroic
- mayfield
- corrupts
- lig
- hatteras
- imaging
- legalese
- himsnelf
- koop
- scarcity
- highland
- jogs
- gyms
- inequities
- stimulate
- deductor
- bentsen
- drunks
- lafferty
- infringe
- snuffed
- snuff
- compares
- gilmore
- accomplishes
- william
- thrice
- mating
- sows
- suckling
- hernia
- carcass
- cloves
- pineapples
- cranberries
- hominy
- barb
- automatics
- avis
- crashed
- lens
- porsche
- turbo
- carrera
- mys
- mushrooming
- percentagewise
- folderol
- lifeguard
- jarring
- flui
- watchers
- pokes
- blamed
- ceases
- intravenous
- cell
- quests
- subsidies
- slashed
- entitlement
- trades
- beauticians
- unending
- spiral
- consumers
- unf
- ailments
- magerick
- celtic
- transplanted
- rolando
- harper
- plaint
- straighter
- dayer
- plumbed
- bolted
- logan
- accredited
- professorship
- distressing
- fiel
- treasury
- refunds
- halt
- spying
- scaled
- loading
- challenger
- stat
- mirv
- roomy
- cargo
- recommends
- volvos
- wagons
- conscientiously
- emiss
- hypothesize
- muncie
- terre
- haute
- triggering
- verify
- drivable
- emerges
- overgrazed
- reclaimed
- prettiest
- palm
- paintbrush
- septic
- hummingbirds
- hummingbird
- pooped
- annuals
- countrified
- supermarket
- coaster
- afterburners
- gliding
- oomph
- subs
- gambled
- insulating
- spec
- verandas
- genes
- drapes
- guppies
- platies
- fishies
- glacier
- playgrounds
- wilderness
- scaries
- rayburn
- curling
- nominal
- fulfill
- synagogue
- geriatrics
- app
- degenerative
- communiky
- enhance
- assist
- text
- biogra
- daniels
- prince
- phillip
- criticizing
- miniseries
- scarlett
- spectacular
- torrents
- ligh
- horizontally
- arid
- crisp
- sleigh
- brighton
- springtime
- skie
- hammered
- subtly
- brianna
- lib
- submerged
- loosening
- leaks
- tar
- gravel
- plastered
- drywalled
- plastering
- terri
- exasperating
- swelling
- squirming
- swells
- shrinks
- retains
- highlight
- captive
- legos
- technic
- lego
- stare
- engagements
- sousa
- refreshments
- rehearsal
- donations
- municipal
- conduct
- nitny
- altoona
- lockhaven
- nighttimes
- ama
- emerson
- maceboast
- circuitry
- vacationer
- wausau
- unduly
- sunglasses
- grip
- durable
- faulty
- recliner
- pinto
- sequoias
- redwoods
- bryce
- tetons
- sequoia
- driveways
- snowmen
- snowballs
- marketed
- acceleration
- suspension
- lumbar
- sma
- bur
- skyrocketing
- govern
- exclude
- ballgame
- warrant
- rounds
- brats
- eff
- nativity
- facings
- casings
- relieve
- strase
- reliever
- relieving
- sander
- cabinet
- equipments
- dado
- rotary
- sicknesses
- bryan
- mamas
- packards
- solburns
- frown
- niggardly
- chintzy
- megs
- mirroring
- epidemic
- immunizations
- rays
- mumps
- rubella
- inaccuracy
- defined
- issued
- hypocritical
- stings
- laundering
- contr
- governed
- discomfort
- stea
- holster
- spontaneous
- headquarters
- bitterest
- fluctuations
- texts
- doen
- rosie
- '''neil'
- thomases
- trimmer
- clump
- tithing
- homeowner
- computerization
- stale
- subroutine
- libra
- clara
- beastie
- triggered
- pledged
- fren
- ally
- organi
- trombone
- weathers
- facetious
- directors
- spells
- compulsive
- childr
- fluffs
- toppings
- brea
- torque
- underdrive
- sportier
- beetle
- coolers
- bonneville
- secondaries
- quadrajet
- compulsion
- elevation
- variations
- hilltops
- mines
- hamster
- cruelty
- parakeet
- parakreet
- burmese
- deactivated
- infatuated
- jobbies
- visualize
- boggling
- slid
- clamped
- kisses
- everywh
- brag
- gramm
- overturning
- renegotiate
- kickbacks
- valdez
- defi
- batted
- hangs
- threats
- emit
- che
- churning
- remembrance
- networking
- conformance
- wyatt
- extremey
- bennigan
- vincent
- chefalia
- whataburger
- zillion
- mercado
- juarez
- tallest
- ewaldes
- cont
- stoneleigh
- chews
- yapping
- collies
- roughest
- hollered
- battling
- obedience
- squats
- vaca
- pilgrims
- medieval
- relics
- bemerton
- newness
- turin
- muffins
- requests
- helman
- tart
- zing
- cele
- layering
- fluffier
- joins
- jennifer
- unselfish
- tutoring
- affiliated
- aimlessly
- perky
- shins
- hyper
- burdensome
- earphones
- timbuktu
- onna
- lieutenant
- biologist
- sliding
- tremors
- variedly
- bakers
- aprons
- sweatshirt
- wigs
- lamb
- bunnies
- symbols
- milky
- polytechnochloride
- mought
- trashmore
- lifts
- riverview
- tranged
- strongest
- recessionary
- stagnate
- unteachable
- prominent
- chide
- remaining
- backbone
- newborns
- fullest
- firewh
- daffodil
- jung
- aquinas
- libretto
- rossini
- mahler
- dutchen
- trumpets
- elixir
- floated
- swapped
- tyme
- tempco
- trooper
- gisland
- carribean
- unpacking
- lotto
- alcatraz
- hairdresser
- crui
- janice
- furry
- eaves
- rafter
- cactuses
- furrows
- wrung
- plink
- construe
- thinkings
- bue
- buechele
- grieves
- gullible
- manufactures
- borden
- bib
- overalls
- oshman
- evaluated
- unfor
- linguistic
- austria
- niagara
- coasts
- carolinas
- leisurely
- modesto
- cheeseburgers
- incapable
- hygienic
- inoperable
- oxygen
- banish
- relocated
- realtor
- listings
- precautions
- integrate
- cooperatives
- reallocate
- reorganize
- accelerate
- transient
- commish
- tenderhearted
- galaxies
- crud
- mutations
- feazure
- ballooned
- reclamation
- merits
- axiom
- fiends
- sensitivity
- aboveboard
- evaluating
- veggies
- unarmed
- resembling
- tallow
- scalloped
- weighing
- strap
- squeaker
- closing
- mullin
- squeakers
- marquee
- bluish
- hydrogen
- sulfide
- h2s
- ramps
- vaccine
- preventable
- syringes
- needles
- feared
- ruf
- riffraff
- haves
- nots
- earhout
- bulletproof
- vest
- hedge
- tollbooth
- hatcher
- taverns
- sailboats
- ancle
- lounge
- cocktail
- sailer
- cruiser
- hull
- spars
- rigging
- gusts
- wearisome
- flaky
- markups
- arming
- stra
- quail
- swedish
- munch
- intermission
- doughy
- frosts
- iceberg
- schoolteacher
- altrusa
- upholstery
- garl
- jupiter
- musically
- auditions
- repertory
- outlet
- auditory
- lear
- educationally
- verified
- chording
- pianist
- min
- ec
- subbranch
- emigrated
- beware
- entrepreneurial
- ventures
- banked
- stored
- footsteps
- postcards
- notify
- notifying
- steals
- hides
- subsequently
- corrective
- leers
- downright
- outright
- shu
- newest
- apathetic
- absol
- prolong
- roofing
- retool
- zigzag
- kan
- untalented
- washed
- salvageable
- gluing
- feds
- interrupting
- faults
- caucasian
- educ
- thei
- officed
- deputy
- pruned
- gladiolas
- amaryllis
- conf
- plantings
- sprout
- narcissus
- psychic
- rerun
- activate
- rusted
- rusts
- fenders
- repainted
- acco
- dreary
- expen
- salting
- weinstocks
- wad
- hilt
- dolphene
- feelt
- throwed
- wheelchairs
- emjoy
- anheimer
- tela
- kindly
- innovated
- endeavors
- adam
- particulars
- abusive
- evolutionary
- duplication
- imagers
- allocate
- optimally
- squawk
- evolution
- insurers
- entity
- burnable
- ticketed
- charities
- braved
- suede
- cardigan
- appointments
- unlined
- toasty
- lightweight
- fireplaces
- dense
- ethanol
- smokestacks
- mowers
- wedded
- organism
- nutritionally
- bamba
- szechuan
- pancho
- binders
- assignments
- developments
- cashew
- avoiding
- suey
- disburse
- squeeze
- sq
- faculties
- pauper
- brokerage
- anticipation
- cherished
- commodity
- famuel
- slopes
- biness
- furlough
- promoted
- nec
- shasta
- salmon
- sk
- walleye
- fighters
- fillet
- foil
- seekers
- scrutiny
- tarrant
- bobsy
- accu
- smiled
- growled
- mistrials
- railroaded
- convalescent
- unsettling
- senile
- graying
- exercisings
- unaffordable
- restricts
- casse
- gabrielli
- bankrupted
- cello
- viola
- composers
- boutiques
- darling
- chanting
- canseco
- ramming
- vinny
- utility
- outweighing
- sundance
- smithsonian
- crosswords
- planners
- artists
- bazo
- faron
- spiro
- gyro
- dulcimer
- jarreau
- contorted
- bonnie
- rait
- grammy
- unedu
- sprayer
- routers
- cookie
- varnish
- smoother
- hayloft
- franklin
- gradual
- increasement
- torpedoed
- downside
- blythe
- tonkin
- macintoshes
- graphical
- multitasking
- gestures
- vocabulary
- compilers
- consultation
- interactive
- discriminating
- correlate
- funnest
- gentler
- panicked
- sassy
- westmin
- westminster
- infra
- mondale
- situa
- circuses
- disrepair
- dashboard
- ce
- beefing
- patrols
- visibility
- lifted
- cumberland
- cobb
- thefts
- superficial
- cracked
- electrically
- manufactured
- bordering
- elects
- aerodyne
- aerob
- brace
- publicize
- killings
- duri
- commentators
- blurbs
- bog
- dur
- countdown
- newscasts
- unreasonable
- moderator
- unorganized
- moderated
- assumingly
- importers
- dahlmer
- ohi
- nightmarish
- withheld
- sovereign
- martial
- puritanical
- permissible
- acquitting
- acquit
- impaneling
- dismissing
- foreman
- deliberating
- una
- restate
- unannounced
- sweep
- definitive
- bodily
- behaviors
- enters
- privacies
- melanie
- spry
- announcements
- anson
- fayetteville
- waynesboro
- delinquency
- fre
- gainfully
- tremen
- thriving
- towar
- grit
- pail
- latent
- compression
- ovens
- armor
- fierce
- finagle
- nationalizing
- cutoff
- operat
- unionized
- distinction
- institutionally
- expedient
- innovativeness
- expedi
- unequal
- plaintiff
- novices
- bets
- leaky
- luby
- taping
- promo
- blurb
- mutt
- hooper
- veterin
- spay
- neuter
- frie
- shorties
- decreased
- unrestricted
- glut
- magnum
- rushes
- oper
- preset
- styro
- frank
- shocks
- allot
- frowned
- chronicle
- analytical
- abnormality
- overwhelmingly
- academia
- descriptions
- addictive
- reevaluate
- divvy
- allocated
- psy
- psychedelic
- crosby
- stills
- performers
- secular
- druggie
- shipping
- maximize
- actuall
- revelation
- polymers
- roadways
- hoop
- funn
- heavenly
- retailers
- induce
- inducement
- recycler
- saskatoon
- welfor
- employing
- deposits
- arithmetic
- sums
- colleague
- internet
- infusions
- incurring
- surveying
- assesses
- footloose
- smattering
- greetings
- snobby
- paled
- refrained
- acute
- indivigal
- thrives
- categorized
- receptionist
- lar
- curve
- critter
- incumbent
- entrenched
- standardizing
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: true
```
</details>
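A minimal decoding sketch with ESPnet2's `Speech2Text` (hedged: `MODEL_TAG` is a hypothetical placeholder for this repository's Hub id, `espnet_model_zoo` is assumed installed, and the input is assumed to be 16 kHz mono to match `fs: 16k` in the config above):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text
# MODEL_TAG is a hypothetical placeholder for this repo's Hub id.
speech2text = Speech2Text.from_pretrained("MODEL_TAG")
speech, rate = soundfile.read("speech.wav")  # 16 kHz mono, per fs: 16k above
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```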
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
776f7890b1f0f2615e3234e25cfbc211
|
jaykmr/ESMCrystal_t12_35M_v2
|
jaykmr
|
esm
| 28 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 8,802 | false |
## Label Semantics
- Label 0: Non-crystallizable (Negative)
- Label 1: Crystallizable (Positive)
## Dataset
1. [DeepCrystal Train](https://huggingface.co/jaykmr/ESMCrystal_t12_35M_v2/blob/main/Datasets/train.csv)
2. [DeepCrystal Test](https://huggingface.co/jaykmr/ESMCrystal_t12_35M_v2/blob/main/Datasets/test.csv)
3. [BCrystal Test](https://huggingface.co/jaykmr/ESMCrystal_t12_35M_v2/tree/main/Datasets/BCrystal_Balanced_Test_set)
4. [SP Test](https://huggingface.co/jaykmr/ESMCrystal_t12_35M_v2/tree/main/Datasets/SP_Final_set)
5. [TR Test](https://huggingface.co/jaykmr/ESMCrystal_t12_35M_v2/tree/main/Datasets/TR_Final_set)
## Model
#### ESMCrystal_t12_35M_v2
ESMCrystal_t12_35M_v2 is a state-of-the-art protein crystallization prediction model, fine-tuned via transfer learning from [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D). It has 12 layers and 35M parameters ([approx. 136 MB](https://huggingface.co/jaykmr/ESMCrystal_t12_35M_v2/blob/main/pytorch_model.bin)) and predicts whether an input protein sequence will crystallize.
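A minimal inference sketch with the `transformers` Auto classes (an assumption — this card does not show loading code; the example sequence is an arbitrary placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
repo = "jaykmr/ESMCrystal_t12_35M_v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # placeholder amino-acid sequence
inputs = tokenizer(seq, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
# Label 0 = non-crystallizable, Label 1 = crystallizable (see Label Semantics)
print(f"P(crystallizable) = {probs[1].item():.3f}")
```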
## Accuracy:
| Dataset | Accuracy |
|------------------|--------------------|
| DeepCrystal Test | 0.8161222339304531 |
| BCrystal test | 0.8052602126468943 |
| SP test | 0.7637130801687764 |
| TR test | 0.8389328063241107 |
## Comparison Table:
| Dataset | Count | Positives | Negatives | TP | FP | FN | TN | Precision | Recall | F1 | Accuracy | ROC | Matthews Coefficient | PPV | NPV |
|---------|-------|-----------|-----------|----|----|----|----|-----------|--------|----|----------|-----|----------------------|-----|-----|
| DeepCrystal Test | 1898 | 898 | 1000 | 579 | 319 | 30 | 970 | 0.64476615 | 0.95073892 | 0.76841407 | 0.81612223 | 0.9403 | 0.657526117 | 0.64476615 | 0.97 |
| BCrystal Test | 1787 | 891 | 896 | 573 | 318 | 30 | 866 | 0.64309764 | 0.95024876 | 0.76706827 | 0.80526021 | 0.9396 | 0.644635696 | 0.64309764 | 0.96651786 |
| SP Test | 237 | 148 | 89 | 97 | 51 | 5 | 84 | 0.65540541 | 0.95098039 | 0.776 | 0.76371308 | 0.9293 | 0.586069704 | 0.65540541 | 0.94382022 |
| TR Test | 1012 | 374 | 638 | 225 | 149 | 14 | 624 | 0.60160428 | 0.94142259 | 0.73409462 | 0.83893281 | 0.9562 | 0.658766192 | 0.60160428 | 0.97805643 |
## Graphs
### ROC-AUC Curve
* DeepCrystal Test

* BCrystal Test

* SP Test

* TR Test

### PR-AUC Curve
* DeepCrystal Test

* BCrystal Test

* SP Test

* TR Test

## Final scores:
* on DeepCrystal test:
| | precision | recall | f1-score | support |
|--------------------|-----------|--------|----------|---------|
| non-crystallizable | 0.75 | 0.97 | 0.85 | 1000 |
| crystallizable | 0.95 | 0.64 | 0.77 | 898 |
| accuracy | | | 0.82 | 1898 |
| macro avg | 0.85 | 0.81 | 0.81 | 1898 |
| weighted avg | 0.85 | 0.82 | 0.81 | 1898 |
* on BCrystal test:
| | precision | recall | f1-score | support |
|--------------------|-----------|--------|----------|---------|
| non-crystallizable | 0.73 | 0.97 | 0.83 | 896 |
| crystallizable | 0.95 | 0.64 | 0.77 | 891 |
| accuracy | | | 0.81 | 1787 |
| macro avg | 0.84 | 0.80 | 0.80 | 1787 |
| weighted avg | 0.84 | 0.81 | 0.80 | 1787 |
* on SP test:
| | precision | recall | f1-score | support |
|--------------------|-----------|--------|----------|---------|
| non-crystallizable | 0.62 | 0.94 | 0.75 | 89 |
| crystallizable | 0.95 | 0.66 | 0.78 | 148 |
| accuracy | | | 0.76 | 237 |
| macro avg | 0.79 | 0.80 | 0.76 | 237 |
| weighted avg | 0.83 | 0.76 | 0.77 | 237 |
* on TR test:
| | precision | recall | f1-score | support |
|--------------------|-----------|--------|----------|---------|
| non-crystallizable | 0.81 | 0.98 | 0.88 | 638 |
| crystallizable | 0.94 | 0.60 | 0.73 | 374 |
| accuracy | | | 0.84 | 1012 |
| macro avg | 0.87 | 0.79 | 0.81 | 1012 |
| weighted avg | 0.86 | 0.84 | 0.83 | 1012 |
## Confusion matrices:
* on DeepCrystal test:
```
| 579 | 319 |
| 30 | 970 |
```
* on BCrystal test:
```
| 573 | 318 |
| 30 | 866 |
```
* on SP test:
```
| 97 | 51 |
| 5 | 84 |
```
* on TR test:
```
| 225 | 149 |
| 14 | 624 |
```
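The reported scores follow directly from these matrices; a short check using the DeepCrystal-test cells as labeled in the comparison table:
```python
import math
TP, FP, FN, TN = 579, 319, 30, 970  # DeepCrystal test, per the comparison table
precision = TP / (TP + FP)                          # PPV -> 0.6448
recall = TP / (TP + FN)                             #     -> 0.9507
f1 = 2 * precision * recall / (precision + recall)  #     -> 0.7684
accuracy = (TP + TN) / (TP + FP + FN + TN)          #     -> 0.8161
npv = TN / (TN + FN)                                #     -> 0.97
mcc = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))  #     -> 0.6575
```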
## Metrics
ROC score:
* on DeepCrystal test: 0.9403474387527841
* on BCrystal test: 0.9395705567580568
* on SP test: 0.9293197692074097
* on TR test: 0.9561924798417515
Matthews Coefficient:
* on DeepCrystal test: 0.6575261170551334
* on BCrystal test: 0.6446356961702661
* on SP test: 0.586069703866632
* on TR test: 0.6587661924247377
NPV:
* on DeepCrystal test: 0.97
* on BCrystal test: 0.9665178571428571
* on SP test: 0.9438202247191011
* on TR test: 0.9780564263322884
PPV:
* on DeepCrystal test: 0.6447661469933185
* on BCrystal test: 0.6430976430976431
* on SP test: 0.6554054054054054
* on TR test: 0.6016042780748663
Researchers:
* [Jayanth Kumar](https://jaykmr.com)
* [Kavya Jaykumar](https://www.linkedin.com/in/kavya-jayakumar-6390271b5/)
Credits:
* [Meta ESM-2](https://github.com/facebookresearch/esm)
* [Huggingface](https://huggingface.co/jaykmr)
* [Paperspace Compute Cloud](https://www.paperspace.com/)
|
bc418ff5036b5269226f145ddde70916
|
Padomin/t5-base-TEDxJP-0front-1body-1rear
|
Padomin
|
t5
| 20 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-sa-4.0
| null |
['te_dx_jp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,953 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-0front-1body-1rear
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4869
- Wer: 0.1801
- Mer: 0.1739
- Wil: 0.2635
- Wip: 0.7365
- Hits: 55253
- Substitutions: 6626
- Deletions: 2708
- Insertions: 2296
- Cer: 0.1411
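A minimal inference sketch (an assumption — this card does not include loading code; the input string is a made-up example of a raw Japanese ASR hypothesis of the kind the TEDxJP task postprocesses):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
repo = "Padomin/t5-base-TEDxJP-0front-1body-1rear"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
text = "えーっとそれでですね今日はその話をします"  # made-up ASR-style input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```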
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
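For reference, the hyperparameters above map roughly onto the following `transformers` arguments (a sketch, not the exact training script, which this card does not include):
```python
from transformers import Seq2SeqTrainingArguments
args = Seq2SeqTrainingArguments(
    output_dir="t5-base-TEDxJP-0front-1body-1rear",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",  # Adam(betas=(0.9, 0.999), eps=1e-8) is the default optimizer
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```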
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6609 | 1.0 | 1457 | 0.5121 | 0.2181 | 0.2049 | 0.2958 | 0.7042 | 54651 | 6867 | 3069 | 4151 | 0.1880 |
| 0.5633 | 2.0 | 2914 | 0.4719 | 0.1891 | 0.1817 | 0.2714 | 0.7286 | 55015 | 6654 | 2918 | 2644 | 0.1558 |
| 0.5212 | 3.0 | 4371 | 0.4626 | 0.1838 | 0.1771 | 0.2666 | 0.7334 | 55168 | 6635 | 2784 | 2452 | 0.1462 |
| 0.4498 | 4.0 | 5828 | 0.4616 | 0.1807 | 0.1747 | 0.2643 | 0.7357 | 55148 | 6630 | 2809 | 2231 | 0.1420 |
| 0.4058 | 5.0 | 7285 | 0.4633 | 0.1799 | 0.1739 | 0.2631 | 0.7369 | 55200 | 6592 | 2795 | 2231 | 0.1419 |
| 0.3802 | 6.0 | 8742 | 0.4675 | 0.1796 | 0.1733 | 0.2630 | 0.7370 | 55311 | 6636 | 2640 | 2321 | 0.1412 |
| 0.4126 | 7.0 | 10199 | 0.4737 | 0.1781 | 0.1724 | 0.2617 | 0.7383 | 55245 | 6595 | 2747 | 2163 | 0.1394 |
| 0.3436 | 8.0 | 11656 | 0.4772 | 0.1788 | 0.1729 | 0.2624 | 0.7376 | 55247 | 6616 | 2724 | 2208 | 0.1401 |
| 0.3249 | 9.0 | 13113 | 0.4827 | 0.1796 | 0.1735 | 0.2632 | 0.7368 | 55265 | 6635 | 2687 | 2281 | 0.1407 |
| 0.3347 | 10.0 | 14570 | 0.4869 | 0.1801 | 0.1739 | 0.2635 | 0.7365 | 55253 | 6626 | 2708 | 2296 | 0.1411 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
d5cdf8f5f4efdc084393944f92b7af41
|
timm/coatnet_2_rw_224.sw_in12k
|
timm
| null | 4 | 52 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-12k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 21,960 | false |
# Model card for coatnet_2_rw_224.sw_in12k
A timm specific CoAtNet image classification model. Trained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) by Ross Wightman.
### Model Variants in [maxxvit.py](https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/maxxvit.py)
MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.
Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations.
All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 85.0
- GMACs: 15.1
- Activations (M): 49.2
- Image size: 224 x 224
- **Papers:**
- CoAtNet: Marrying Convolution and Attention for All Data Sizes: https://arxiv.org/abs/2106.04803
- **Dataset:** ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('coatnet_2_rw_224.sw_in12k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
    'coatnet_2_rw_224.sw_in12k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 192, 192])
# torch.Size([1, 128, 96, 96])
# torch.Size([1, 256, 48, 48])
# torch.Size([1, 512, 24, 24])
# torch.Size([1, 1024, 12, 12])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
    'coatnet_2_rw_224.sw_in12k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
### By Throughput (samples / sec)
|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80|
|[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41|
|[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34|
|[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73|
|[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04|
|[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86|
|[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05|
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92|
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05|
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28|
|[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11|
|[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47|
|[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13|
|[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78|
|[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60|
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92|
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30|
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17|
|[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22|
|[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78|
|[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78|
|[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38|
|[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77|
|[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64|
|[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01|
|[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42|
|[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35|
|[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65|
|[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43|
|[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74|
|[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59|
|[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95|
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90|
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90|
|[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77|
|[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84|
|[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15|
|[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@article{tu2022maxvit,
title={MaxViT: Multi-Axis Vision Transformer},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={ECCV},
year={2022},
}
```
```bibtex
@article{dai2021coatnet,
title={CoAtNet: Marrying Convolution and Attention for All Data Sizes},
author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing},
journal={arXiv preprint arXiv:2106.04803},
year={2021}
}
```
|
bb7f9d5099533a0c3078e5c0235dd7af
|
derhuli/vit-base-beans
|
derhuli
|
vit
| 9 | 11 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['beans']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0410
- Accuracy: 0.9925
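A minimal inference sketch with the `transformers` image-classification pipeline (the file name is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="derhuli/vit-base-beans")
print(classifier("bean_leaf.jpg"))  # hypothetical local image of a bean leaf
```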
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0751 | 1.54 | 100 | 0.0768 | 0.9850 |
| 0.0121 | 3.08 | 200 | 0.0410 | 0.9925 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.10.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
1d6867ca66c92241aa6bb1434e508622
|
pietrotrope/hate_trained
|
pietrotrope
|
distilbert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,389 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9661
- F1: 0.7730
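A minimal inference sketch with the `transformers` text-classification pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pietrotrope/hate_trained")
print(classifier("I really enjoyed this conversation"))  # illustrative input
```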
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.303025140957233e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4767 | 1.0 | 2250 | 0.5334 | 0.7717 |
| 0.4342 | 2.0 | 4500 | 0.7633 | 0.7627 |
| 0.3813 | 3.0 | 6750 | 0.9452 | 0.7614 |
| 0.3118 | 4.0 | 9000 | 0.9661 | 0.7730 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
2f7569a203b153cc38fb96550ef34cae
|
Leizhang/xlm-roberta-base-finetuned-panx-de-fr
|
Leizhang
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,321 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1631
- F1: 0.8579
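A minimal inference sketch with the `transformers` token-classification pipeline (the sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Leizhang/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))  # illustrative German input
```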
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2878 | 1.0 | 715 | 0.1840 | 0.8247 |
| 0.1456 | 2.0 | 1430 | 0.1596 | 0.8473 |
| 0.0925 | 3.0 | 2145 | 0.1631 | 0.8579 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
b71d74fdebaceee3888a3c5d2f04da19
|
janeel/muppet-roberta-base-finetuned-squad
|
janeel
|
roberta
| 13 | 5 |
transformers
| 2 |
question-answering
| true | false | false |
mit
| null |
['squad_v2']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,241 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muppet-roberta-base-finetuned-squad
This model is a fine-tuned version of [facebook/muppet-roberta-base](https://huggingface.co/facebook/muppet-roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9017
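A minimal inference sketch with the `transformers` question-answering pipeline (question and context are illustrative). Since the model was trained on squad_v2, it may return an empty answer for unanswerable questions:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="janeel/muppet-roberta-base-finetuned-squad")
result = qa(
    question="Where does Tim live?",
    context="My name is Tim and I live in Sweden.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'Sweden'}
```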
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7007 | 1.0 | 8239 | 0.7905 |
| 0.4719 | 2.0 | 16478 | 0.9017 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
e80d1f325b7f1d103591d88a85f117a8
|
Helsinki-NLP/opus-mt-fr-ilo
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-fr-ilo
* source languages: fr
* target languages: ilo
* OPUS readme: [fr-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ilo | 30.6 | 0.528 |
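No usage example is included in the card; a minimal sketch with the standard Marian classes from `transformers` (the French sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ilo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```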
|
05b8c0270f25c48e81bcafb61806b872
|
Salesforce/codegen-350M-mono
|
Salesforce
|
codegen
| 10 | 12,253 |
transformers
| 20 |
text-generation
| true | false | false |
bsd-3-clause
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 2,949 | false |
# CodeGen (CodeGen-Mono 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Mono 350M** in the paper, where "Mono" means the model is initialized with *CodeGen-Multi 350M* and further pre-trained on a Python programming language dataset, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Mono 350M) was first initialized with *CodeGen-Multi 350M*, and then pre-trained on the BigPython dataset. The data consists of 71.7B tokens of Python programming language. See Section 2.1 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
b82ae34b496234c879ede059390662bd
|
PhilSad/gpt-scp-neo-125M
|
PhilSad
|
gpt_neo
| 15 | 6 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,036 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_gptneo125-2
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
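A minimal generation sketch with the `transformers` text-generation pipeline (the SCP-style prompt is illustrative; the training data is not documented here):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PhilSad/gpt-scp-neo-125M")
print(generator("Item #: SCP-", max_length=64, do_sample=True)[0]["generated_text"])
```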
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
d56f52a44fb8beafd7ff51bd78463bb5
|
aipicasso/cool-japan-diffusion-2-1-1
|
aipicasso
| null | 21 | 553 |
diffusers
| 18 |
text-to-image
| false | false | false |
other
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 6,948 | false |
# Cool Japan Diffusion 2.1.1 Model Card

[Notice: China will impose legal restrictions on AI image generation.](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (a warning for people located in China)
English version is [here](README_en.md).
# Introduction
Cool Japan Diffusion is a model fine-tuned from Stable Diffusion that specializes in depicting Cool Japan content such as anime, manga, and games. It has no particular connection with the Cabinet Office's Cool Japan Strategy.
# License
The license is simply the original CreativeML Open RAIL++-M License with an added prohibition on commercial use, with some exceptions.
The prohibition on commercial use (with exceptions) was added out of concern that the model could adversely affect the creative industry.
If this concern is dispelled, the next version will return to the original license and permit commercial use.
Incidentally, a Japanese translation of the original license is available [here](https://qiita.com/robitan/items/887d9f3153963114823d).
If you work at a for-profit company, please consult someone in your legal department.
If you use the model as a hobby, you should be fine as long as you exercise common sense.
As stated in the license, any modification of this model must inherit this license.
# Legal and ethical considerations
This model was created in Japan, so Japanese law applies.
We maintain that training this model is legal under Article 30-4 of the Copyright Act.
We also maintain that distributing this model constitutes neither a principal offense nor aiding and abetting under the Copyright Act or Article 175 of the Penal Code. For details, see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ).
However, as the license states, please handle this model's outputs in accordance with all applicable laws and regulations.
That said, the author does believe that distributing this model is ethically questionable,
because permission was not obtained from the authors of the copyrighted works used for training.
Legally, however, no such permission is required for training, just as it is not for search engines.
Therefore, please regard this distribution as also serving to investigate the ethical, rather than the legal, aspects.
# How to use
If you just want to try the model, use this [Space](https://huggingface.co/spaces/aipicasso/cool-japan-diffusion-latest-demo).
Detailed instructions for this model are given in [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146).
The model can be downloaded from [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-1/resolve/main/v2-1-1.ckpt).
Below is the standard model card, translated from the Japanese.
## Model details
- **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment
- **Model type:** diffusion-based text-to-image generation model
- **Language:** Japanese
- **License:** CreativeML Open RAIL++-M-NC License
- **Model description:** This model generates images appropriate to a given prompt. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip).
- **Notes:**
- **References:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Usage examples
The model is used in the same way as Stable Diffusion v2.
There are many methods; we describe two of them:
- Web UI
- Diffusers
### With the Web UI
Set it up following [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146).
### With Diffusers
Use [🤗's Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the libraries:
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Then run the following script to generate images:
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "aipicasso/cool-japan-diffusion-2-1-1"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "anime, masterpiece, a portrait of a girl, good pupil, 4k, detailed"
negative_prompt="deformed, blurry, bad anatomy, bad pupil, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, poorly drawn hands, bad hands, fused fingers, messy drawing, broken legs censor, low quality, mutated hands and fingers, long body, mutation, poorly drawn, bad eyes, ui, error, missing fingers, fused fingers, one hand with more than 5 fingers, one hand with less than 5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, long body, uncoordinated body, unnatural body, lowres, jpeg artifacts, 3d, cg, text, japanese kanji"
images = pipe(prompt,negative_prompt=negative_prompt, num_inference_steps=20).images
images[0].save("girl.png")
```
**Notes**:
- Using [xformers](https://github.com/facebookresearch/xformers) reportedly speeds things up.
- If your GPU has little memory, use `pipe.enable_attention_slicing()`.
#### Intended uses
- Contests
  - Submissions to the [AI Art Grand Prix](https://www.aiartgrandprix.com/)
  - All data used for fine-tuning will be disclosed so that compliance with the judging criteria can be assessed.
  - If you have requests regarding the contest, let me know via the Hugging Face Community or similar channels.
- News reporting on image-generation AI
  - Permitted not only for public broadcasters but also for for-profit companies
  - We judged that the "right to know" about image-synthesis AI does not harm the creative industry, and we also respect freedom of the press.
- Introducing Cool Japan
  - Explaining to people from other countries what Cool Japan is.
  - International students are often drawn to Japan by Cool Japan, and Alfred Increment feels they are frequently disappointed to learn that Cool Japan is considered "not cool" within Japan. Please take more pride in the culture of your own country that people abroad admire.
- Research and development
  - Using the model on Discord
  - Prompt engineering
  - Fine-tuning (also known as additional training)
    - e.g. DreamBooth
  - Merging with other models
  - Studying the compatibility between the Latent Diffusion Model and Cool Japan
  - Measuring the performance of this model with FID and similar metrics
  - Verifying, via checksums or hash functions, that this model is independent of models other than Stable Diffusion
- Education
  - Graduation projects by art-school and vocational-school students
  - Graduation theses and coursework by university students
  - Teachers conveying the current state of image-generation AI
- Self-expression
  - Expressing your feelings and thoughts on social media
- Uses described in the Hugging Face Community
  - Please ask in Japanese or English
#### Out-of-scope uses
- Presenting things as fact
- Use in monetized content such as YouTube videos
- Offering the model directly as a commercial service
- Doing things that would trouble teachers
- Anything else that adversely affects the creative industry
# Prohibited and malicious uses
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate the Copyright Act.
  - In particular, do not publish existing characters (possible Copyright Act violation).
  - Note that the model can reportedly [generate characters it was not trained on](https://twitter.com/ThePioneerJPnew/status/1609074173892235264?s=20&t=-rY1ufzNeIDT3Fm5YdME6g). (That tweet itself is permitted as research.)
- Do not run Image-to-Image on other people's works without permission (possible Copyright Act violation).
- Do not distribute obscene material (possible violation of Article 175 of the Penal Code).
- Do not violate what are commonly called industry norms.
- Do not present claims not based on fact as if they were fact (this could constitute forcible obstruction of business):
  - e.g. fake news
## Limitations and bias
### Limitations
- Not yet well understood
### Bias
This model carries the same biases as Stable Diffusion.
Please be careful.
## Training
**Training data**
Stable Diffusion was fine-tuned mainly on the following data:
- For the VAE
  - Data compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru: 600,000 items (an unlimited number of images created via data augmentation)
- For the U-Net
  - Data compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru: 1,000,000 pairs
**Training procedure**
The VAE and U-Net of Stable Diffusion were fine-tuned.
- **Hardware:** RTX 3090, A6000
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch size:** 1
## Evaluation results
## Environmental impact
Almost none.
- **Hardware type:** RTX 3090, A6000
- **Hours used:** 600
- **Cloud provider:** None
- **Training location:** Japan
- **Carbon emitted:** Not much
## References
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written by Alfred Increment, based on the one for [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
|
8d68b2b2b518394033bf1b8ac6269daa
|
it5/mt5-small-informal-to-formal
|
it5
|
mt5
| 11 | 4 |
transformers
| 0 |
text2text-generation
| true | true | true |
apache-2.0
|
['it']
|
['yahoo/xformal_it']
|
{'emissions': '17g', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['italian', 'sequence-to-sequence', 'style-transfer', 'formality-style-transfer']
| true | true | true | 1,798 | false |
# mT5 Small for Informal-to-formal Style Transfer 🧐
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines:
```python
from transformers import pipeline
i2f = pipeline("text2text-generation", model='it5/mt5-small-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-informal-to-formal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
f3ed241957699cbe2b72788d0ed8b823
|
autoevaluate/image-multi-class-classification
|
autoevaluate
|
swin
| 20 | 60 |
transformers
| 1 |
image-classification
| true | false | false |
apache-2.0
| null |
['mnist', 'autoevaluate/mnist-sample']
| null | 3 | 3 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,336 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0556
- Accuracy: 0.9833
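A minimal inference sketch using the lower-level `transformers` classes (assuming a recent `transformers` version with `AutoImageProcessor`; the file name is illustrative):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "autoevaluate/image-multi-class-classification"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("digit.png").convert("RGB")  # hypothetical MNIST-style image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```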
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3743 | 1.0 | 422 | 0.0556 | 0.9833 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
f151ab75be8f1f92ad88af604bfb3d00
|
itsGanni/Canadian_Armed_Forces-clustered
|
itsGanni
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,858 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Canadian_Armed_Forces-clustered
This model is a fine-tuned version of [nandysoham/0-clustered](https://huggingface.co/nandysoham/0-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7260
- Train End Logits Accuracy: 0.8160
- Train Start Logits Accuracy: 0.7292
- Validation Loss: 0.5889
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.6000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.7260 | 0.8160 | 0.7292 | 0.5889 | 1.0 | 0.6000 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
c7bd01e05282e90f88dceaa99153aa6c
|
JeanneRbs/ddpm-butterflies-128
|
JeanneRbs
| null | 14 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,205 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
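Until the TODO above is filled in, a minimal sketch of the usual `diffusers` API for an unconditional DDPM checkpoint (assuming the repository contains a standard `DDPMPipeline`):
```python
from diffusers import DDPMPipeline

pipe = DDPMPipeline.from_pretrained("JeanneRbs/ddpm-butterflies-128")
image = pipe().images[0]  # one unconditional 128x128 sample
image.save("butterfly.png")
```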
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/JeanneRbs/ddpm-butterflies-128/tensorboard?#scalars)
|
242eb5b31866c370fec67ec59e2d1f42
|
rmihaylov/roberta-base-use-qa-theseus-bg
|
rmihaylov
|
xlm-roberta
| 9 | 4 |
transformers
| 0 |
sentence-similarity
| true | false | false |
mit
|
['bg']
|
['oscar', 'chitanka', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['torch']
| false | true | true | 6,100 | false |
# ROBERTA BASE (cased) trained on private Bulgarian-English parallel data
This is a multilingual RoBERTa model. It can be used to create embeddings of Bulgarian sentences.
Using the ideas from [Sentence-BERT](https://arxiv.org/abs/2004.09813), the training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence.
The teacher model is the [USE model by Google](https://aclanthology.org/D18-2029/).
This model is cased: it does make a difference between bulgarian and Bulgarian.
It was trained on private Bulgarian-English parallel data.
Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925).
### How to use
Here is how to use this model in PyTorch:
```python
>>> import scipy
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>>
>>> model = AutoModel.from_pretrained('rmihaylov/roberta-base-use-qa-theseus-bg')
>>> tokenizer = AutoTokenizer.from_pretrained('rmihaylov/roberta-base-use-qa-theseus-bg')
>>>
>>> query = "Какви са съставките на бисквитките?"
>>>
>>> answers = [
>>> "Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.",
>>> "Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.",
>>> "В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат бисквити.",
>>> "Бисквитите Chewier понякога се наричат бисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.",
>>> "Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.",
>>> "Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.",
>>> "Бисквитките често се сервират с напитки като мляко, кафе или чай.",
>>> "Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.",
>>> "Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.",
>>> "Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).",
>>> "Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. Температурата на фурната варира от 250 до 350 градуса.",
>>> "Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.",
>>> ]
>>>
>>> query_embedding = model.question(**tokenizer.encode_plus(query, return_tensors='pt')).detach().numpy()[0]
>>>
>>> corpus, corpus_embeddings = [], []
>>> for answer in answers:
>>> value_inputs = tokenizer.encode_plus(answer, answer, return_tensors='pt')
>>> embedding = model.answer(**value_inputs).detach().numpy()[0]
>>> corpus.append(answer)
>>> corpus_embeddings.append(embedding)
>>>
>>> distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]
>>>
>>> results = zip(range(len(distances)), distances)
>>> results = sorted(results, key=lambda x: x[1])
>>>
>>> print([[corpus[idx].strip(), (1.0 - distance)] for idx, distance in results])
[['Бисквитките обикновено съдържат брашно, захар и някакъв вид масло или мазнини. Те могат да включват други съставки като стафиди, овес, шоколадов чипс, ядки и др.',
0.5449754306536151],
['Фабричните бисквитки се продават в магазини за хранителни стоки, магазини за удобство и автомати.',
0.5049509545814316],
['В повечето англоговорящи страни, с изключение на САЩ и Канада, хрупкавите бисквитки се наричат \u200b\u200bбисквити.',
0.5029661338050297],
['Бисквитките или бисквитите могат да се произвеждат масово във фабрики, направени в малки пекарни или домашно приготвени.',
0.4991678233218718],
['Вариантите за бисквити или бисквити включват сандвич бисквити, като крем крем, Jammie Dodgers, Bourbons и Oreos, с пълнеж от ружа или конфитюр и понякога потопени в шоколад или друго сладко покритие.',
0.49050297326146386],
['Повечето бисквитки със среден размер, ако са направени със захар, брашно и скъсяване, ще съдържат между 100 и 200 калории.',
0.48950875441294106],
['Бисквитката е печена или варена храна, която обикновено е малка, плоска и сладка.',
0.48646309549536737],
['Бисквитите Chewier понякога се наричат \u200b\u200bбисквитки дори в Обединеното кралство. Някои бисквитки могат също да бъдат назовавани според формата им, като квадратчета с дата или барове.',
0.4840599482604815],
['Cookie Monster е Muppet в дългогодишното детско телевизионно шоу Sesame Street, който е най-известен с ненаситния си апетит към бисквитките и известните си фрази за ядене, като „Me want cookie!“, „Me eat cookie!“ (или просто „COOKIE!“) и „Om nom nom nom“ (казано през уста, пълна с храна).',
0.45209677893728206],
['Домашните бисквитки обикновено се правят от тесто, оформено на малки топчета и пуснато върху лист с бисквитки. След това се пекат във фурна за 5 до 15 минути, в зависимост от рецептата. Температурата на фурната варира от 250 до 350 градуса.',
0.4511516464302119],
['Бисквитките често се сервират с напитки като мляко, кафе или чай.',
0.42364528401677803],
['Американската употреба произлиза от холандското koekje „малка торта“, което е умалително от „koek“ („торта“), което произлиза от средно холандската дума „koke“.',
0.3267314582662877]]
```
|
f8e4df03a46fa7d840bf5bd8dc049097
|
inhee/m2m100_418M-finetuned-ko-to-en3
|
inhee
|
m2m_100
| 12 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,499 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en3
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5175
- Bleu: 75.215
- Gen Len: 9.726
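A minimal translation sketch with the standard M2M100 classes from `transformers` (the Korean input is illustrative):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "inhee/m2m100_418M-finetuned-ko-to-en3"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "ko"
encoded = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")  # illustrative input
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```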
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 0.99 | 103 | 2.7756 | 8.9955 | 9.425 |
| No log | 1.99 | 206 | 0.7248 | 63.7645 | 9.6421 |
| No log | 2.99 | 309 | 0.5175 | 75.215 | 9.726 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bd35792f34bb8807abb641e7c78ac236
|
platzi/platzi-vit-model-orlando-murcia
|
platzi
|
vit
| 9 | 4 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['beans']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,240 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-orlando-murcia
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0532
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0776 | 3.85 | 500 | 0.0532 | 0.9850 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
53231400fbe0d656d5cd584188e0ffd7
|
SerdarHelli/SDF-StyleGAN-3D
|
SerdarHelli
| null | 3 | 0 | null | 3 | null | false | false | false |
other
| null |
['shapenet']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['Shape modeling', 'Volumetric models']
| false | true | true | 1,419 | false |
### Model Description
- SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
- Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin, 2022
SDF-StyleGAN is a deep-learning model for 3D shape generation that is based on StyleGAN2 and operates on signed distance fields (SDFs). The approach aims to minimize the visual and geometric differences between the generated shapes and a collection of existing shapes.
### Documents
- [GitHub Repo](https://github.com/Zhengxinyang/SDF-StyleGAN)
- [Paper - SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation](https://arxiv.org/pdf/2206.12055.pdf)
### Datasets
ShapeNet is a comprehensive 3D shape dataset created for research in computer graphics, computer vision, robotics and related disciplines.
- [Offical Dataset of ShapeNet](https://shapenet.org/)
- [author's data preparation script](https://github.com/Zhengxinyang/SDF-StyleGAN)
- [author's training data](https://pan.baidu.com/s/1nVS7wlcOz62nYBgjp_M8Yg?pwd=oj1b)
### How to use
Training snippets are published in the official GitHub repository linked above.
### BibTeX Entry and Citation Info
```
@inproceedings{zheng2022sdfstylegan,
title = {SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation},
author = {Zheng, Xin-Yang and Liu, Yang and Wang, Peng-Shuai and Tong, Xin},
booktitle = {Comput. Graph. Forum (SGP)},
year = {2022},
}
```
|
09538bce1a6cacf2a88e719473e546cd
|
jonatasgrosman/exp_w2v2r_en_xls-r_age_teens-10_sixties-0_s807
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 476 | false |
# exp_w2v2r_en_xls-r_age_teens-10_sixties-0_s807
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
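A minimal transcription sketch with HuggingSound (assuming its standard `SpeechRecognitionModel` API; the audio path is illustrative):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_xls-r_age_teens-10_sixties-0_s807")
audio_paths = ["/path/to/sample.wav"]  # illustrative; input must be sampled at 16kHz
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```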
|
6274cc995a405a6db75584fc91160cfd
|
nikhil6041/wav2vec2-commonvoice-hindi
|
nikhil6041
|
wav2vec2
| 26 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,372 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-commonvoice-hindi
This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- Wer: 0.6763
## Model description
More information needed
## Intended uses & limitations
More information needed
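A minimal inference sketch (assuming the standard `transformers` ASR pipeline; the audio path is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nikhil6041/wav2vec2-commonvoice-hindi")
result = asr("hindi_clip.wav")  # illustrative path; 16kHz mono audio works best
print(result["text"])
```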
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 20.0 | 100 | 0.8801 | 0.6754 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
690eaea6d040d9e6f782ce2e4c1b4baf
|
morahil/wav2vec2-hindi-new-3
|
morahil
|
wav2vec2
| 12 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,281 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hindi-new-3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.1206
- eval_wer: 0.8949
- eval_runtime: 20.2358
- eval_samples_per_second: 19.767
- eval_steps_per_second: 2.471
- epoch: 25.8
- step: 1600
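For reference, the `eval_wer` figure above is a word error rate; a hedged sketch of recomputing such a score with the `evaluate` library (the transcripts shown are illustrative):
```python
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")
predictions = ["model transcript here"]     # illustrative model outputs
references = ["reference transcript here"]  # illustrative ground truth
print(wer_metric.compute(predictions=predictions, references=references))
```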
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
b809926a3e7c7791df3e6f4b642c39b5
|
research-backup/t5-small-subjqa-vanilla-electronics-qg
|
research-backup
|
t5
| 34 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_subjqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 4,056 | false |
# Model Card of `research-backup/t5-small-subjqa-vanilla-electronics-qg`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: electronics) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (electronics)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-subjqa-vanilla-electronics-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-subjqa-vanilla-electronics-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-electronics-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json)
| | Score | Type | Dataset |
|:-----------|--------:|:------------|:-----------------------------------------------------------------|
| BERTScore | 52.85 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1 | 2.89 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2 | 0.7 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3 | 0 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4 | 0 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR | 2.64 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 49.57 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L | 2.38 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: electronics
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-electronics-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
04d8a2ef1dc8a45b5acf88bb9484cbd7
|
david-whittaker-td/finetuning-sentiment-model-3000-samples
|
david-whittaker-td
|
distilbert
| 19 | 12 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,053 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2973
- Accuracy: 0.88
- F1: 0.8808
## Model description
More information needed
## Intended uses & limitations
More information needed
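A minimal inference sketch (assuming the standard `transformers` text-classification pipeline; the review text is illustrative):
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="david-whittaker-td/finetuning-sentiment-model-3000-samples",
)
print(sentiment("A surprisingly tender and funny film."))  # illustrative IMDb-style review
```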
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
1cb6f84ab73fa51c56bf8d78ec8f72d1
|
espnet/kan-bayashi_csmsc_fastspeech
|
espnet
| null | 6 | 6 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['zh']
|
['csmsc']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,797 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_fastspeech`
♻️ Imported from https://zenodo.org/record/3986227/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
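Until the official demo is published, here is a hedged sketch of the usual ESPnet2 inference path (`Text2Speech.from_pretrained` and the `wav` output key are assumptions based on current ESPnet2 releases; it requires `espnet` plus `espnet_model_zoo`):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Build the synthesizer directly from the Hub model tag (assumed helper)
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_fastspeech")

# Synthesize Mandarin text and save the waveform
output = tts("春江潮水连海平")
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```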
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
2d9b5bf2314e2863f4ab6c3a3f0c55b7
|
Helsinki-NLP/opus-mt-pis-es
|
Helsinki-NLP
|
marian
| 10 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-pis-es
* source languages: pis
* target languages: es
* OPUS readme: [pis-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pis.es | 24.1 | 0.421 |
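A minimal translation sketch (assuming the standard `transformers` Marian API; the source sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pis-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Pijin source sentence here"], return_tensors="pt", padding=True)  # illustrative input
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```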
|
868b07f87077d8c4780e29775f7d27f6
|
Matthijs/mobilenet_v1_0.75_192
|
Matthijs
|
mobilenet_v1
| 5 | 9 |
transformers
| 0 |
image-classification
| true | false | false |
other
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-classification']
| false | true | true | 2,397 | false |
# MobileNet V1
MobileNet V1 model pre-trained on ImageNet-1k at resolution 192x192. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al., and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).
Disclaimer: The team releasing MobileNet V1 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileNetV1FeatureExtractor, MobileNetV1ForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileNetV1FeatureExtractor.from_pretrained("Matthijs/mobilenet_v1_0.75_192")
model = MobileNetV1ForImageClassification.from_pretrained("Matthijs/mobilenet_v1_0.75_192")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
|
497e40f4075ef92906d9ba306e470feb
|
kadirnar/osnet_x1_0_imagenet
|
kadirnar
| null | 3 | 0 | null | 0 |
object-detection
| false | false | false |
gpl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['object-detection', 'computer-vision', 'sort', 'tracker', 'osnet']
| false | true | true | 2,685 | false |
<div align="center">
<h1>
Torchreid-Pip: Packaged version of Torchreid
</h1>
<h4>
<img width="700" alt="teaser" src="https://raw.githubusercontent.com/goksenin-uav/torchreid-pip/main/doc/logo.png">
</h4>
</div>
This repo is a packaged version of the [Torchreid](https://github.com/KaiyangZhou/deep-person-reid) library.
### Installation
```
pip install torchreid
```
### Model Description
- [Learning Generalisable Omni-Scale Representations for Person Re-Identification](https://arxiv.org/abs/1905.00953)
- [Omni-Scale Feature Learning for Person Re-Identification](https://arxiv.org/abs/1910.06827)
- [Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch](https://arxiv.org/abs/1910.10093)
### Overview
##### 1. Import ``torchreid``
```python
import torchreid
```
##### 2. Load data manager
```python
datamanager = torchreid.data.ImageDataManager(
root="reid-data",
sources="market1501",
targets="market1501",
height=256,
width=128,
batch_size_train=32,
batch_size_test=100,
transforms=["random_flip", "random_crop"]
)
```
##### 3. Build model, optimizer and lr_scheduler
```python
model = torchreid.models.build_model(
name="resnet50",
num_classes=datamanager.num_train_pids,
loss="softmax",
pretrained=True
)
model = model.cuda()
optimizer = torchreid.optim.build_optimizer(
model,
optim="adam",
lr=0.0003
)
scheduler = torchreid.optim.build_lr_scheduler(
optimizer,
lr_scheduler="single_step",
stepsize=20
)
```
##### 4. Build engine
```python
engine = torchreid.engine.ImageSoftmaxEngine(
datamanager,
model,
optimizer=optimizer,
scheduler=scheduler,
label_smooth=True
)
```
##### 5. Run training and test
```python
engine.run(
save_dir="log/resnet50",
max_epoch=60,
eval_freq=10,
print_freq=10,
test_only=False
)
```
Citation
---------
If you use this code or the models in your research, please give credit to the following papers:
```bibtex
@article{torchreid,
title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
author={Zhou, Kaiyang and Xiang, Tao},
journal={arXiv preprint arXiv:1910.10093},
year={2019}
}
@inproceedings{zhou2019osnet,
title={Omni-Scale Feature Learning for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
booktitle={ICCV},
year={2019}
}
@article{zhou2021osnet,
title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
journal={TPAMI},
year={2021}
}
```
|
dcdfe865eedbfc8baf0ac64edc57a533
|
tokeron/TRBLLmaker
|
tokeron
|
gpt2
| 14 | 6 |
transformers
| 2 |
text-generation
| true | false | false |
afl-3.0
|
['eng']
|
['TRBLLmaker']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generative-model']
| false | true | true | 1,867 | false |
[Link to the arXiv paper](https://arxiv.org/pdf/2212.04917.pdf)
[Datasets](https://huggingface.co/datasets/MorVentura/TRBLLmaker)
### About Us
Created by [Mor Ventura](https://www.linkedin.com/in/mor-ventura/) and [Michael Toker](https://www.linkedin.com/in/mnlp/).
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint_gpt2-medium_lyrics_meaning_2022-03-10-16-16-32
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5896
- Accuracy: 0.4923
## Model description
More information needed
## Intended uses & limitations
More information needed
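A minimal generation sketch (assuming the standard `transformers` text-generation pipeline; the prompt format is an illustrative guess, not documented by the authors):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="tokeron/TRBLLmaker")
prompt = "lyrics: We are the champions, my friends. meaning:"  # illustrative prompt format
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```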
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7781 | 1.01 | 128 | 2.6284 | 0.4875 |
| 2.6217 | 2.02 | 256 | 2.6022 | 0.4908 |
| 2.569 | 3.02 | 384 | 2.5928 | 0.4917 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.11.6
|
3a547f9a898920b243ec3f8bf102dcc3
|
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_mrpc_128
|
gokuls
|
mobilebert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,614 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_data_aug_mrpc_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0
- Accuracy: 1.0
- F1: 1.0
- Combined Score: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.2019 | 1.0 | 1959 | 0.0211 | 0.9926 | 0.9947 | 0.9936 |
| 0.0464 | 2.0 | 3918 | 0.0122 | 0.9951 | 0.9964 | 0.9958 |
| 0.0307 | 3.0 | 5877 | 0.0049 | 0.9975 | 0.9982 | 0.9979 |
| 0.0223 | 4.0 | 7836 | 0.0041 | 0.9975 | 0.9982 | 0.9979 |
| 0.0179 | 5.0 | 9795 | 0.0006 | 1.0 | 1.0 | 1.0 |
| 0.0147 | 6.0 | 11754 | 0.0005 | 1.0 | 1.0 | 1.0 |
| 0.012 | 7.0 | 13713 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0086 | 8.0 | 15672 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0064 | 9.0 | 17631 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0058 | 10.0 | 19590 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0043 | 11.0 | 21549 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0035 | 12.0 | 23508 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.003 | 13.0 | 25467 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0024 | 14.0 | 27426 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0018 | 15.0 | 29385 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0017 | 16.0 | 31344 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 17.0 | 33303 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0014 | 18.0 | 35262 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.001 | 19.0 | 37221 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0008 | 20.0 | 39180 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 21.0 | 41139 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0006 | 22.0 | 43098 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 23.0 | 45057 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 24.0 | 47016 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 25.0 | 48975 | 0.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 26.0 | 50934 | 0.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
593c74557ccd91e058e98bdbf48b2613
|
Intel/roberta-base-mrpc-int8-dynamic
|
Intel
|
roberta
| 5 | 22 |
transformers
| 0 |
text-classification
| false | false | false |
mit
|
['en']
|
['mrpc']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classfication', 'int8', 'Intel® Neural Compressor', 'PostTrainingDynamic', 'onnx']
| false | true | true | 663 | false |
# INT8 roberta base finetuned MRPC
## Post-training dynamic quantization
### ONNX
This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [Intel/roberta-base-mrpc](https://huggingface.co/Intel/roberta-base-mrpc).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9085|0.9138|
| **Model size (MB)** |122|476|
#### Load ONNX model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained('Intel/roberta-base-mrpc-int8-dynamic')
```
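From there, a hedged sketch of running MRPC paraphrase classification with the quantized model (the sentence pair is illustrative; `optimum` ORT models plug into the standard `transformers` pipeline):
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "Intel/roberta-base-mrpc-int8-dynamic"
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf({"text": "The company expects revenue to rise.",
           "text_pair": "Revenue is expected to increase, the company said."}))  # illustrative pair
```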
|
e4f42c15c3f714bbaa37e3ee5a76f3f9
|
GioReg/bertMULTINEGsentiment
|
GioReg
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 947 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertMULTINEGsentiment
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
e8b9485723188c047a527e6a41936d77
|
choeunsoo/bert-base-uncased-finetuned-cola
|
choeunsoo
|
bert
| 13 | 14 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,553 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8347
- Matthews Correlation: 0.5914
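For context, Matthews correlation (the CoLA metric above) ranges from -1 to 1; a hedged sketch of computing it offline with scikit-learn (the labels shown are illustrative):
```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 0, 1, 1, 0]  # illustrative acceptability labels
y_pred = [1, 0, 0, 1, 0]  # illustrative model predictions
print(matthews_corrcoef(y_true, y_pred))
```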
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4921 | 1.0 | 535 | 0.5622 | 0.4713 |
| 0.301 | 2.0 | 1070 | 0.4454 | 0.5611 |
| 0.1999 | 3.0 | 1605 | 0.6690 | 0.5521 |
| 0.1437 | 4.0 | 2140 | 0.7627 | 0.5851 |
| 0.0915 | 5.0 | 2675 | 0.8347 | 0.5914 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
3d1d35de5f259dcceba0ca4d7fd62368
|