Dataset schema (column name, type, and observed range of values):

| Column | Type and observed range |
|---|---|
| pipeline_tag | string, 48 classes |
| library_name | string, 198 classes |
| text | string, 1–900k chars |
| metadata | string, 2–438k chars |
| id | string, 5–122 chars |
| last_modified | null |
| tags | list, 1–1.84k items |
| sha | null |
| created_at | string, 25 chars |
| arxiv | list, 0–201 items |
| languages | list, 0–1.83k items |
| tags_str | string, 17–9.34k chars |
| text_str | string, 0–389k chars |
| text_lists | list, 0–722 items |
| processed_texts | list, 1–723 items |
sentence-similarity
|
sentence-transformers
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
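For semantic search or clustering, the embeddings can be compared with cosine similarity. A minimal sketch using `sentence_transformers.util` (the query and corpus strings below are illustrative, and on older library versions the function may be named `util.pytorch_cos_sim`):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

# Illustrative query and corpus; replace '{MODEL_NAME}' with the actual model id.
query_embedding = model.encode("How do I reset my password?", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["Click 'Forgot password' on the login page.", "Our office opens at 9 am."],
    convert_to_tensor=True,
)

# Cosine similarity between the query and each corpus sentence (a 1 x 2 tensor).
print(util.cos_sim(query_embedding, corpus_embeddings))
```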
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
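If you work with the raw embeddings this way, it is common to L2-normalize them so that dot products equal cosine similarities. A short sketch continuing from the snippet above (it reuses `sentence_embeddings` and the `torch` import):
```python
import torch.nn.functional as F

# L2-normalize so that the dot product between two rows equals their cosine similarity.
normalized_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
similarity_matrix = normalized_embeddings @ normalized_embeddings.T  # 2 x 2 for the two example sentences
print(similarity_matrix)
```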
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
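Assembled into code, these parameters correspond roughly to the `fit()` call below. This is a hedged reconstruction: the training pairs, labels, and evaluator data are not part of this card, and the base checkpoint name is inferred from the repository id.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')

# Placeholder pairs with similarity labels in [0, 1]; the real training data is not published here.
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=15)
train_loss = losses.CosineSimilarityLoss(model)

# Placeholder evaluation data for the similarity evaluator named in the card.
evaluator = EmbeddingSimilarityEvaluator(["sentence a"], ["sentence b"], [0.8])

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=30,
    evaluation_steps=1,
    warmup_steps=33,
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler='WarmupLinear',
    optimizer_params={'lr': 2e-05},
)
```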
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
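The same two-module stack can be built explicitly from `sentence_transformers.models`. A sketch, assuming `xlm-roberta-base`-style weights as the encoder (the exact base checkpoint is not stated in this section):
```python
from sentence_transformers import SentenceTransformer, models

# XLM-RoBERTa encoder truncated at 128 tokens, followed by mean pooling over token embeddings.
word_embedding_model = models.Transformer('xlm-roberta-base', max_seq_length=128)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768 for this base model
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)
```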
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_30_Epochs
| null |
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 11 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_50_Epochs
| null |
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 11 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_5_Epochs
| null |
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# {MODEL_NAME}
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 11 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 11 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww-squad
This model is a fine-tuned version of [jgammack/MTL-bert-base-uncased-ww](https://huggingface.co/jgammack/MTL-bert-base-uncased-ww) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
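No usage snippet is included in the card; a minimal sketch with the `transformers` question-answering pipeline (the question and context are made-up examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/MTL-bert-base-uncased-ww-squad")

# Illustrative question/context pair; any SQuAD-style inputs work the same way.
result = qa(
    question="What was detected in the door closure system?",
    context="Wind noise was detected coming from the car door closure system during testing.",
)
print(result["answer"], result["score"])
```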
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
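Expressed as a `transformers.TrainingArguments` configuration, these settings look roughly like the sketch below (dataset loading, preprocessing, and the `Trainer` call are omitted, and the output directory name is an assumption):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; library defaults already cover the Adam
# betas/epsilon and the linear learning-rate scheduler.
training_args = TrainingArguments(
    output_dir="MTL-bert-base-uncased-ww-squad",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```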
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "MTL-bert-base-uncased-ww-squad", "results": []}]}
|
jgammack/MTL-bert-base-uncased-ww-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# MTL-bert-base-uncased-ww-squad
This model is a fine-tuned version of jgammack/MTL-bert-base-uncased-ww on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# MTL-bert-base-uncased-ww-squad\n\nThis model is a fine-tuned version of jgammack/MTL-bert-base-uncased-ww on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MTL-bert-base-uncased-ww-squad\n\nThis model is a fine-tuned version of jgammack/MTL-bert-base-uncased-ww on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5261
## Model description
More information needed
## Intended uses & limitations
More information needed
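Beyond masked-token prediction, this checkpoint can serve as a domain-adapted encoder loaded under a task head for further fine-tuning. A hedged sketch of attaching a fresh extractive-QA head (the head weights are newly initialized and still need task-specific training, e.g. on SQuAD):
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Loads the adapted encoder weights; the span-prediction head on top is randomly initialized.
tokenizer = AutoTokenizer.from_pretrained("jgammack/MTL-bert-base-uncased-ww")
model = AutoModelForQuestionAnswering.from_pretrained("jgammack/MTL-bert-base-uncased-ww")
print(model.config.model_type, sum(p.numel() for p in model.parameters()))
```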
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2964 | 1.0 | 99 | 2.9560 |
| 3.0419 | 2.0 | 198 | 2.8336 |
| 2.8979 | 3.0 | 297 | 2.8009 |
| 2.8815 | 4.0 | 396 | 2.7394 |
| 2.8373 | 5.0 | 495 | 2.6813 |
| 2.741 | 6.0 | 594 | 2.6270 |
| 2.6877 | 7.0 | 693 | 2.5216 |
| 2.6823 | 8.0 | 792 | 2.5485 |
| 2.6326 | 9.0 | 891 | 2.5690 |
| 2.5976 | 10.0 | 990 | 2.6336 |
| 2.6009 | 11.0 | 1089 | 2.5919 |
| 2.5615 | 12.0 | 1188 | 2.4264 |
| 2.5826 | 13.0 | 1287 | 2.5562 |
| 2.5693 | 14.0 | 1386 | 2.5529 |
| 2.5494 | 15.0 | 1485 | 2.5300 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "MTL-bert-base-uncased-ww", "results": []}]}
|
jgammack/MTL-bert-base-uncased-ww
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
MTL-bert-base-uncased-ww
========================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5261
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 7
* eval\_batch\_size: 7
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4409 | 1.0 | 99 | 2.1982 |
| 2.2905 | 2.0 | 198 | 2.1643 |
| 2.1974 | 3.0 | 297 | 2.1168 |
| 2.15 | 4.0 | 396 | 2.0023 |
| 2.0823 | 5.0 | 495 | 2.0199 |
| 2.0752 | 6.0 | 594 | 1.9061 |
| 2.0408 | 7.0 | 693 | 1.9770 |
| 1.9984 | 8.0 | 792 | 1.9322 |
| 1.9933 | 9.0 | 891 | 1.9167 |
| 1.9806 | 10.0 | 990 | 1.9652 |
| 1.9436 | 11.0 | 1089 | 1.9308 |
| 1.9491 | 12.0 | 1188 | 1.9064 |
| 1.929 | 13.0 | 1287 | 1.8831 |
| 1.9096 | 14.0 | 1386 | 1.8927 |
| 1.9032 | 15.0 | 1485 | 1.9117 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "MTL-bert-base-uncased", "results": []}]}
|
jgammack/MTL-bert-base-uncased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
MTL-bert-base-uncased
=====================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9283
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 7
* eval\_batch\_size: 7
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased-squad
This model is a fine-tuned version of [jgammack/MTL-distilbert-base-uncased](https://huggingface.co/jgammack/MTL-distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "MTL-distilbert-base-uncased-squad", "results": []}]}
|
jgammack/MTL-distilbert-base-uncased-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# MTL-distilbert-base-uncased-squad
This model is a fine-tuned version of jgammack/MTL-distilbert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# MTL-distilbert-base-uncased-squad\n\nThis model is a fine-tuned version of jgammack/MTL-distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MTL-distilbert-base-uncased-squad\n\nThis model is a fine-tuned version of jgammack/MTL-distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0874
## Model description
More information needed
## Intended uses & limitations
More information needed
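As a usage sketch, the masked-LM head can be queried directly with `AutoModelForMaskedLM`; the example sentence below is illustrative, not taken from the card:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "jgammack/MTL-distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("The car [MASK] was difficult to close.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and report the five most likely replacement tokens.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```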
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5593 | 1.0 | 99 | 2.3163 |
| 2.4346 | 2.0 | 198 | 2.2918 |
| 2.3377 | 3.0 | 297 | 2.2345 |
| 2.2953 | 4.0 | 396 | 2.1463 |
| 2.2296 | 5.0 | 495 | 2.1761 |
| 2.2235 | 6.0 | 594 | 2.0721 |
| 2.1878 | 7.0 | 693 | 2.1460 |
| 2.1569 | 8.0 | 792 | 2.0856 |
| 2.1455 | 9.0 | 891 | 2.1039 |
| 2.1391 | 10.0 | 990 | 2.1112 |
| 2.1056 | 11.0 | 1089 | 2.0694 |
| 2.1076 | 12.0 | 1188 | 2.0501 |
| 2.0919 | 13.0 | 1287 | 2.0484 |
| 2.0669 | 14.0 | 1386 | 2.0342 |
| 2.0595 | 15.0 | 1485 | 2.0802 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "MTL-distilbert-base-uncased", "results": []}]}
|
jgammack/MTL-distilbert-base-uncased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
MTL-distilbert-base-uncased
===========================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0874
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 7
* eval\_batch\_size: 7
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
## Model description
More information needed
## Intended uses & limitations
More information needed
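A minimal usage sketch with the `fill-mask` pipeline; note that RoBERTa tokenizers use `<mask>` rather than `[MASK]`, and the example sentence is illustrative:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="jgammack/MTL-roberta-base")

# RoBERTa-based checkpoints expect the <mask> token rather than [MASK].
for prediction in unmasker("The door <mask> needs to be replaced."):
    print(prediction["token_str"], round(prediction["score"], 3))
```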
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8338 | 1.0 | 98 | 1.6750 |
| 1.7732 | 2.0 | 196 | 1.6229 |
| 1.7208 | 3.0 | 294 | 1.6131 |
| 1.6917 | 4.0 | 392 | 1.5936 |
| 1.6579 | 5.0 | 490 | 1.6183 |
| 1.6246 | 6.0 | 588 | 1.6015 |
| 1.6215 | 7.0 | 686 | 1.5248 |
| 1.5743 | 8.0 | 784 | 1.5454 |
| 1.5621 | 9.0 | 882 | 1.5925 |
| 1.5652 | 10.0 | 980 | 1.5213 |
| 1.5615 | 11.0 | 1078 | 1.4845 |
| 1.5349 | 12.0 | 1176 | 1.5443 |
| 1.5165 | 13.0 | 1274 | 1.5304 |
| 1.5164 | 14.0 | 1372 | 1.4773 |
| 1.5293 | 15.0 | 1470 | 1.5537 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "MTL-roberta-base", "results": []}]}
|
jgammack/MTL-roberta-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
MTL-roberta-base
================
This model is a fine-tuned version of roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4859
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 7
* eval\_batch\_size: 7
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1256
## Model description
More information needed
## Intended uses & limitations
More information needed
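The hosted widget prompt in this card's metadata can be reproduced locally with the `fill-mask` pipeline; a minimal sketch:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="jgammack/SAE-bert-base-uncased")

# Same prompt as the model's hosted inference widget.
for prediction in unmasker("Wind [MASK] was detected coming from the car door closure system."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")
```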
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5967 | 1.0 | 80 | 2.3409 |
| 2.4881 | 2.0 | 160 | 2.2707 |
| 2.3567 | 3.0 | 240 | 2.3134 |
| 2.3413 | 4.0 | 320 | 2.2592 |
| 2.3006 | 5.0 | 400 | 2.2351 |
| 2.2568 | 6.0 | 480 | 2.2556 |
| 2.2303 | 7.0 | 560 | 2.2546 |
| 2.1892 | 8.0 | 640 | 2.1868 |
| 2.1851 | 9.0 | 720 | 2.2073 |
| 2.1738 | 10.0 | 800 | 2.1344 |
| 2.1673 | 11.0 | 880 | 2.1927 |
| 2.1518 | 12.0 | 960 | 2.1844 |
| 2.1142 | 13.0 | 1040 | 2.1466 |
| 2.1343 | 14.0 | 1120 | 2.2024 |
| 2.1332 | 15.0 | 1200 | 2.1035 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "widget": [{"text": "Wind [MASK] was detected coming from the car door closure system.", "example_title": "Closure system"}], "model-index": [{"name": "SAE-bert-base-uncased", "results": []}]}
|
jgammack/SAE-bert-base-uncased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
SAE-bert-base-uncased
=====================
This model is a fine-tuned version of bert-base-uncased on the jgammack/SAE-door-abstracts dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1256
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 7
* eval\_batch\_size: 7
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-distilbert-base-uncased-squad
This model is a fine-tuned version of [jgammack/SAE-distilbert-base-uncased](https://huggingface.co/jgammack/SAE-distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "SAE-distilbert-base-uncased-squad", "results": []}]}
|
jgammack/SAE-distilbert-base-uncased-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# SAE-distilbert-base-uncased-squad
This model is a fine-tuned version of jgammack/SAE-distilbert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# SAE-distilbert-base-uncased-squad\n\nThis model is a fine-tuned version of jgammack/SAE-distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# SAE-distilbert-base-uncased-squad\n\nThis model is a fine-tuned version of jgammack/SAE-distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
# SAE-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2970
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5323 | 1.0 | 37 | 2.4503 |
| 2.4968 | 2.0 | 74 | 2.4571 |
| 2.4688 | 3.0 | 111 | 2.4099 |
| 2.419 | 4.0 | 148 | 2.3343 |
| 2.4229 | 5.0 | 185 | 2.3072 |
| 2.4067 | 6.0 | 222 | 2.2927 |
| 2.3877 | 7.0 | 259 | 2.2836 |
| 2.374 | 8.0 | 296 | 2.3767 |
| 2.3582 | 9.0 | 333 | 2.2493 |
| 2.356 | 10.0 | 370 | 2.2847 |
| 2.3294 | 11.0 | 407 | 2.3234 |
| 2.3358 | 12.0 | 444 | 2.2660 |
| 2.3414 | 13.0 | 481 | 2.2887 |
| 2.3154 | 14.0 | 518 | 2.3737 |
| 2.311 | 15.0 | 555 | 2.2686 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
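As a usage sketch (not part of the original card), the masked-language-model example from the widget metadata can be reproduced with the `transformers` fill-mask pipeline:
```python
from transformers import pipeline

# Minimal sketch: masked-token inference with the fine-tuned checkpoint.
fill = pipeline("fill-mask", model="jgammack/SAE-distilbert-base-uncased")

# Sentence taken from the widget example; [MASK] is DistilBERT's mask token.
for pred in fill("Wind noise was detected coming from the car [MASK] closure system."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```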
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "widget": [{"text": "Wind noise was detected coming from the car [MASK] closure system.", "example_title": "Closure system"}], "model-index": [{"name": "SAE-distilbert-base-uncased", "results": []}]}
|
jgammack/SAE-distilbert-base-uncased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
SAE-distilbert-base-uncased
===========================
This model is a fine-tuned version of distilbert-base-uncased on the jgammack/SAE-door-abstracts dataset.
It achieves the following results on the evaluation set:
* Loss: 2.2970
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 15
* eval\_batch\_size: 15
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 15\n* eval\\_batch\\_size: 15\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 15\n* eval\\_batch\\_size: 15\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base-squad
This model is a fine-tuned version of [jgammack/SAE-roberta-base](https://huggingface.co/jgammack/SAE-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
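A minimal usage sketch (inputs below are illustrative, not from the original card) showing manual span extraction with `AutoModelForQuestionAnswering`, as an alternative to the pipeline helper:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Hypothetical sketch: manual answer-span extraction.
tokenizer = AutoTokenizer.from_pretrained("jgammack/SAE-roberta-base-squad")
model = AutoModelForQuestionAnswering.from_pretrained("jgammack/SAE-roberta-base-squad")

question = "What was the model fine-tuned on?"  # illustrative input
context = "SAE-roberta-base was fine-tuned on the SQuAD dataset."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```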
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "SAE-roberta-base-squad", "results": []}]}
|
jgammack/SAE-roberta-base-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us
|
# SAE-roberta-base-squad
This model is a fine-tuned version of jgammack/SAE-roberta-base on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# SAE-roberta-base-squad\n\nThis model is a fine-tuned version of jgammack/SAE-roberta-base on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n",
"# SAE-roberta-base-squad\n\nThis model is a fine-tuned version of jgammack/SAE-roberta-base on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9847 | 1.0 | 79 | 1.8238 |
| 1.9142 | 2.0 | 158 | 1.8299 |
| 1.8613 | 3.0 | 237 | 1.7636 |
| 1.8384 | 4.0 | 316 | 1.8048 |
| 1.8193 | 5.0 | 395 | 1.7734 |
| 1.7985 | 6.0 | 474 | 1.7271 |
| 1.7758 | 7.0 | 553 | 1.8525 |
| 1.7611 | 8.0 | 632 | 1.7716 |
| 1.7599 | 9.0 | 711 | 1.7913 |
| 1.7118 | 10.0 | 790 | 1.7578 |
| 1.7003 | 11.0 | 869 | 1.7598 |
| 1.7072 | 12.0 | 948 | 1.6942 |
| 1.6511 | 13.0 | 1027 | 1.6955 |
| 1.6802 | 14.0 | 1106 | 1.7837 |
| 1.7048 | 15.0 | 1185 | 1.7377 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
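As an illustrative sketch only: because this is a RoBERTa checkpoint, masked-token inference uses `<mask>` rather than `[MASK]`; the example sentence is borrowed from the DistilBERT card's widget and is not part of this card:
```python
from transformers import pipeline

# Minimal sketch: RoBERTa checkpoints use "<mask>", so reuse tokenizer.mask_token
# instead of hard-coding the mask string.
fill = pipeline("fill-mask", model="jgammack/SAE-roberta-base")

sentence = f"Wind noise was detected coming from the car {fill.tokenizer.mask_token} closure system."
for pred in fill(sentence):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```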
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "SAE-roberta-base", "results": []}]}
|
jgammack/SAE-roberta-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
SAE-roberta-base
================
This model is a fine-tuned version of roberta-base on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6959
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 7
* eval\_batch\_size: 7
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 7\n* eval\\_batch\\_size: 7\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
sentence-similarity
|
sentence-transformers
|
# jgammack/distilbert-base-mean-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/distilbert-base-mean-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/distilbert-base-mean-pooling')
model = AutoModel.from_pretrained('jgammack/distilbert-base-mean-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/distilbert-base-mean-pooling)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jgammack/distilbert-base-mean-pooling
| null |
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jgammack/distilbert-base-mean-pooling
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# jgammack/distilbert-base-mean-pooling\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jgammack/distilbert-base-mean-pooling\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
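Expressed as `transformers.TrainingArguments`, these settings would look roughly like the sketch below; the output directory is an assumption and the original training script is not reproduced here:
```python
from transformers import TrainingArguments

# Sketch only: the hyperparameters above expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-squad",  # assumed name, not from the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision; requires a CUDA device
)
```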
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-squad", "results": []}]}
|
jgammack/distilbert-base-uncased-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-base-uncased-squad
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# distilbert-base-uncased-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
sentence-similarity
|
sentence-transformers
|
# jgammack/multi-qa-MTL-distilbert-base-uncased-40k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased-40k)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jgammack/multi-qa-MTL-distilbert-base-uncased-40k
| null |
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jgammack/multi-qa-MTL-distilbert-base-uncased-40k
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# jgammack/multi-qa-MTL-distilbert-base-uncased-40k\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jgammack/multi-qa-MTL-distilbert-base-uncased-40k\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# jgammack/multi-qa-MTL-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jgammack/multi-qa-MTL-distilbert-base-uncased
| null |
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jgammack/multi-qa-MTL-distilbert-base-uncased
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# jgammack/multi-qa-MTL-distilbert-base-uncased\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jgammack/multi-qa-MTL-distilbert-base-uncased\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# jgammack/multi-qa-SAE-distilbert-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-SAE-distilbert-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-SAE-distilbert-base')
model = AutoModel.from_pretrained('jgammack/multi-qa-SAE-distilbert-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-SAE-distilbert-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jgammack/multi-qa-SAE-distilbert-base-uncased
| null |
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jgammack/multi-qa-SAE-distilbert-base
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# jgammack/multi-qa-SAE-distilbert-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jgammack/multi-qa-SAE-distilbert-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# jgammack/multi-qa-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jgammack/multi-qa-distilbert-base-uncased
| null |
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jgammack/multi-qa-distilbert-base-uncased
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# jgammack/multi-qa-distilbert-base-uncased\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jgammack/multi-qa-distilbert-base-uncased\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
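A rough evaluation sketch on a slice of the SQuAD validation split, using the `squad` metric from the Datasets version listed above (the slice size and the use of the pipeline are illustrative choices, not the original evaluation setup):
```python
from datasets import load_dataset, load_metric
from transformers import pipeline

# Hypothetical sketch: score the model on a small slice of SQuAD validation data.
qa = pipeline("question-answering", model="jgammack/roberta-base-squad")
squad = load_dataset("squad", split="validation[:100]")  # slice size is arbitrary
metric = load_metric("squad")

predictions, references = [], []
for example in squad:
    output = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": output["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

# Reports exact match (EM) and F1, the standard SQuAD metrics.
print(metric.compute(predictions=predictions, references=references))
```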
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "roberta-base-squad", "results": []}]}
|
jgammack/roberta-base-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us
|
# roberta-base-squad
This model is a fine-tuned version of roberta-base on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# roberta-base-squad\n\nThis model is a fine-tuned version of roberta-base on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n",
"# roberta-base-squad\n\nThis model is a fine-tuned version of roberta-base on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
sentence-similarity
|
sentence-transformers
|
# This model is superseded by [https://github.com/ORNL/affinity_pred](https://github.com/ORNL/affinity_pred)
# jglaser/protein-ligand-mlp-1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).
Each member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
#pip install -U sentence-transformers
pip install git+https://github.com/jglaser/sentence-transformers.git@enable_mixed
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [{'protein': ["SEQVENCE"], 'ligand': ["c1ccccc1"]}]
model = SentenceTransformer('jglaser/protein-ligand-mlp-1')
embeddings = model.encode(sentences)
print(embeddings)
```
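Since the checkpoints are described as ensemble members trained with different seeds, a minimal sketch of using them as independent samples follows. Only the `-1` and `-2` members appear in this dump, the aggregation itself is an assumption, and the forked `sentence-transformers` install above is still required:
```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Sketch: treat ensemble members as independent samples of the predicted pIC50.
members = ["jglaser/protein-ligand-mlp-1", "jglaser/protein-ligand-mlp-2"]
sentences = [{'protein': ["SEQVENCE"], 'ligand': ["c1ccccc1"]}]

predictions = np.array([SentenceTransformer(name).encode(sentences) for name in members])
print("mean pIC50:", predictions.mean(axis=0))
print("uncertainty (std):", predictions.std(axis=0))
```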
## Evaluation Results
<!--- Describe how your model was evaluated -->
## Full Model Architecture
```
SentenceTransformer(
(0): Asym(
(protein-0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: BertModel
(protein-1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(protein-2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(ligand-0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(ligand-1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(ligand-2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
(1): Dense({'in_features': 1792, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(2): Dense({'in_features': 1000, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(3): Dense({'in_features': 1000, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(4): Dense({'in_features': 1000, 'out_features': 1, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
(5): Dense({'in_features': 1, 'out_features': 1, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Citing & Authors
- [Andrew E Blanchard](https://github.com/blnchrd)
- [John Gounley](https://github.com/gounley)
- [Debsindhu Bhowmik](https://github.com/debsindhu)
- [Mayanka Chandra Shekar](https://github.com/mayankachandrashekar)
- [Isaac Lyngaas](https://github.com/irlyngaas)
- Shang Gao
- Junqi Yin
- Aristeidis Tsaris
- Feiyi Wang
- [Jens Glaser](https://github.com/jglaser)
Find more information in our [bioRxiv preprint](https://www.biorxiv.org/content/10.1101/2021.12.10.471928v1)
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
jglaser/protein-ligand-mlp-1
| null |
[
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# This model is superseded by URL
# jglaser/protein-ligand-mlp-1
This is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).
Each member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
## Full Model Architecture
## Citing & Authors
- Andrew E Blanchard
- John Gounley
- Debsindhu Bhowmik
- Mayanka Chandra Shekar
- Isaac Lyngaas
- Shang Gao
- Junqi Yin
- Aristeidis Tsaris
- Feiyi Wang
- Jens Glaser
Find more information in our bioRxiv preprint
|
[
"# This model is superseded by URL",
"# jglaser/protein-ligand-mlp-1\n\nThis is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).\n\nEach member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results",
"## Full Model Architecture",
"## Citing & Authors\n- Andrew E Blanchard\n- John Gounley\n- Debsindhu Bhowmik\n- Mayanka Chandra Shekar\n- Isaac Lyngaas\n- Shang Gao\n- Junqi Yin\n- Aristeidis Tsaris\n- Feiyi Wang\n- Jens Glaser\n\nFind more information in our bioRxiv preprint"
] |
[
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# This model is superseded by URL",
"# jglaser/protein-ligand-mlp-1\n\nThis is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).\n\nEach member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results",
"## Full Model Architecture",
"## Citing & Authors\n- Andrew E Blanchard\n- John Gounley\n- Debsindhu Bhowmik\n- Mayanka Chandra Shekar\n- Isaac Lyngaas\n- Shang Gao\n- Junqi Yin\n- Aristeidis Tsaris\n- Feiyi Wang\n- Jens Glaser\n\nFind more information in our bioRxiv preprint"
] |
sentence-similarity
|
sentence-transformers
|
# This model is superseded by [https://github.com/ORNL/affinity_pred](https://github.com/ORNL/affinity_pred)
# jglaser/protein-ligand-mlp-2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).
Each member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
# note: this model requires the author's fork of sentence-transformers rather than the stock package
#pip install -U sentence-transformers
pip install git+https://github.com/jglaser/sentence-transformers.git@enable_mixed
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [{'protein': ["SEQVENCE"], 'ligand': ["c1ccccc1"]}]
model = SentenceTransformer('jglaser/protein-ligand-mlp-2')
embeddings = model.encode(sentences)
print(embeddings)
```
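Since the ensemble members (`jglaser/protein-ligand-mlp-1`, `-2`, `-3`, ...) were trained with different seeds but share the same interface, one way to attach an uncertainty estimate to a prediction is to run the same protein-ligand pair through several of them and look at the spread of the predicted pIC50 values. A minimal sketch of that idea (the protein sequence and SMILES string are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Ensemble members trained with different random seeds
model_names = [
    'jglaser/protein-ligand-mlp-1',
    'jglaser/protein-ligand-mlp-2',
    'jglaser/protein-ligand-mlp-3',
]

# One protein-ligand pair: protein sequence plus canonical SMILES (illustrative)
pairs = [{'protein': ["SEQVENCE"], 'ligand': ["c1ccccc1"]}]

# One predicted pIC50 value per ensemble member
predictions = np.array([SentenceTransformer(name).encode(pairs) for name in model_names]).squeeze()

print("mean pIC50:", predictions.mean())
print("std (uncertainty estimate):", predictions.std())
```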
## Evaluation Results
<!--- Describe how your model was evaluated -->
## Full Model Architecture
```
SentenceTransformer(
(0): Asym(
(protein-0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: BertModel
(protein-1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(protein-2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(ligand-0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(ligand-1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(ligand-2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
(1): Dense({'in_features': 1792, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(2): Dense({'in_features': 1000, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(3): Dense({'in_features': 1000, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(4): Dense({'in_features': 1000, 'out_features': 1, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
(5): Dense({'in_features': 1, 'out_features': 1, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Citing & Authors
- [Andrew E Blanchard](https://github.com/blnchrd)
- [John Gounley](https://github.com/gounley)
- [Debsindhu Bhowmik](https://github.com/debsindhu)
- [Mayanka Chandra Shekar](https://github.com/mayankachandrashekar)
- [Isaac Lyngaas](https://github.com/irlyngaas)
- Shang Gao
- Junqi Yin
- Aristeidis Tsaris
- Feiyi Wang
- [Jens Glaser](https://github.com/jglaser)
Find more information in our [bioRxiv preprint](https://www.biorxiv.org/content/10.1101/2021.12.10.471928v1)
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
jglaser/protein-ligand-mlp-2
| null |
[
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# This model is superseded by URL
# jglaser/protein-ligand-mlp-2
This is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).
Each member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
## Full Model Architecture
## Citing & Authors
- Andrew E Blanchard
- John Gounley
- Debsindhu Bhowmik
- Mayanka Chandra Shekar
- Isaac Lyngaas
- Shang Gao
- Junqi Yin
- Aristeidis Tsaris
- Feiyi Wang
- Jens Glaser
Find more information in our bioRxiv preprint
|
[
"# This model is superseded by URL",
"# jglaser/protein-ligand-mlp-2\n\nThis is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).\n\nEach member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results",
"## Full Model Architecture",
"## Citing & Authors\n- Andrew E Blanchard\n- John Gounley\n- Debsindhu Bhowmik\n- Mayanka Chandra Shekar\n- Isaac Lyngaas\n- Shang Gao\n- Junqi Yin\n- Aristeidis Tsaris\n- Feiyi Wang\n- Jens Glaser\n\nFind more information in our bioRxiv preprint"
] |
[
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# This model is superseded by URL",
"# jglaser/protein-ligand-mlp-2\n\nThis is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).\n\nEach member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results",
"## Full Model Architecture",
"## Citing & Authors\n- Andrew E Blanchard\n- John Gounley\n- Debsindhu Bhowmik\n- Mayanka Chandra Shekar\n- Isaac Lyngaas\n- Shang Gao\n- Junqi Yin\n- Aristeidis Tsaris\n- Feiyi Wang\n- Jens Glaser\n\nFind more information in our bioRxiv preprint"
] |
sentence-similarity
|
sentence-transformers
|
# This model is superseded by [https://github.com/ORNL/affinity_pred](https://github.com/ORNL/affinity_pred)
# jglaser/protein-ligand-mlp-3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).
Each member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
# note: this model requires the author's fork of sentence-transformers rather than the stock package
#pip install -U sentence-transformers
pip install git+https://github.com/jglaser/sentence-transformers.git@enable_mixed
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = [{'protein': ["SEQVENCE"], 'ligand': ["c1ccccc1"]}]
model = SentenceTransformer('jglaser/protein-ligand-mlp-3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
## Full Model Architecture
```
SentenceTransformer(
(0): Asym(
(protein-0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: BertModel
(protein-1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(protein-2): Dense({'in_features': 1024, 'out_features': 1024, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(ligand-0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(ligand-1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(ligand-2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
(1): Dense({'in_features': 1792, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(2): Dense({'in_features': 1000, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(3): Dense({'in_features': 1000, 'out_features': 1000, 'bias': True, 'activation_function': 'torch.nn.modules.activation.GELU'})
(4): Dense({'in_features': 1000, 'out_features': 1, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
(5): Dense({'in_features': 1, 'out_features': 1, 'bias': True, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Citing & Authors
- [Andrew E Blanchard](https://github.com/blnchrd)
- [John Gounley](https://github.com/gounley)
- [Debsindhu Bhowmik](https://github.com/debsindhu)
- [Mayanka Chandra Shekar](https://github.com/mayankachandrashekar)
- [Isaac Lyngaas](https://github.com/irlyngaas)
- Shang Gao
- Junqi Yin
- Aristeidis Tsaris
- Feiyi Wang
- [Jens Glaser](https://github.com/jglaser)
Find more information in our [bioRxiv preprint](https://www.biorxiv.org/content/10.1101/2021.12.10.471928v1)
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
|
jglaser/protein-ligand-mlp-3
| null |
[
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #feature-extraction #sentence-similarity #endpoints_compatible #region-us
|
# This model is superseded by URL
# jglaser/protein-ligand-mlp-3
This is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).
Each member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Evaluation Results
## Full Model Architecture
## Citing & Authors
- Andrew E Blanchard
- John Gounley
- Debsindhu Bhowmik
- Mayanka Chandra Shekar
- Isaac Lyngaas
- Shang Gao
- Junqi Yin
- Aristeidis Tsaris
- Feiyi Wang
- Jens Glaser
Find more information in our bioRxiv preprint
|
[
"# This model is superseded by URL",
"# jglaser/protein-ligand-mlp-3\n\nThis is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).\n\nEach member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results",
"## Full Model Architecture",
"## Citing & Authors\n- Andrew E Blanchard\n- John Gounley\n- Debsindhu Bhowmik\n- Mayanka Chandra Shekar\n- Isaac Lyngaas\n- Shang Gao\n- Junqi Yin\n- Aristeidis Tsaris\n- Feiyi Wang\n- Jens Glaser\n\nFind more information in our bioRxiv preprint"
] |
[
"TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n",
"# This model is superseded by URL",
"# jglaser/protein-ligand-mlp-3\n\nThis is a sentence-transformers model: It maps pairs of protein and chemical sequences (canonical SMILES) onto binding affinities (pIC50 values).\n\nEach member of the ensemble has been trained using a different seed and you can use the different models as independent samples to estimate the uncertainty.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Evaluation Results",
"## Full Model Architecture",
"## Citing & Authors\n- Andrew E Blanchard\n- John Gounley\n- Debsindhu Bhowmik\n- Mayanka Chandra Shekar\n- Isaac Lyngaas\n- Shang Gao\n- Junqi Yin\n- Aristeidis Tsaris\n- Feiyi Wang\n- Jens Glaser\n\nFind more information in our bioRxiv preprint"
] |
sentence-similarity
|
sentence-transformers
|
# jhemmingsson/lab2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jhemmingsson/lab2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhemmingsson/lab2')
model = AutoModel.from_pretrained('jhemmingsson/lab2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jhemmingsson/lab2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 357 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
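For reference, the configuration above corresponds roughly to the following training sketch; the sentence pairs and similarity labels are hypothetical placeholders (the actual run used a DataLoader of length 357 with batch size 16):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('jhemmingsson/lab2')

# Hypothetical sentence pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["This is an example sentence", "This sentence is an example"], label=0.9),
    InputExample(texts=["This is an example sentence", "The weather is nice today"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
    weight_decay=0.01,
    max_grad_norm=1,
    optimizer_params={'lr': 2e-05},
)
```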
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhemmingsson/lab2
| null |
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# jhemmingsson/lab2
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 357 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# jhemmingsson/lab2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 357 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #distilbert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# jhemmingsson/lab2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 357 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# ko-sbert-multitask
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-multitask')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-multitask')
model = AutoModel.from_pretrained('jhgan/ko-sbert-multitask')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Results obtained by multi-task training on the KorSTS and KorNLI training sets and then evaluating on the KorSTS evaluation set.
- Cosine Pearson: 84.13
- Cosine Spearman: 84.71
- Euclidean Pearson: 82.42
- Euclidean Spearman: 82.66
- Manhattan Pearson: 81.41
- Manhattan Spearman: 81.69
- Dot Pearson: 80.05
- Dot Spearman: 79.69
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
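The two DataLoader/Loss pairs above correspond to multi-task training: a ranking objective (`MultipleNegativesRankingLoss` over NLI-style triplets) and a regression objective (`CosineSimilarityLoss` over STS-style pairs) passed jointly to `fit()`. A rough sketch of that setup with hypothetical toy examples and reduced batch sizes (the actual run used the full KorNLI/KorSTS data with batch sizes 64 and 8):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, datasets

model = SentenceTransformer('jhgan/ko-sbert-multitask')

# Hypothetical NLI triplets: (anchor, entailment, contradiction)
nli_examples = [
    InputExample(texts=["한 남자가 음식을 먹는다.", "한 남자가 식사를 한다.", "비행기가 이륙한다."]),
    InputExample(texts=["아이가 공원에서 논다.", "아이가 야외에서 놀고 있다.", "아이가 잠을 잔다."]),
]
nli_dataloader = datasets.NoDuplicatesDataLoader(nli_examples, batch_size=2)
nli_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Hypothetical STS pairs with similarity labels scaled to [0, 1]
sts_examples = [
    InputExample(texts=["오늘 날씨가 좋다.", "날씨가 맑다."], label=0.8),
    InputExample(texts=["오늘 날씨가 좋다.", "그는 책을 읽는다."], label=0.1),
]
sts_dataloader = DataLoader(sts_examples, shuffle=True, batch_size=2)
sts_loss = losses.CosineSimilarityLoss(model)

# fit() alternates between the two objectives at every training step
model.fit(
    train_objectives=[(nli_dataloader, nli_loss), (sts_dataloader, sts_loss)],
    epochs=5,
    warmup_steps=360,
    weight_decay=0.01,
    max_grad_norm=1,
    optimizer_params={'lr': 2e-05},
)
```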
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhgan/ko-sbert-multitask
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #tf #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# ko-sbert-multitask
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
KorSTS, KorNLI 학습 데이터셋으로 멀티 태스크 학습을 진행한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.
- Cosine Pearson: 84.13
- Cosine Spearman: 84.71
- Euclidean Pearson: 82.42
- Euclidean Spearman: 82.66
- Manhattan Pearson: 81.41
- Manhattan Spearman: 81.69
- Dot Pearson: 80.05
- Dot Spearman: 79.69
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 719 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv
preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
[
"# ko-sbert-multitask\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\nKorSTS, KorNLI 학습 데이터셋으로 멀티 태스크 학습을 진행한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 84.13\n- Cosine Spearman: 84.71\n- Euclidean Pearson: 82.42\n- Euclidean Spearman: 82.66\n- Manhattan Pearson: 81.41\n- Manhattan Spearman: 81.69\n- Dot Pearson: 80.05\n- Dot Spearman: 79.69",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv\npreprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# ko-sbert-multitask\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\nKorSTS, KorNLI 학습 데이터셋으로 멀티 태스크 학습을 진행한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 84.13\n- Cosine Spearman: 84.71\n- Euclidean Pearson: 82.42\n- Euclidean Spearman: 82.66\n- Manhattan Pearson: 81.41\n- Manhattan Spearman: 81.69\n- Dot Pearson: 80.05\n- Dot Spearman: 79.69",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv\npreprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
sentence-similarity
|
sentence-transformers
|
# ko-sbert-nli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
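The embeddings can be compared directly with cosine similarity, for example to score how semantically close two sentences are. A minimal sketch (the sentence pair is illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sbert-nli')

# Illustrative sentence pair
emb1 = model.encode("한 남자가 음식을 먹는다.", convert_to_tensor=True)
emb2 = model.encode("한 남자가 식사를 하고 있다.", convert_to_tensor=True)

# Cosine similarity in [-1, 1]; higher means more semantically similar
score = util.cos_sim(emb1, emb2)
print("cosine similarity:", score.item())
```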
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-nli')
model = AutoModel.from_pretrained('jhgan/ko-sbert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Results obtained by training on the KorNLI training set and then evaluating on the KorSTS evaluation set.
- Cosine Pearson: 82.24
- Cosine Spearman: 83.16
- Euclidean Pearson: 82.19
- Euclidean Spearman: 82.31
- Manhattan Pearson: 82.18
- Manhattan Spearman: 82.30
- Dot Pearson: 79.30
- Dot Spearman: 78.78
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 889,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhgan/ko-sbert-nli
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #tf #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# ko-sbert-nli
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
KorNLI 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.
- Cosine Pearson: 82.24
- Cosine Spearman: 83.16
- Euclidean Pearson: 82.19
- Euclidean Spearman: 82.31
- Manhattan Pearson: 82.18
- Manhattan Spearman: 82.30
- Dot Pearson: 79.30
- Dot Spearman: 78.78
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
[
"# ko-sbert-nli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorNLI 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 82.24\n- Cosine Spearman: 83.16\n- Euclidean Pearson: 82.19\n- Euclidean Spearman: 82.31\n- Manhattan Pearson: 82.18\n- Manhattan Spearman: 82.30\n- Dot Pearson: 79.30\n- Dot Spearman: 78.78",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv preprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# ko-sbert-nli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorNLI 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 82.24\n- Cosine Spearman: 83.16\n- Euclidean Pearson: 82.19\n- Euclidean Spearman: 82.31\n- Manhattan Pearson: 82.18\n- Manhattan Spearman: 82.30\n- Dot Pearson: 79.30\n- Dot Spearman: 78.78",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv preprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
sentence-similarity
|
sentence-transformers
|
# ko-sbert-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sbert-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
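The embeddings can also be used for semantic search over a small corpus by ranking corpus sentences against a query. A minimal sketch (corpus and query are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sbert-sts')

# Illustrative corpus and query
corpus = ["날씨가 맑고 따뜻하다.", "주식 시장이 크게 하락했다.", "공원에서 아이들이 놀고 있다."]
query = "오늘 날씨가 어떤가요?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity and keep the top 2
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```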
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sbert-sts')
model = AutoModel.from_pretrained('jhgan/ko-sbert-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Results obtained by training on the KorSTS training set and then evaluating on the KorSTS evaluation set.
- Cosine Pearson: 81.55
- Cosine Spearman: 81.23
- Euclidean Pearson: 79.94
- Euclidean Spearman: 79.79
- Manhattan Pearson: 79.90
- Manhattan Spearman: 79.75
- Dot Pearson: 76.02
- Dot Spearman: 75.31
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhgan/ko-sbert-sts
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #tf #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# ko-sbert-sts
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
KorSTS 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.
- Cosine Pearson: 81.55
- Cosine Spearman: 81.23
- Euclidean Pearson: 79.94
- Euclidean Spearman: 79.79
- Manhattan Pearson: 79.90
- Manhattan Spearman: 79.75
- Dot Pearson: 76.02
- Dot Spearman: 75.31
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 719 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv
preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)
|
[
"# ko-sbert-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorSTS 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 81.55\n- Cosine Spearman: 81.23\n- Euclidean Pearson: 79.94\n- Euclidean Spearman: 79.79\n- Manhattan Pearson: 79.90\n- Manhattan Spearman: 79.75\n- Dot Pearson: 76.02\n- Dot Spearman: 75.31",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv\npreprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)"
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# ko-sbert-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorSTS 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 81.55\n- Cosine Spearman: 81.23\n- Euclidean Pearson: 79.94\n- Euclidean Spearman: 79.79\n- Manhattan Pearson: 79.90\n- Manhattan Spearman: 79.75\n- Dot Pearson: 76.02\n- Dot Spearman: 75.31",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv\npreprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)"
] |
sentence-similarity
|
sentence-transformers
|
# ko-sroberta-multitask
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sroberta-multitask')
embeddings = model.encode(sentences)
print(embeddings)
```
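The embeddings can likewise be fed into a standard clustering algorithm such as k-means to group semantically related sentences. A minimal sketch (the sentences and the number of clusters are illustrative; scikit-learn is assumed to be installed):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('jhgan/ko-sroberta-multitask')

# Illustrative sentences covering two rough topics (weather vs. finance)
corpus = [
    "오늘 날씨가 정말 좋다.",
    "내일은 비가 올 것 같다.",
    "주식 시장이 크게 하락했다.",
    "환율이 급등하고 있다.",
]

embeddings = model.encode(corpus)

# Group the sentences into two clusters based on their embeddings
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)
for label, sentence in zip(kmeans.labels_, corpus):
    print(label, sentence)
```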
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sroberta-multitask')
model = AutoModel.from_pretrained('jhgan/ko-sroberta-multitask')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Results obtained by multi-task training on the KorSTS and KorNLI training sets and then evaluating on the KorSTS evaluation set.
- Cosine Pearson: 84.77
- Cosine Spearman: 85.60
- Euclidean Pearson: 83.71
- Euclidean Spearman: 84.40
- Manhattan Pearson: 83.70
- Manhattan Spearman: 84.38
- Dot Pearson: 82.42
- Dot Spearman: 82.33
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
{"language": "ko", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhgan/ko-sroberta-multitask
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#sentence-transformers #pytorch #tf #roberta #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #has_space #region-us
|
# ko-sroberta-multitask
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
KorSTS, KorNLI 학습 데이터셋으로 멀티 태스크 학습을 진행한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.
- Cosine Pearson: 84.77
- Cosine Spearman: 85.60
- Euclidean Pearson: 83.71
- Euclidean Spearman: 84.40
- Manhattan Pearson: 83.70
- Manhattan Spearman: 84.38
- Dot Pearson: 82.42
- Dot Spearman: 82.33
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 719 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv
preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
[
"# ko-sroberta-multitask\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorSTS, KorNLI 학습 데이터셋으로 멀티 태스크 학습을 진행한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 84.77\n- Cosine Spearman: 85.60\n- Euclidean Pearson: 83.71\n- Euclidean Spearman: 84.40\n- Manhattan Pearson: 83.70\n- Manhattan Spearman: 84.38\n- Dot Pearson: 82.42\n- Dot Spearman: 82.33",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv\npreprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #roberta #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #has_space #region-us \n",
"# ko-sroberta-multitask\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorSTS, KorNLI 학습 데이터셋으로 멀티 태스크 학습을 진행한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 84.77\n- Cosine Spearman: 85.60\n- Euclidean Pearson: 83.71\n- Euclidean Spearman: 84.40\n- Manhattan Pearson: 83.70\n- Manhattan Spearman: 84.38\n- Dot Pearson: 82.42\n- Dot Spearman: 82.33",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv\npreprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
sentence-similarity
|
sentence-transformers
|
# ko-sroberta-nli
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sroberta-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
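The embeddings can be compared directly; a small follow-up sketch using the util helpers bundled with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sroberta-nli')
embeddings = model.encode(["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."], convert_to_tensor=True)
# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```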
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sroberta-nli')
model = AutoModel.from_pretrained('jhgan/ko-sroberta-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Results after training on the KorNLI training dataset, evaluated on the KorSTS evaluation dataset.
- Cosine Pearson: 82.83
- Cosine Spearman: 83.85
- Euclidean Pearson: 82.87
- Euclidean Spearman: 83.29
- Manhattan Pearson: 82.88
- Manhattan Spearman: 83.28
- Dot Pearson: 80.34
- Dot Spearman: 79.69
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 889,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
{"language": "ko", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhgan/ko-sroberta-nli
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#sentence-transformers #pytorch #tf #roberta #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #region-us
|
# ko-sroberta-nli
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
Results after training on the KorNLI training dataset, evaluated on the KorSTS evaluation dataset.
- Cosine Pearson: 82.83
- Cosine Spearman: 83.85
- Euclidean Pearson: 82.87
- Euclidean Spearman: 83.29
- Manhattan Pearson: 82.88
- Manhattan Spearman: 83.28
- Dot Pearson: 80.34
- Dot Spearman: 79.69
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding. arXiv preprint arXiv:2004.03289
- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)
- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020).
|
[
"# ko-sroberta-nli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorNLI 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 82.83\n- Cosine Spearman: 83.85\n- Euclidean Pearson: 82.87\n- Euclidean Spearman: 83.29\n- Manhattan Pearson: 82.88\n- Manhattan Spearman: 83.28\n- Dot Pearson: 80.34\n- Dot Spearman: 79.69",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv preprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #roberta #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #region-us \n",
"# ko-sroberta-nli\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorNLI 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 82.83\n- Cosine Spearman: 83.85\n- Euclidean Pearson: 82.87\n- Euclidean Spearman: 83.29\n- Manhattan Pearson: 82.88\n- Manhattan Spearman: 83.28\n- Dot Pearson: 80.34\n- Dot Spearman: 79.69",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 8885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors\n\n\n- Ham, J., Choe, Y. J., Park, K., Choi, I., & Soh, H. (2020). Kornli and korsts: New benchmark datasets for korean natural language understanding. arXiv preprint arXiv:2004.03289\n- Reimers, Nils and Iryna Gurevych. “Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks.” ArXiv abs/1908.10084 (2019)\n- Reimers, Nils and Iryna Gurevych. “Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation.” EMNLP (2020)."
] |
sentence-similarity
|
sentence-transformers
|
# ko-sroberta-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]
model = SentenceTransformer('jhgan/ko-sroberta-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
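Since the model is suggested for semantic search, here is a minimal search sketch (the corpus and query are illustrative only):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sroberta-sts')
corpus = ["한국어 문장 임베딩을 위한 버트 모델입니다.", "오늘은 날씨가 맑습니다.", "자연어 처리는 재미있습니다."]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("문장 임베딩 모델", convert_to_tensor=True)
# Rank the corpus against the query by cosine similarity and keep the top 2 hits.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits)
```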
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sroberta-sts')
model = AutoModel.from_pretrained('jhgan/ko-sroberta-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Results after training on the KorSTS training dataset, evaluated on the KorSTS evaluation dataset.
- Cosine Pearson: 81.84
- Cosine Spearman: 81.82
- Euclidean Pearson: 81.15
- Euclidean Spearman: 81.25
- Manhattan Pearson: 81.14
- Manhattan Spearman: 81.25
- Dot Pearson: 79.09
- Dot Spearman: 78.54
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
jhgan/ko-sroberta-sts
| null |
[
"sentence-transformers",
"pytorch",
"tf",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #tf #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# ko-sroberta-sts
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
Results after training on the KorSTS training dataset, evaluated on the KorSTS evaluation dataset.
- Cosine Pearson: 81.84
- Cosine Spearman: 81.82
- Euclidean Pearson: 81.15
- Euclidean Spearman: 81.25
- Manhattan Pearson: 81.14
- Manhattan Spearman: 81.25
- Dot Pearson: 79.09
- Dot Spearman: 78.54
## Training
The model was trained with the parameters:
DataLoader:
'torch.utils.data.dataloader.DataLoader' of length 719 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# ko-sroberta-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorSTS 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 81.84\n- Cosine Spearman: 81.82\n- Euclidean Pearson: 81.15\n- Euclidean Spearman: 81.25\n- Manhattan Pearson: 81.14\n- Manhattan Spearman: 81.25\n- Dot Pearson: 79.09\n- Dot Spearman: 78.54",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #tf #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# ko-sroberta-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nKorSTS 학습 데이터셋으로 학습한 후 KorSTS 평가 데이터셋으로 평가한 결과입니다.\n\n- Cosine Pearson: 81.84\n- Cosine Spearman: 81.82\n- Euclidean Pearson: 81.15\n- Euclidean Spearman: 81.25\n- Manhattan Pearson: 81.14\n- Manhattan Spearman: 81.25\n- Dot Pearson: 79.09\n- Dot Spearman: 78.54",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 719 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-guarani-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4964
- Wer: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged TrainingArguments sketch follows this list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
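A hedged TrainingArguments sketch that mirrors the list above; the output directory is a placeholder and the dataset, model, and Trainer wiring are omitted:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-guarani-small",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    warmup_steps=100,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,                       # Native AMP mixed precision
)
```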
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.65 | 100 | 1.1326 | 1.0 |
| 1.6569 | 13.32 | 200 | 0.5264 | 0.6478 |
| 1.6569 | 19.97 | 300 | 0.5370 | 0.6261 |
| 0.2293 | 26.65 | 400 | 0.4964 | 0.5957 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice", "gn"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-guarani-small", "results": []}]}
|
jhonparra18/wav2vec2-large-xls-r-300m-guarani-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:common_voice",
"dataset:gn",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #dataset-common_voice #dataset-gn #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-guarani-small
=======================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4964
* Wer: 0.5957
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #dataset-common_voice #dataset-gn #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-custom
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2245
- eval_wer: 0.2082
- eval_runtime: 801.6784
- eval_samples_per_second: 18.822
- eval_steps_per_second: 2.354
- epoch: 0.76
- step: 8400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-spanish-custom", "results": []}]}
|
jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-spanish-custom
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2245
- eval_wer: 0.2082
- eval_runtime: 801.6784
- eval_samples_per_second: 18.822
- eval_steps_per_second: 2.354
- epoch: 0.76
- step: 8400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
[
"# wav2vec2-large-xls-r-300m-spanish-custom\n\nThis model was trained from scratch on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.2245\n- eval_wer: 0.2082\n- eval_runtime: 801.6784\n- eval_samples_per_second: 18.822\n- eval_steps_per_second: 2.354\n- epoch: 0.76\n- step: 8400",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-spanish-custom\n\nThis model was trained from scratch on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.2245\n- eval_wer: 0.2082\n- eval_runtime: 801.6784\n- eval_samples_per_second: 18.822\n- eval_steps_per_second: 2.354\n- epoch: 0.76\n- step: 8400",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 200\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-large
This model is a fine-tuned version of [tomascufaro/xls-r-es-test](https://huggingface.co/tomascufaro/xls-r-es-test) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1431
- Wer: 0.1197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1769 | 0.15 | 400 | 0.1795 | 0.1698 |
| 0.217 | 0.3 | 800 | 0.2000 | 0.1945 |
| 0.2372 | 0.45 | 1200 | 0.1985 | 0.1859 |
| 0.2351 | 0.6 | 1600 | 0.1901 | 0.1772 |
| 0.2269 | 0.75 | 2000 | 0.1968 | 0.1783 |
| 0.2284 | 0.9 | 2400 | 0.1873 | 0.1771 |
| 0.2014 | 1.06 | 2800 | 0.1840 | 0.1696 |
| 0.1988 | 1.21 | 3200 | 0.1904 | 0.1730 |
| 0.1919 | 1.36 | 3600 | 0.1827 | 0.1630 |
| 0.1919 | 1.51 | 4000 | 0.1788 | 0.1629 |
| 0.1817 | 1.66 | 4400 | 0.1755 | 0.1558 |
| 0.1812 | 1.81 | 4800 | 0.1795 | 0.1638 |
| 0.1808 | 1.96 | 5200 | 0.1762 | 0.1603 |
| 0.1625 | 2.11 | 5600 | 0.1721 | 0.1557 |
| 0.1477 | 2.26 | 6000 | 0.1735 | 0.1504 |
| 0.1508 | 2.41 | 6400 | 0.1708 | 0.1478 |
| 0.157 | 2.56 | 6800 | 0.1644 | 0.1466 |
| 0.1491 | 2.71 | 7200 | 0.1638 | 0.1445 |
| 0.1458 | 2.86 | 7600 | 0.1582 | 0.1426 |
| 0.1387 | 3.02 | 8000 | 0.1607 | 0.1376 |
| 0.1269 | 3.17 | 8400 | 0.1559 | 0.1364 |
| 0.1172 | 3.32 | 8800 | 0.1521 | 0.1335 |
| 0.1203 | 3.47 | 9200 | 0.1534 | 0.1330 |
| 0.1177 | 3.62 | 9600 | 0.1485 | 0.1304 |
| 0.1167 | 3.77 | 10000 | 0.1498 | 0.1302 |
| 0.1194 | 3.92 | 10400 | 0.1463 | 0.1287 |
| 0.1053 | 4.07 | 10800 | 0.1483 | 0.1282 |
| 0.098 | 4.22 | 11200 | 0.1498 | 0.1267 |
| 0.0958 | 4.37 | 11600 | 0.1461 | 0.1233 |
| 0.0946 | 4.52 | 12000 | 0.1444 | 0.1218 |
| 0.094 | 4.67 | 12400 | 0.1434 | 0.1206 |
| 0.0932 | 4.82 | 12800 | 0.1424 | 0.1206 |
| 0.0912 | 4.98 | 13200 | 0.1431 | 0.1197 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "es", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-spanish-large", "results": []}]}
|
jhonparra18/wav2vec2-xls-r-300m-spanish-large-noLM
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #es #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-spanish-large
=======================================
This model is a fine-tuned version of tomascufaro/xls-r-es-test on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1431
* Wer: 0.1197
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 10
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 20
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #es #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 20\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
Our bibert-ende is a bilingual English-German Language Model. Please check out our EMNLP 2021 paper "[BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation](https://aclanthology.org/2021.emnlp-main.534.pdf)" for more details.
```
@inproceedings{xu-etal-2021-bert,
title = "{BERT}, m{BERT}, or {B}i{BERT}? A Study on Contextualized Embeddings for Neural Machine Translation",
author = "Xu, Haoran and
Van Durme, Benjamin and
Murray, Kenton",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.534",
pages = "6663--6675",
abstract = "The success of bidirectional encoders using masked language models, such as BERT, on numerous natural language processing tasks has prompted researchers to attempt to incorporate these pre-trained models into neural machine translation (NMT) systems. However, proposed methods for incorporating pre-trained models are non-trivial and mainly focus on BERT, which lacks a comparison of the impact that other pre-trained models may have on translation performance. In this paper, we demonstrate that simply using the output (contextualized embeddings) of a tailored and suitable bilingual pre-trained language model (dubbed BiBERT) as the input of the NMT encoder achieves state-of-the-art translation performance. Moreover, we also propose a stochastic layer selection approach and a concept of a dual-directional translation model to ensure the sufficient utilization of contextualized embeddings. In the case of without using back translation, our best models achieve BLEU scores of 30.45 for En→De and 38.61 for De→En on the IWSLT{'}14 dataset, and 31.26 for En→De and 34.94 for De→En on the WMT{'}14 dataset, which exceeds all published numbers.",
}
```
# Download
Note that the tokenizer class to use is `BertTokenizer`, not `AutoTokenizer`.
```
from transformers import BertTokenizer, AutoModel
tokenizer = BertTokenizer.from_pretrained("jhu-clsp/bibert-ende")
model = AutoModel.from_pretrained("jhu-clsp/bibert-ende")
```
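A short follow-up sketch for extracting contextualized embeddings (the input sentence is illustrative only):
```
import torch
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("jhu-clsp/bibert-ende")
model = AutoModel.from_pretrained("jhu-clsp/bibert-ende")

# Encode one sentence and inspect the last-layer hidden states.
inputs = tokenizer("Das ist ein Test.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```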
|
{"language": ["en", "de"]}
|
jhu-clsp/bibert-ende
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"en",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #en #de #autotrain_compatible #endpoints_compatible #region-us
|
Our bibert-ende is a bilingual English-German Language Model. Please check out our EMNLP 2021 paper "BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation" for more details.
# Download
Note that the tokenizer class to use is 'BertTokenizer', not 'AutoTokenizer'.
|
[
"# Download\n\nNote that tokenizer package is 'BertTokenizer' not 'AutoTokenizer'."
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #en #de #autotrain_compatible #endpoints_compatible #region-us \n",
"# Download\n\nNote that tokenizer package is 'BertTokenizer' not 'AutoTokenizer'."
] |
null |
transformers
|
This is the pre-trained model presented in [Automated Chemical Reaction Extraction from Scientific Literature](https://pubs.acs.org/doi/pdf/10.1021/acs.jcim.1c00284), which is a BERT model trained on chemical literature data.
The training corpus was taken from ~200K ACS publications; more details can be found in the paper.
If using these models, please cite the following paper:
```latex
@article{guo2021automated,
title={Automated Chemical Reaction Extraction from Scientific Literature},
author={Guo, Jiang and Ibanez-Lopez, A Santiago and Gao, Hanyu and Quach, Victor and Coley, Connor W and Jensen, Klavs F and Barzilay, Regina},
journal={Journal of Chemical Information and Modeling},
year={2021},
publisher={ACS Publications}
}
```
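The card does not include a loading snippet; a hedged sketch of the usual pattern, assuming the repository ships standard config and tokenizer files:
```python
from transformers import AutoTokenizer, AutoModel

# Assumed loading pattern; the task-specific head depends on the downstream use described in the paper.
tokenizer = AutoTokenizer.from_pretrained("jiangg/chembert_cased")
model = AutoModel.from_pretrained("jiangg/chembert_cased")
```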
|
{}
|
jiangg/chembert_cased
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
This is the pre-trained model presented in Automated Chemical Reaction Extraction from Scientific Literature, which is a BERT model trained on chemical literature data.
The training corpus was taken from ~200K ACS publications; more details can be found in the paper.
If using these models, please cite the following paper:
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
null |
transformers
|
This adds tokens that were being replaced with [UNK] by the tokenizer of KcELECTRA ([https://github.com/Beomi/KcELECTRA](https://github.com/Beomi/KcELECTRA)).
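A minimal loading sketch (the example text is illustrative only):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jiho0304/bad-korean-tokenizer")
# Tokens that the original KcELECTRA tokenizer mapped to [UNK] should now be preserved.
print(tokenizer.tokenize("ㅋㅋㅋ 오늘 날씨 뭐야"))
```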
|
{}
|
jiho0304/bad-korean-tokenizer
| null |
[
"transformers",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #electra #pretraining #endpoints_compatible #region-us
|
This adds tokens that were being replaced with [UNK] by the tokenizer of KcELECTRA (URL).
|
[] |
[
"TAGS\n#transformers #electra #pretraining #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
An ELECTRA model fine-tuned on the korean-bad-speeches dataset for text classification.
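A hedged inference sketch using the text-classification pipeline (the input is illustrative and the label names depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jiho0304/curseELECTRA")
print(classifier("이 문장은 예시입니다."))
```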
|
{}
|
jiho0304/curseELECTRA
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
An ELECTRA model fine-tuned on the korean-bad-speeches dataset for text classification.
|
[] |
[
"TAGS\n#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8394
- Matthews Correlation: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5259 | 1.0 | 535 | 0.5429 | 0.4064 |
| 0.342 | 2.0 | 1070 | 0.5270 | 0.5081 |
| 0.234 | 3.0 | 1605 | 0.6115 | 0.5268 |
| 0.1703 | 4.0 | 2140 | 0.7344 | 0.5387 |
| 0.1283 | 5.0 | 2675 | 0.8394 | 0.5414 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.8.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
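For reference, the Matthews correlation reported above can be reproduced with the evaluate library; a toy sketch with made-up labels (the card's 0.5414 comes from the GLUE CoLA validation set):
```python
from evaluate import load

matthews = load("matthews_correlation")
print(matthews.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
```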
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.541356878970505, "name": "Matthews Correlation"}]}]}]}
|
jimmyliao/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8394
* Matthews Correlation: 0.5414
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.8.0+cpu
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.8.0+cpu\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.8.0+cpu\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTreach-finetuned-ner
This model is a fine-tuned version of [jimregan/BERTreach](https://huggingface.co/jimregan/BERTreach) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4944
- Precision: 0.5201
- Recall: 0.5667
- F1: 0.5424
- Accuracy: 0.8366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 0.7249 | 0.3645 | 0.3905 | 0.3770 | 0.7584 |
| No log | 2.0 | 126 | 0.5850 | 0.4529 | 0.4948 | 0.4729 | 0.8072 |
| No log | 3.0 | 189 | 0.5192 | 0.4949 | 0.5456 | 0.5190 | 0.8288 |
| No log | 4.0 | 252 | 0.5042 | 0.5208 | 0.5592 | 0.5393 | 0.8348 |
| No log | 5.0 | 315 | 0.4944 | 0.5201 | 0.5667 | 0.5424 | 0.8366 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
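The card has no inference snippet; a hedged sketch using the token-classification pipeline and the widget sentence from the metadata:
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entity spans.
ner = pipeline("token-classification", model="jimregan/BERTreach-finetuned-ner", aggregation_strategy="simple")
print(ner("Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."))
```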
|
{"language": "ga", "license": "apache-2.0", "tags": ["generated_from_trainer", "irish"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "Saola\u00edodh P\u00e1draic \u00d3 Conaire i nGaillimh sa bhliain 1882."}], "model-index": [{"name": "BERTreach-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "ga"}, "metrics": [{"type": "precision", "value": 0.5200517464424321, "name": "Precision"}, {"type": "recall", "value": 0.5667293233082706, "name": "Recall"}, {"type": "f1", "value": 0.5423881268270744, "name": "F1"}, {"type": "accuracy", "value": 0.8365605828220859, "name": "Accuracy"}]}]}]}
|
jimregan/BERTreach-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"irish",
"ga",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #irish #ga #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
BERTreach-finetuned-ner
=======================
This model is a fine-tuned version of jimregan/BERTreach on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4944
* Precision: 0.5201
* Recall: 0.5667
* F1: 0.5424
* Accuracy: 0.8366
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #irish #ga #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
## BERTreach
([beirtreach](https://www.teanglann.ie/en/fgb/beirtreach) means 'oyster bed')
**Model size:** 84M
**Training data:**
* [PARSEME 1.2](https://gitlab.com/parseme/parseme_corpus_ga/-/blob/master/README.md)
* Newscrawl 300k portion of the [Leipzig Corpora](https://wortschatz.uni-leipzig.de/en/download/irish)
* Private news corpus crawled with [Corpus Crawler](https://github.com/google/corpuscrawler)
(2125804 sentences, 47419062 tokens, as reckoned by wc)
```
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach")
```
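A short usage example for the pipeline above (the masked word is illustrative; `<mask>` is the RoBERTa-style mask token assumed here):
```
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jimregan/BERTreach", tokenizer="jimregan/BERTreach")
print(fill_mask("Saolaíodh Pádraic Ó Conaire i <mask> sa bhliain 1882."))
```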
|
{"language": "ga", "license": "apache-2.0", "tags": ["irish"]}
|
jimregan/BERTreach
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"irish",
"ga",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #fill-mask #irish #ga #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## BERTreach
(beirtreach means 'oyster bed')
Model size: 84M
Training data:
* PARSEME 1.2
* Newscrawl 300k portion of the Leipzig Corpora
* Private news corpus crawled with Corpus Crawler
(2125804 sentences, 47419062 tokens, as reckoned by wc)
|
[
"## BERTreach\n\n(beirtreach means 'oyster bed')\n\nModel size: 84M\n\nTraining data: \n* PARSEME 1.2 \n* Newscrawl 300k portion of the Leipzig Corpora\n* Private news corpus crawled with Corpus Crawler\n\n(2125804 sentences, 47419062 tokens, as reckoned by wc)"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #fill-mask #irish #ga #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## BERTreach\n\n(beirtreach means 'oyster bed')\n\nModel size: 84M\n\nTraining data: \n* PARSEME 1.2 \n* Newscrawl 300k portion of the Leipzig Corpora\n* Private news corpus crawled with Corpus Crawler\n\n(2125804 sentences, 47419062 tokens, as reckoned by wc)"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-irish-cased-v1-finetuned-ner
This model is a fine-tuned version of [DCU-NLP/bert-base-irish-cased-v1](https://huggingface.co/DCU-NLP/bert-base-irish-cased-v1) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2468
- Precision: 0.8191
- Recall: 0.8363
- F1: 0.8276
- Accuracy: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 0.4902 | 0.5579 | 0.5269 | 0.5420 | 0.8458 |
| No log | 2.0 | 126 | 0.3227 | 0.7169 | 0.7417 | 0.7291 | 0.8991 |
| No log | 3.0 | 189 | 0.2720 | 0.7895 | 0.7839 | 0.7867 | 0.9186 |
| No log | 4.0 | 252 | 0.2585 | 0.8128 | 0.8296 | 0.8211 | 0.9264 |
| No log | 5.0 | 315 | 0.2468 | 0.8191 | 0.8363 | 0.8276 | 0.9307 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"language": "ga", "license": "apache-2.0", "tags": ["generated_from_trainer", "irish"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "Saola\u00edodh P\u00e1draic \u00d3 Conaire i nGaillimh sa bhliain 1882."}], "base_model": "DCU-NLP/bert-base-irish-cased-v1", "model-index": [{"name": "bert-base-irish-cased-v1-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "ga"}, "metrics": [{"type": "precision", "value": 0.8190601668862538, "name": "Precision"}, {"type": "recall", "value": 0.8363228699551569, "name": "Recall"}, {"type": "f1", "value": 0.8276015087641446, "name": "F1"}, {"type": "accuracy", "value": 0.9306559069156423, "name": "Accuracy"}]}]}]}
|
jimregan/bert-base-irish-cased-v1-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"irish",
"ga",
"dataset:wikiann",
"base_model:DCU-NLP/bert-base-irish-cased-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #bert #token-classification #generated_from_trainer #irish #ga #dataset-wikiann #base_model-DCU-NLP/bert-base-irish-cased-v1 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-irish-cased-v1-finetuned-ner
======================================
This model is a fine-tuned version of DCU-NLP/bert-base-irish-cased-v1 on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2468
* Precision: 0.8191
* Recall: 0.8363
* F1: 0.8276
* Accuracy: 0.9307
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #token-classification #generated_from_trainer #irish #ga #dataset-wikiann #base_model-DCU-NLP/bert-base-irish-cased-v1 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-irish-cased-discriminator-v1-finetuned-ner
This model is a fine-tuned version of [DCU-NLP/electra-base-irish-cased-generator-v1](https://huggingface.co/DCU-NLP/electra-base-irish-cased-generator-v1) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6654
- Precision: 0.5414
- Recall: 0.5161
- F1: 0.5285
- Accuracy: 0.8420
## Model description
More information needed
## Intended uses & limitations
More information needed
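The same token-classification pipeline pattern as the BERT-based card above applies here; a brief sketch (not part of the original card, example sentence from the card's widget):
```python
# Hedged usage sketch for the ELECTRA-based NER fine-tune.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jimregan/electra-base-irish-cased-discriminator-v1-finetuned-ner",
)
print(ner("Saolaíodh Pádraic Ó Conaire i nGaillimh sa bhliain 1882."))
```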
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 63 | 1.3231 | 0.1046 | 0.0417 | 0.0596 | 0.5449 |
| No log | 2.0 | 126 | 0.9710 | 0.3879 | 0.3359 | 0.3600 | 0.7486 |
| No log | 3.0 | 189 | 0.7723 | 0.4713 | 0.4457 | 0.4582 | 0.8152 |
| No log | 4.0 | 252 | 0.6892 | 0.5257 | 0.4910 | 0.5078 | 0.8347 |
| No log | 5.0 | 315 | 0.6654 | 0.5414 | 0.5161 | 0.5285 | 0.8420 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"language": "ga", "license": "apache-2.0", "tags": ["generated_from_trainer", "irish"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "Saola\u00edodh P\u00e1draic \u00d3 Conaire i nGaillimh sa bhliain 1882."}], "model-index": [{"name": "electra-base-irish-cased-discriminator-v1-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "ga"}, "metrics": [{"type": "precision", "value": 0.5413922859830668, "name": "Precision"}, {"type": "recall", "value": 0.5161434977578475, "name": "Recall"}, {"type": "f1", "value": 0.5284664830119375, "name": "F1"}, {"type": "accuracy", "value": 0.8419817960026273, "name": "Accuracy"}]}]}]}
|
jimregan/electra-base-irish-cased-discriminator-v1-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"irish",
"ga",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #electra #token-classification #generated_from_trainer #irish #ga #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
electra-base-irish-cased-discriminator-v1-finetuned-ner
=======================================================
This model is a fine-tuned version of DCU-NLP/electra-base-irish-cased-generator-v1 on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6654
* Precision: 0.5414
* Recall: 0.5161
* F1: 0.5285
* Accuracy: 0.8420
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #electra #token-classification #generated_from_trainer #irish #ga #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-irish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4286
- Wer: 0.5097
## Model description
More information needed
## Intended uses & limitations
More information needed
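As a sketch only (not part of the original card), inference can be run with the automatic-speech-recognition pipeline; `sample.wav` is a placeholder for any 16 kHz mono recording:
```python
# Hedged inference sketch; decoding a local audio file requires ffmpeg to be installed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jimregan/wav2vec2-large-xls-r-300m-irish-colab",
)
print(asr("sample.wav")["text"])
```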
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 210
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.3406 | 24.97 | 400 | 1.1677 | 0.7270 |
| 0.2527 | 49.97 | 800 | 1.2686 | 0.5927 |
| 0.0797 | 74.97 | 1200 | 1.3970 | 0.5769 |
| 0.0424 | 99.97 | 1600 | 1.4093 | 0.5600 |
| 0.0286 | 124.97 | 2000 | 1.3684 | 0.5407 |
| 0.0174 | 149.97 | 2400 | 1.4571 | 0.5205 |
| 0.0109 | 174.97 | 2800 | 1.4327 | 0.5178 |
| 0.0072 | 199.97 | 3200 | 1.4286 | 0.5097 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-irish-colab", "results": []}]}
|
jimregan/wav2vec2-large-xls-r-300m-irish-colab
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-irish-colab
=====================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4286
* Wer: 0.5097
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 210
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu113
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 210\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu113\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 210\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu113\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Irish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Irish Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ga-IE", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-irish-basic")
model.to("cuda")

# So, tolower() for Irish is a bit complicated: tAthair -> t-athair
# toupper() is non-deterministic :)
def is_upper_vowel(letter):
    if letter in ['A', 'E', 'I', 'O', 'U', 'Á', 'É', 'Í', 'Ó', 'Ú']:
        return True
    else:
        return False

def irish_lower(word):
    if len(word) > 1 and word[0] in ['n', 't'] and is_upper_vowel(word[1]):
        return word[0] + '-' + word[1:].lower()
    else:
        return word.lower()

def irish_lower_sentence(sentence):
    return " ".join([irish_lower(w) for w in sentence.split(" ")])

chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*]'

def remove_special_characters(sentence):
    tmp = re.sub('’ ', ' ', sentence)
    tmp = re.sub("’$", '', tmp)
    tmp = re.sub('’', '\'', tmp)
    tmp = re.sub(chars_to_ignore_regex, '', tmp)
    sentence = irish_lower_sentence(tmp) + ' '
    return sentence

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = remove_special_characters(batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference over the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.7 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/irish/fine-tune-xlsr-wav2vec2-on-irish-asr-with-transformers.ipynb)
|
{"language": "ga", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Irish by Jim O'Regan", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ga-IE", "type": "common_voice", "args": "ga-IE"}, "metrics": [{"type": "wer", "value": 47.4, "name": "Test WER"}]}]}]}
|
jimregan/wav2vec2-large-xlsr-irish-basic
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ga",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ga #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Irish
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Irish Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Irish test data of Common Voice.
Test Result: 43.7 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-Irish\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Irish Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Irish test data of Common Voice.\n\nTest Result: 43.7 %",
"## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training.\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ga #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Irish\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Irish Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Irish test data of Common Voice.\n\nTest Result: 43.7 %",
"## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training.\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Latvian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Latvian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "lv", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Latvian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "lv", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-latvian-cv")
model.to("cuda")

chars_to_ignore_regex = '[,\?\.\!\;\:\"\“\%\‘\”\(\)\*\…\—\–\']'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference over the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 29.95 %
|
{"language": "lv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-large-xlsr-53", "model-index": [{"name": "jimregan/wav2vec2-large-xlsr-latvian-cv", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lv", "type": "common_voice", "args": "lv"}, "metrics": [{"type": "wer", "value": 29.95, "name": "Test WER"}]}]}]}
|
jimregan/wav2vec2-large-xlsr-latvian-cv
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"lv",
"dataset:common_voice",
"base_model:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"lv"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lv #dataset-common_voice #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Latvian
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Latvian Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Latvian test data of Common Voice.
Test Result: 29.95 %
|
[
"# Wav2Vec2-Large-XLSR-Latvian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Latvian Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Latvian test data of Common Voice.\n\n\n\nTest Result: 29.95 %"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lv #dataset-common_voice #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Latvian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Latvian Common Voice dataset.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Latvian test data of Common Voice.\n\n\n\nTest Result: 29.95 %"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-Upper-Sorbian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Upper Sorbian Common Voice dataset](https://huggingface.co/datasets/common_voice), with an
extra 28 minutes of audio from an online [Sorbian course](https://sprachkurs.sorbischlernen.de/).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "hsb", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Upper Sorbian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "hsb", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model = Wav2Vec2ForCTC.from_pretrained("jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed")
model.to("cuda")

chars_to_ignore_regex = '[,\?\.\!\-\;\:\"\“\%\‘\”\�„«»–]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference over the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.2 %
## Training
The Common Voice `train` and `validation` datasets were used for training, with the vocabulary from the English A1 lesson from an online [Sorbian course](https://sprachkurs.sorbischlernen.de/)
The script used for training can be found [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/upper_sorbian/fine-tune-xlsr-wav2vec2-on-upper-sorbian-asr-with-transformers.ipynb)
The script used for cleaning the transcripts of the vocabulary data is [here](https://github.com/jimregan/wav2vec2-sprint/blob/main/upper_sorbian/sprachkurs.ipynb)
|
{"language": "hsb", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Upper Sorbian mixed by Jim O'Regan", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hsb", "type": "common_voice", "args": "hsb"}, "metrics": [{"type": "wer", "value": 43.48, "name": "Test WER"}]}]}]}
|
jimregan/wav2vec2-large-xlsr-upper-sorbian-mixed
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hsb",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hsb"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-Upper-Sorbian
Fine-tuned facebook/wav2vec2-large-xlsr-53
on the Upper Sorbian Common Voice dataset, with an
extra 28 minutes of audio from an online Sorbian course.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Upper Sorbian test data of Common Voice.
Test Result: 48.2 %
## Training
The Common Voice 'train' and 'validation' datasets were used for training, with the vocabulary from the English A1 lesson from an online Sorbian course
The script used for training can be found here
The script used for cleaning the transcripts of the vocabulary data is here
|
[
"# Wav2Vec2-Large-XLSR-Upper-Sorbian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Upper Sorbian Common Voice dataset, with an \nextra 28 minutes of audio from an online Sorbian course.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Upper Sorbian test data of Common Voice.\n\n\n\n\nTest Result: 48.2 %",
"## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training, with the vocabulary from the English A1 lesson from an online Sorbian course\n\nThe script used for training can be found here\n\nThe script used for cleaning the transcripts of the vocabulary data is here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #hsb #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-Upper-Sorbian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53\non the Upper Sorbian Common Voice dataset, with an \nextra 28 minutes of audio from an online Sorbian course.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Upper Sorbian test data of Common Voice.\n\n\n\n\nTest Result: 48.2 %",
"## Training\n\nThe Common Voice 'train' and 'validation' datasets were used for training, with the vocabulary from the English A1 lesson from an online Sorbian course\n\nThe script used for training can be found here\n\nThe script used for cleaning the transcripts of the vocabulary data is here"
] |
question-answering
|
transformers
|
# BERT-Base Uncased SQuADv1
`bert-base-uncased` trained on question answering with `squad`.
Evaluation scores:
```
***** eval metrics *****
epoch = 3.0
eval_exact_match = 80.6906
eval_f1 = 88.1129
eval_samples = 10784
```
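A minimal question-answering sketch (not part of the original card; the question/context pair is illustrative only):
```python
# Hedged usage sketch for the SQuADv1 fine-tune.
from transformers import pipeline

qa = pipeline("question-answering", model="jimypbr/bert-base-uncased-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="bert-base-uncased was fine-tuned for extractive question answering on SQuAD v1.1.",
)
print(result["answer"], round(result["score"], 3))
```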
|
{"license": "apache-2.0"}
|
jimypbr/bert-base-uncased-squad
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #license-apache-2.0 #endpoints_compatible #region-us
|
# BERT-Base Uncased SQuADv1
'bert-base-uncased' trained on question answering with 'squad'.
Evaluation scores:
|
[
"# BERT-Base Uncased SQuADv1\r\n\r\n'bert-base-uncased' trained on question answering with 'squad'. \r\n\r\nEvalulation scores:"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BERT-Base Uncased SQuADv1\r\n\r\n'bert-base-uncased' trained on question answering with 'squad'. \r\n\r\nEvalulation scores:"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-multiwoz
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0064
- Acc: 1.0
- True Num: 56671
- Num: 56776
## Model description
More information needed
## Intended uses & limitations
More information needed
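As a generic seq2seq sketch (not part of the original card): the exact MultiWOZ-style input serialization expected by this checkpoint is not documented here, so the prompt below is a placeholder. The companion t5-large-slots checkpoint described later in this file can be loaded the same way.
```python
# Hedged generation sketch; the input format is an assumption, not documented in the card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jinlmsft/t5-large-multiwoz")
model = AutoModelForSeq2SeqLM.from_pretrained("jinlmsft/t5-large-multiwoz")

inputs = tokenizer("user: I need a cheap hotel in the centre of town.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```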
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | True Num | Num |
|:-------------:|:-----:|:----:|:---------------:|:----:|:--------:|:-----:|
| 0.1261 | 1.13 | 1000 | 0.0933 | 0.98 | 55574 | 56776 |
| 0.0951 | 2.25 | 2000 | 0.0655 | 0.98 | 55867 | 56776 |
| 0.0774 | 3.38 | 3000 | 0.0480 | 0.99 | 56047 | 56776 |
| 0.0584 | 4.51 | 4000 | 0.0334 | 0.99 | 56252 | 56776 |
| 0.042 | 5.64 | 5000 | 0.0222 | 0.99 | 56411 | 56776 |
| 0.0329 | 6.76 | 6000 | 0.0139 | 1.0 | 56502 | 56776 |
| 0.0254 | 7.89 | 7000 | 0.0094 | 1.0 | 56626 | 56776 |
| 0.0214 | 9.02 | 8000 | 0.0070 | 1.0 | 56659 | 56776 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-large-multiwoz", "results": []}]}
|
jinlmsft/t5-large-multiwoz
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-large-multiwoz
=================
This model is a fine-tuned version of t5-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0064
* Acc: 1.0
* True Num: 56671
* Num: 56776
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-slots
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0889
- Acc: 0.76
- True Num: 11167
- Num: 14748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | True Num | Num |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:--------:|:-----:|
| 0.3539 | 0.56 | 1000 | 0.2669 | 0.56 | 8264 | 14748 |
| 0.2523 | 1.13 | 2000 | 0.2031 | 0.56 | 8317 | 14748 |
| 0.2003 | 1.69 | 3000 | 0.1498 | 0.58 | 8496 | 14748 |
| 0.1609 | 2.25 | 4000 | 0.1284 | 0.58 | 8612 | 14748 |
| 0.1431 | 2.82 | 5000 | 0.1119 | 0.59 | 8675 | 14748 |
| 0.1236 | 3.38 | 6000 | 0.1054 | 0.59 | 8737 | 14748 |
| 0.1172 | 3.95 | 7000 | 0.0981 | 0.59 | 8773 | 14748 |
| 0.1027 | 4.51 | 8000 | 0.0955 | 0.6 | 8787 | 14748 |
| 0.0968 | 5.07 | 9000 | 0.0931 | 0.6 | 8807 | 14748 |
| 0.0911 | 5.64 | 10000 | 0.0895 | 0.6 | 8787 | 14748 |
| 0.0852 | 6.2 | 11000 | 0.0912 | 0.6 | 8840 | 14748 |
| 0.0823 | 6.76 | 12000 | 0.0880 | 0.6 | 8846 | 14748 |
| 0.0768 | 7.33 | 13000 | 0.0915 | 0.6 | 8879 | 14748 |
| 0.0758 | 7.89 | 14000 | 0.0892 | 0.6 | 8853 | 14748 |
| 0.0708 | 8.46 | 15000 | 0.0885 | 0.6 | 8884 | 14748 |
| 0.0701 | 9.02 | 16000 | 0.0884 | 0.6 | 8915 | 14748 |
| 0.0685 | 9.58 | 17000 | 0.0884 | 0.6 | 8921 | 14748 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-large-slots", "results": []}]}
|
jinlmsft/t5-large-slots
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-large-slots
==============
This model is a fine-tuned version of t5-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0889
* Acc: 0.76
* True Num: 11167
* Num: 14748
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
null |
transformers
|
# DALL-E-Tokenizer
Huggingface package for the discrete VAE used for [DALL-E](https://github.com/openai/DALL-E).
# How to use
```python
# from dall_e_tok import DallEEncoder
from dall_e_tok import DALLETokenizer
tokenizer = DALLETokenizer.from_pretrained("jinmang2/dall-e-tokenizer")
```
|
{}
|
jinmang2/dall-e-tokenizer
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
# DALL-E-Tokenizer
Huggingface package for the discrete VAE used for DALL-E.
# How to use
|
[
"# DALL-E-Tokenizer\n\nHuggingface package for the discrete VAE usded for DALL-E.",
"# How to use"
] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n",
"# DALL-E-Tokenizer\n\nHuggingface package for the discrete VAE usded for DALL-E.",
"# How to use"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-TPU-cv-fine-tune
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6987
- Wer: 0.6019
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1017 | 8.88 | 400 | 1.4635 | 0.7084 |
| 0.436 | 17.77 | 800 | 1.4765 | 0.6231 |
| 0.1339 | 26.66 | 1200 | 1.6987 | 0.6019 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-TPU-cv-fine-tune", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-TPU-cv-fine-tune
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6987
* Wer: 0.6019
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-10
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-9](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-9) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9567
- Wer: 0.3292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2892 | 1.62 | 1000 | 0.5745 | 0.3467 |
| 0.235 | 3.23 | 2000 | 0.6156 | 0.3423 |
| 0.1782 | 4.85 | 3000 | 0.6299 | 0.3484 |
| 0.1504 | 6.46 | 4000 | 0.6475 | 0.3446 |
| 0.133 | 8.08 | 5000 | 0.6753 | 0.3381 |
| 0.115 | 9.69 | 6000 | 0.7834 | 0.3529 |
| 0.101 | 11.31 | 7000 | 0.7924 | 0.3426 |
| 0.0926 | 12.92 | 8000 | 0.7887 | 0.3465 |
| 0.0863 | 14.54 | 9000 | 0.7674 | 0.3439 |
| 0.0788 | 16.16 | 10000 | 0.8648 | 0.3435 |
| 0.0728 | 17.77 | 11000 | 0.8460 | 0.3395 |
| 0.0693 | 19.39 | 12000 | 0.8941 | 0.3451 |
| 0.0637 | 21.0 | 13000 | 0.9079 | 0.3356 |
| 0.0584 | 22.62 | 14000 | 0.8851 | 0.3336 |
| 0.055 | 24.23 | 15000 | 0.9400 | 0.3338 |
| 0.0536 | 25.85 | 16000 | 0.9387 | 0.3335 |
| 0.0481 | 27.46 | 17000 | 0.9664 | 0.3337 |
| 0.0485 | 29.08 | 18000 | 0.9567 | 0.3292 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-10", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-10
===========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-9 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9567
* Wer: 0.3292
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-11.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-10](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-10) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0173
- Wer: 0.3350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2788 | 1.52 | 1000 | 0.5776 | 0.3410 |
| 0.2277 | 3.04 | 2000 | 0.6148 | 0.3465 |
| 0.1772 | 4.56 | 3000 | 0.6497 | 0.3497 |
| 0.1528 | 6.08 | 4000 | 0.6786 | 0.3430 |
| 0.1285 | 7.6 | 5000 | 0.6779 | 0.3489 |
| 0.1104 | 9.12 | 6000 | 0.7417 | 0.3528 |
| 0.0965 | 10.64 | 7000 | 0.7956 | 0.3477 |
| 0.0914 | 12.16 | 8000 | 0.7994 | 0.3570 |
| 0.082 | 13.68 | 9000 | 0.8690 | 0.3510 |
| 0.0788 | 15.2 | 10000 | 0.8569 | 0.3526 |
| 0.0727 | 16.72 | 11000 | 0.8885 | 0.3440 |
| 0.0656 | 18.24 | 12000 | 0.9586 | 0.3476 |
| 0.0608 | 19.76 | 13000 | 0.9317 | 0.3495 |
| 0.0588 | 21.28 | 14000 | 0.9809 | 0.3449 |
| 0.0547 | 22.8 | 15000 | 0.9552 | 0.3421 |
| 0.0519 | 24.32 | 16000 | 0.9782 | 0.3380 |
| 0.0474 | 25.84 | 17000 | 0.9923 | 0.3386 |
| 0.046 | 27.36 | 18000 | 0.9984 | 0.3347 |
| 0.045 | 28.88 | 19000 | 1.0173 | 0.3350 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-11.1", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-11.1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-11.1
=============================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-10 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0173
* Wer: 0.3350
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-12
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-11.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-11.1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0795
- Wer: 0.3452
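The loss and WER reported above come from the Trainer's evaluation loop; the snippet below is a small, self-contained sketch of how WER can be computed for arbitrary prediction/reference pairs using the `datasets` metric (an assumption; it requires the `jiwer` package, and any WER implementation would do).

```python
# Hedged sketch: compute WER for hypothetical predictions vs. references.
from datasets import load_metric

wer_metric = load_metric("wer")  # backed by jiwer
predictions = ["the cat sat on the mat"]   # hypothetical model transcripts
references = ["the cat sat on a mat"]      # hypothetical ground-truth transcripts
print(wer_metric.compute(predictions=predictions, references=references))
```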
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2793 | 1.64 | 1000 | 0.5692 | 0.3518 |
| 0.2206 | 3.28 | 2000 | 0.6127 | 0.3460 |
| 0.1733 | 4.93 | 3000 | 0.6622 | 0.3580 |
| 0.1391 | 6.57 | 4000 | 0.6768 | 0.3519 |
| 0.1193 | 8.21 | 5000 | 0.7559 | 0.3540 |
| 0.1053 | 9.85 | 6000 | 0.7873 | 0.3562 |
| 0.093 | 11.49 | 7000 | 0.8170 | 0.3612 |
| 0.0833 | 13.14 | 8000 | 0.8682 | 0.3579 |
| 0.0753 | 14.78 | 9000 | 0.8317 | 0.3573 |
| 0.0698 | 16.42 | 10000 | 0.9213 | 0.3525 |
| 0.0623 | 18.06 | 11000 | 0.9746 | 0.3531 |
| 0.0594 | 19.7 | 12000 | 1.0027 | 0.3502 |
| 0.0538 | 21.35 | 13000 | 1.0045 | 0.3545 |
| 0.0504 | 22.99 | 14000 | 0.9821 | 0.3523 |
| 0.0461 | 24.63 | 15000 | 1.0818 | 0.3462 |
| 0.0439 | 26.27 | 16000 | 1.0995 | 0.3495 |
| 0.0421 | 27.91 | 17000 | 1.0533 | 0.3430 |
| 0.0415 | 29.56 | 18000 | 1.0795 | 0.3452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-12", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-12
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-12
===========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-11.1 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0795
* Wer: 0.3452
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-13
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-12](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-12) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1804
- Wer: 0.3809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2688 | 1.92 | 1000 | 0.6518 | 0.3692 |
| 0.1944 | 3.85 | 2000 | 0.7188 | 0.3808 |
| 0.1503 | 5.77 | 3000 | 0.7552 | 0.3853 |
| 0.1218 | 7.69 | 4000 | 0.8155 | 0.3834 |
| 0.1024 | 9.62 | 5000 | 0.8867 | 0.3779 |
| 0.0874 | 11.54 | 6000 | 0.8917 | 0.3866 |
| 0.0775 | 13.46 | 7000 | 1.0320 | 0.4019 |
| 0.0712 | 15.38 | 8000 | 1.0110 | 0.3922 |
| 0.0656 | 17.31 | 9000 | 1.0494 | 0.3885 |
| 0.0578 | 19.23 | 10000 | 1.1054 | 0.3883 |
| 0.053 | 21.15 | 11000 | 1.1285 | 0.3938 |
| 0.0496 | 23.08 | 12000 | 1.1358 | 0.3884 |
| 0.0459 | 25.0 | 13000 | 1.2062 | 0.3904 |
| 0.0445 | 26.92 | 14000 | 1.1811 | 0.3830 |
| 0.0414 | 28.85 | 15000 | 1.1804 | 0.3809 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-13", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-13
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-13
===========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-12 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1804
* Wer: 0.3809
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-14
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-13](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-13) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2822
- Wer: 0.4068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1996 | 1.59 | 1000 | 0.7181 | 0.4079 |
| 0.1543 | 3.17 | 2000 | 0.7735 | 0.4113 |
| 0.1171 | 4.76 | 3000 | 0.8152 | 0.4045 |
| 0.0969 | 6.35 | 4000 | 0.8575 | 0.4142 |
| 0.082 | 7.94 | 5000 | 0.9005 | 0.4124 |
| 0.074 | 9.52 | 6000 | 0.9232 | 0.4151 |
| 0.0653 | 11.11 | 7000 | 0.9680 | 0.4223 |
| 0.0587 | 12.7 | 8000 | 1.0633 | 0.4232 |
| 0.0551 | 14.29 | 9000 | 1.0875 | 0.4171 |
| 0.0498 | 15.87 | 10000 | 1.0281 | 0.4105 |
| 0.0443 | 17.46 | 11000 | 1.2164 | 0.4274 |
| 0.0421 | 19.05 | 12000 | 1.1868 | 0.4191 |
| 0.0366 | 20.63 | 13000 | 1.1678 | 0.4173 |
| 0.0366 | 22.22 | 14000 | 1.2444 | 0.4187 |
| 0.0346 | 23.81 | 15000 | 1.2042 | 0.4169 |
| 0.0316 | 25.4 | 16000 | 1.3019 | 0.4127 |
| 0.0296 | 26.98 | 17000 | 1.2001 | 0.4081 |
| 0.0281 | 28.57 | 18000 | 1.2822 | 0.4068 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-14", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-14
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-14
===========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-13 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2822
* Wer: 0.4068
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-TPU-cv-fine-tune-2
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-TPU-cv-fine-tune](https://huggingface.co/jiobiala24/wav2vec2-base-TPU-cv-fine-tune) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6051
- Wer: 0.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
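For reference, the list above maps onto `transformers.TrainingArguments` roughly as in the sketch below; this is an illustration rather than the exact training script, and `output_dir` is a placeholder.

```python
# Approximate TrainingArguments mirroring the hyperparameters listed above (illustrative only).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-TPU-cv-fine-tune-2",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # gives an effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
)
```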
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.522 | 6.45 | 400 | 1.2550 | 0.5649 |
| 0.2874 | 12.9 | 800 | 1.4235 | 0.6054 |
| 0.152 | 19.35 | 1200 | 1.5743 | 0.5806 |
| 0.0857 | 25.8 | 1600 | 1.6051 | 0.5484 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-TPU-cv-fine-tune-2", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-TPU-cv-fine-tune-2
================================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-TPU-cv-fine-tune on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6051
* Wer: 0.5484
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-3
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-2](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-2) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7007
- Wer: 0.5514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.358 | 14.8 | 400 | 1.4841 | 0.5338 |
| 0.1296 | 29.62 | 800 | 1.7007 | 0.5514 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-3", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-3
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-3
==========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-2 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7007
* Wer: 0.5514
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-4
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-3](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-3) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-4", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-4
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-checkpoint-4
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-3 on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-base-checkpoint-4\n\nThis model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-3 on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-checkpoint-4\n\nThis model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-3 on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-5
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-4](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-4) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9849
- Wer: 0.3354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3947 | 1.96 | 1000 | 0.5749 | 0.3597 |
| 0.2856 | 3.93 | 2000 | 0.6212 | 0.3479 |
| 0.221 | 5.89 | 3000 | 0.6280 | 0.3502 |
| 0.1755 | 7.86 | 4000 | 0.6517 | 0.3526 |
| 0.1452 | 9.82 | 5000 | 0.7115 | 0.3481 |
| 0.1256 | 11.79 | 6000 | 0.7687 | 0.3509 |
| 0.1117 | 13.75 | 7000 | 0.7785 | 0.3490 |
| 0.0983 | 15.72 | 8000 | 0.8115 | 0.3442 |
| 0.0877 | 17.68 | 9000 | 0.8290 | 0.3429 |
| 0.0799 | 19.65 | 10000 | 0.8517 | 0.3412 |
| 0.0733 | 21.61 | 11000 | 0.9370 | 0.3448 |
| 0.066 | 23.58 | 12000 | 0.9157 | 0.3410 |
| 0.0623 | 25.54 | 13000 | 0.9673 | 0.3377 |
| 0.0583 | 27.5 | 14000 | 0.9804 | 0.3348 |
| 0.0544 | 29.47 | 15000 | 0.9849 | 0.3354 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-5", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-5
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-5
==========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-4 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9849
* Wer: 0.3354
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-6
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-5](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-5) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9738
- Wer: 0.3323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3435 | 1.82 | 1000 | 0.5637 | 0.3419 |
| 0.2599 | 3.65 | 2000 | 0.5804 | 0.3473 |
| 0.2043 | 5.47 | 3000 | 0.6481 | 0.3474 |
| 0.1651 | 7.3 | 4000 | 0.6937 | 0.3452 |
| 0.1376 | 9.12 | 5000 | 0.7221 | 0.3429 |
| 0.118 | 10.95 | 6000 | 0.7634 | 0.3441 |
| 0.105 | 12.77 | 7000 | 0.7789 | 0.3444 |
| 0.0925 | 14.6 | 8000 | 0.8209 | 0.3444 |
| 0.0863 | 16.42 | 9000 | 0.8293 | 0.3440 |
| 0.0756 | 18.25 | 10000 | 0.8553 | 0.3412 |
| 0.0718 | 20.07 | 11000 | 0.9006 | 0.3430 |
| 0.0654 | 21.9 | 12000 | 0.9541 | 0.3458 |
| 0.0605 | 23.72 | 13000 | 0.9400 | 0.3350 |
| 0.0552 | 25.55 | 14000 | 0.9547 | 0.3363 |
| 0.0543 | 27.37 | 15000 | 0.9715 | 0.3348 |
| 0.0493 | 29.2 | 16000 | 0.9738 | 0.3323 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-6", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-6
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-6
==========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-5 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9738
* Wer: 0.3323
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-7.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-6](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-6) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9369
- Wer: 0.3243
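Besides the high-level pipeline, the checkpoint can also be used at a lower level with the processor and CTC head; the sketch below is illustrative only (`sample.wav` is a placeholder for a 16 kHz mono recording, and loading it with `soundfile` is an assumption).

```python
# Lower-level inference sketch: greedy CTC decoding of a single utterance.
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("jiobiala24/wav2vec2-base-checkpoint-7.1")
model = Wav2Vec2ForCTC.from_pretrained("jiobiala24/wav2vec2-base-checkpoint-7.1")

speech, sampling_rate = sf.read("sample.wav")  # assumed 16 kHz mono
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```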
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3124 | 1.75 | 1000 | 0.5602 | 0.3403 |
| 0.2428 | 3.5 | 2000 | 0.5924 | 0.3431 |
| 0.1884 | 5.24 | 3000 | 0.6161 | 0.3423 |
| 0.1557 | 6.99 | 4000 | 0.6570 | 0.3415 |
| 0.1298 | 8.74 | 5000 | 0.6837 | 0.3446 |
| 0.1141 | 10.49 | 6000 | 0.7304 | 0.3396 |
| 0.1031 | 12.24 | 7000 | 0.7264 | 0.3410 |
| 0.0916 | 13.99 | 8000 | 0.7229 | 0.3387 |
| 0.0835 | 15.73 | 9000 | 0.8078 | 0.3458 |
| 0.0761 | 17.48 | 10000 | 0.8304 | 0.3408 |
| 0.0693 | 19.23 | 11000 | 0.8290 | 0.3387 |
| 0.0646 | 20.98 | 12000 | 0.8593 | 0.3372 |
| 0.0605 | 22.73 | 13000 | 0.8728 | 0.3345 |
| 0.0576 | 24.48 | 14000 | 0.9111 | 0.3297 |
| 0.0529 | 26.22 | 15000 | 0.9247 | 0.3273 |
| 0.0492 | 27.97 | 16000 | 0.9248 | 0.3250 |
| 0.0472 | 29.72 | 17000 | 0.9369 | 0.3243 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-7.1", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-7.1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-7.1
============================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-6 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9369
* Wer: 0.3243
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-8
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-7.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-7.1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9561
- Wer: 0.3271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3117 | 1.59 | 1000 | 0.5514 | 0.3451 |
| 0.2509 | 3.19 | 2000 | 0.5912 | 0.3328 |
| 0.1918 | 4.78 | 3000 | 0.6103 | 0.3346 |
| 0.1612 | 6.38 | 4000 | 0.6469 | 0.3377 |
| 0.1388 | 7.97 | 5000 | 0.6597 | 0.3391 |
| 0.121 | 9.57 | 6000 | 0.6911 | 0.3472 |
| 0.1096 | 11.16 | 7000 | 0.7300 | 0.3457 |
| 0.0959 | 12.76 | 8000 | 0.7660 | 0.3400 |
| 0.0882 | 14.35 | 9000 | 0.8316 | 0.3394 |
| 0.0816 | 15.95 | 10000 | 0.8042 | 0.3357 |
| 0.0739 | 17.54 | 11000 | 0.8087 | 0.3346 |
| 0.0717 | 19.14 | 12000 | 0.8590 | 0.3353 |
| 0.066 | 20.73 | 13000 | 0.8750 | 0.3336 |
| 0.0629 | 22.33 | 14000 | 0.8759 | 0.3333 |
| 0.0568 | 23.92 | 15000 | 0.8963 | 0.3321 |
| 0.0535 | 25.52 | 16000 | 0.9391 | 0.3323 |
| 0.0509 | 27.11 | 17000 | 0.9279 | 0.3296 |
| 0.0498 | 28.71 | 18000 | 0.9561 | 0.3271 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-8", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-8
==========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-7.1 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9561
* Wer: 0.3271
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-9
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-8](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9203
- Wer: 0.3258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2783 | 1.58 | 1000 | 0.5610 | 0.3359 |
| 0.2251 | 3.16 | 2000 | 0.5941 | 0.3374 |
| 0.173 | 4.74 | 3000 | 0.6026 | 0.3472 |
| 0.1475 | 6.32 | 4000 | 0.6750 | 0.3482 |
| 0.1246 | 7.9 | 5000 | 0.6673 | 0.3414 |
| 0.1081 | 9.48 | 6000 | 0.7072 | 0.3409 |
| 0.1006 | 11.06 | 7000 | 0.7413 | 0.3392 |
| 0.0879 | 12.64 | 8000 | 0.7831 | 0.3394 |
| 0.0821 | 14.22 | 9000 | 0.7371 | 0.3333 |
| 0.0751 | 15.8 | 10000 | 0.8321 | 0.3445 |
| 0.0671 | 17.38 | 11000 | 0.8362 | 0.3357 |
| 0.0646 | 18.96 | 12000 | 0.8709 | 0.3367 |
| 0.0595 | 20.54 | 13000 | 0.8352 | 0.3321 |
| 0.0564 | 22.12 | 14000 | 0.8854 | 0.3323 |
| 0.052 | 23.7 | 15000 | 0.9031 | 0.3315 |
| 0.0485 | 25.28 | 16000 | 0.9171 | 0.3278 |
| 0.046 | 26.86 | 17000 | 0.9390 | 0.3254 |
| 0.0438 | 28.44 | 18000 | 0.9203 | 0.3258 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-base-checkpoint-9", "results": []}]}
|
jiobiala24/wav2vec2-base-checkpoint-9
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-checkpoint-9
==========================
This model is a fine-tuned version of jiobiala24/wav2vec2-base-checkpoint-8 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9203
* Wer: 0.3258
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# BERT multilingual base model (cased)
Pretrained model on the top 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case sensitive: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] Hello I'm a model model. [SEP]",
'score': 0.10182085633277893,
'token': 13192,
'token_str': 'model'},
{'sequence': "[CLS] Hello I'm a world model. [SEP]",
'score': 0.052126359194517136,
'token': 11356,
'token_str': 'world'},
{'sequence': "[CLS] Hello I'm a data model. [SEP]",
'score': 0.048930276185274124,
'token': 11165,
'token_str': 'data'},
{'sequence': "[CLS] Hello I'm a flight model. [SEP]",
'score': 0.02036019042134285,
'token': 23578,
'token_str': 'flight'},
{'sequence': "[CLS] Hello I'm a business model. [SEP]",
'score': 0.020079681649804115,
'token': 14155,
'token_str': 'business'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = TFBertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a shared vocabulary size of 110,000 (no lowercasing, since this is the cased model). The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't use spaces, spaces are added around every character in the CJK Unicode range.
The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
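As a rough illustration of this 80% / 10% / 10% scheme, here is a toy sketch on whitespace tokens (not the actual WordPiece preprocessing code):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy version of the BERT masking scheme described above."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # the model has to predict this token
            r = random.random()
            if r < 0.8:
                inputs.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                inputs.append(tok)                   # 10%: keep the original token
        else:
            labels.append(None)                      # not masked, nothing to predict
            inputs.append(tok)
    return inputs, labels

print(mask_tokens("Hello I am a multilingual model".split(), vocab=["world", "data", "model"]))
```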
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": "multilingual", "license": "apache-2.0", "datasets": ["wikipedia"]}
|
jirmauritz/bert-multilingual-emoji
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT multilingual base model (cased)
Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.
It was introduced in this paper and first released in
this repository. This model is case sensitive: it makes a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the languages in the training set that can then be used to
extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
standard classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Training data
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
here.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece with a shared vocabulary size of 110,000 (no lowercasing, since this is the cased model). The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that don't use spaces, spaces are added around every character in the CJK Unicode range.
The inputs of the model are then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### BibTeX entry and citation info
|
[
"# BERT multilingual base model (cased)\n\nPretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is case sensitive: it makes a difference\nbetween english and English.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the languages in the training set that can then be used to\nextract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a\nstandard classifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list\nhere.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a\nlarger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,\nJapanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. \n\nThe inputs of the model are then of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT multilingual base model (cased)\n\nPretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective.\nIt was introduced in this paper and first released in\nthis repository. This model is case sensitive: it makes a difference\nbetween english and English.\n\nDisclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by\nthe Hugging Face team.",
"## Model description\n\nBERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n\nThis way, the model learns an inner representation of the languages in the training set that can then be used to\nextract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a\nstandard classifier using the features produced by the BERT model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Training data\n\nThe BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list\nhere.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a shared vocabulary size of 110,000. The languages with a\nlarger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,\nJapanese Kanji and Korean Hanja that don't have space, a CJK Unicode block is added around every character. \n\nThe inputs of the model are then of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
<p align="center">
<img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%">
</p>
# RobBERT: Dutch RoBERTa-based Language Model.
[RobBERT](https://github.com/iPieter/RobBERT) is the state-of-the-art Dutch BERT model. It is a large pre-trained general Dutch language model that can be fine-tuned on a given dataset to perform any text classification, regression or token-tagging task. As such, it has been successfully used by many [researchers](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=7180110604335112086) and [practitioners](https://huggingface.co/models?search=robbert) for achieving state-of-the-art performance for a wide range of Dutch natural language processing tasks, including:
- [Emotion detection](https://www.aclweb.org/anthology/2021.wassa-1.27/)
- Sentiment analysis ([book reviews](https://arxiv.org/pdf/2001.06286.pdf), [news articles](https://biblio.ugent.be/publication/8704637/file/8704638.pdf)*)
- [Coreference resolution](https://arxiv.org/pdf/2001.06286.pdf)
- Named entity recognition ([CoNLL](https://arxiv.org/pdf/2001.06286.pdf), [job titles](https://arxiv.org/pdf/2004.02814.pdf)*, [SoNaR](https://github.com/proycon/deepfrog))
- Part-of-speech tagging ([Small UD Lassy](https://arxiv.org/pdf/2001.06286.pdf), [CGN](https://github.com/proycon/deepfrog))
- [Zero-shot word prediction](https://arxiv.org/pdf/2001.06286.pdf)
- [Humor detection](https://arxiv.org/pdf/2010.13652.pdf)
- [Cyberbullying detection](https://www.cambridge.org/core/journals/natural-language-engineering/article/abs/automatic-classification-of-participant-roles-in-cyberbullying-can-we-detect-victims-bullies-and-bystanders-in-social-media-text/A2079C2C738C29428E666810B8903342)
- [Correcting dt-spelling mistakes](https://gitlab.com/spelfouten/dutch-simpletransformers/)*
and also achieved outstanding, near state-of-the-art results for:
- [Natural language inference](https://arxiv.org/pdf/2101.05716.pdf)*
- [Review classification](https://medium.com/broadhorizon-cmotions/nlp-with-r-part-5-state-of-the-art-in-nlp-transformers-bert-3449e3cd7494)*
\\* *Note that several evaluations use RobBERT-v1, and that the second and improved RobBERT-v2 outperforms this first model on everything we tested*
*(Also note that this list is not exhaustive. If you used RobBERT for your application, we are happy to know about it! Send us a mail, or add it yourself to this list by sending a pull request with the edit!)*
More in-depth information about RobBERT can be found in our [blog post](https://people.cs.kuleuven.be/~pieter.delobelle/robbert/), [our paper](https://arxiv.org/abs/2001.06286) and [the RobBERT Github repository](https://github.com/iPieter/RobBERT)
## How to use
RobBERT uses the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and pre-training but with a Dutch tokenizer and training data. RoBERTa is the robustly optimized English BERT model, making it even more powerful than the original BERT model. Given this same architecture, RobBERT can easily be finetuned and inferenced using [code to finetune RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) models and most code used for BERT models, e.g. as provided by [HuggingFace Transformers](https://huggingface.co/transformers/) library.
By default, RobBERT has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on [RobBERT's Hosted inference API of Huggingface](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=De+hoofdstad+van+Belgi%C3%AB+is+%3Cmask%3E.). You can also create a new prediction head for your own task by using any of HuggingFace's [RoBERTa-runners](https://huggingface.co/transformers/v2.7.0/examples.html#language-model-training), [their fine-tuning notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) by changing the model name to `pdelobelle/robbert-v2-dutch-base`, or use the original fairseq [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) training regimes.
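As a minimal sketch of that zero-shot mask filling (the example sentence is the one from the hosted inference API link above):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='pdelobelle/robbert-v2-dutch-base')
print(unmasker("De hoofdstad van België is <mask>."))
```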
Use the following code to download the base model and finetune it yourself, or use one of our finetuned models (documented on [our project site](https://people.cs.kuleuven.be/~pieter.delobelle/robbert/)).
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = RobertaForSequenceClassification.from_pretrained("pdelobelle/robbert-v2-dutch-base")
```
Starting with `transformers v2.4.0` (or installing from source), you can use AutoTokenizer and AutoModel.
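For example, the same checkpoint loaded through the Auto classes (a minimal sketch):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = AutoModel.from_pretrained("pdelobelle/robbert-v2-dutch-base")
```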
You can then use most of [HuggingFace's BERT-based notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) for finetuning RobBERT on your type of Dutch language dataset.
## Technical Details From The Paper
### Our Performance Evaluation Results
All experiments are described in more detail in our [paper](https://arxiv.org/abs/2001.06286), with the code in [our GitHub repository](https://github.com/iPieter/RobBERT).
### Sentiment analysis
Predicting whether a review is positive or negative using the [Dutch Book Reviews Dataset](https://github.com/benjaminvdb/110kDBRD).
| Model | Accuracy [%] |
|-------------------|--------------------------|
| ULMFiT | 93.8 |
| BERTje | 93.0 |
| RobBERT v2 | **95.1** |
### Die/Dat (coreference resolution)
We measured how well the models are able to do coreference resolution by predicting whether "die" or "dat" should be filled into a sentence.
For this, we used the [EuroParl corpus](https://www.statmt.org/europarl/).
#### Finetuning on whole dataset
| Model | Accuracy [%] | F1 [%] |
|-------------------|--------------------------|--------------|
| [Baseline](https://arxiv.org/abs/2001.02943) (LSTM) | | 75.03 |
| mBERT | 98.285 | 98.033 |
| BERTje | 98.268 | 98.014 |
| RobBERT v2 | **99.232** | **99.121** |
#### Finetuning on 10K examples
We also measured the performance using only 10K training examples.
This experiment clearly illustrates that RobBERT outperforms other models when there is little data available.
| Model | Accuracy [%] | F1 [%] |
|-------------------|--------------------------|--------------|
| mBERT | 92.157 | 90.898 |
| BERTje | 93.096 | 91.279 |
| RobBERT v2 | **97.816** | **97.514** |
#### Using zero-shot word masking task
Since BERT models are pre-trained using the word masking task, we can use this to predict whether "die" or "dat" is more likely.
This experiment shows that RobBERT has internalised more information about Dutch than other models.
| Model | Accuracy [%] |
|-------------------|--------------------------|
| ZeroR | 66.70 |
| mBERT | 90.21 |
| BERTje | 94.94 |
| RobBERT v2 | **98.75** |
### Part-of-Speech Tagging.
Using the [Lassy UD dataset](https://universaldependencies.org/treebanks/nl_lassysmall/index.html).
| Model | Accuracy [%] |
|-------------------|--------------------------|
| Frog | 91.7 |
| mBERT | **96.5** |
| BERTje | 96.3 |
| RobBERT v2 | 96.4 |
Interestingly, we found that when dealing with **small data sets**, RobBERT v2 **significantly outperforms** other models.
<p align="center">
<img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_pos_accuracy.png" alt="RobBERT's performance on smaller datasets">
</p>
### Named Entity Recognition
Using the [CoNLL 2002 evaluation script](https://www.clips.uantwerpen.be/conll2002/ner/).
| Model | Accuracy [%] |
|-------------------|--------------------------|
| Frog | 57.31 |
| mBERT | **90.94** |
| BERT-NL | 89.7 |
| BERTje | 88.3 |
| RobBERT v2 | 89.08 |
## Pre-Training Procedure Details
We pre-trained RobBERT using the RoBERTa training regime.
We pre-trained our model on the Dutch section of the [OSCAR corpus](https://oscar-corpus.com/), a large multilingual corpus which was obtained by language classification in the Common Crawl corpus.
This Dutch corpus is 39GB large, with 6.6 billion words spread over 126 million lines of text, where each line could contain multiple sentences, thus using more data than concurrently developed Dutch BERT models.
RobBERT shares its architecture with [RoBERTa's base model](https://github.com/pytorch/fairseq/tree/master/examples/roberta), which itself is a replication and improvement over BERT.
Like BERT, its architecture consists of 12 self-attention layers with 12 heads and 117M trainable parameters.
One difference with the original BERT model is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task.
During pre-training, it thus only predicts which words are masked in certain positions of given sentences.
The training process uses the Adam optimizer with polynomial decay of the learning rate l_r=10^-6 and a ramp-up period of 1000 iterations, with hyperparameters beta_1=0.9
and RoBERTa's default beta_2=0.98.
Additionally, a weight decay of 0.1 and a small dropout of 0.1 helps prevent the model from overfitting.
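Expressed in PyTorch terms, these optimizer settings would look roughly as follows (illustration only; RobBERT was actually trained with the Fairseq trainer, where the polynomial decay and 1000-iteration ramp-up are handled by its learning-rate scheduler):

```python
import torch

network = torch.nn.Linear(768, 768)  # placeholder module standing in for the RoBERTa network
optimizer = torch.optim.Adam(
    network.parameters(),
    lr=1e-6,             # learning rate, decayed polynomially after a 1000-iteration ramp-up
    betas=(0.9, 0.98),   # beta_1 and RoBERTa's default beta_2
    weight_decay=0.1,    # weight decay used to prevent overfitting
)
```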
RobBERT was trained on a computing cluster with 4 Nvidia P100 GPUs per node, where the number of nodes was dynamically adjusted while keeping a fixed batch size of 8192 sentences.
At most 20 nodes were used (i.e. 80 GPUs), and the median was 5 nodes.
By using gradient accumulation, the batch size could be set independently of the number of GPUs available, in order to maximally utilize the cluster.
Using the [Fairseq library](https://github.com/pytorch/fairseq/tree/master/examples/roberta), the model trained for two epochs, which equals over 16k batches in total, which took about three days on the computing cluster.
In between training jobs on the computing cluster, 2 Nvidia 1080 Ti's also covered some parameter updates for RobBERT v2.
## Investigating Limitations and Bias
In the [RobBERT paper](https://arxiv.org/abs/2001.06286), we also investigated potential sources of bias in RobBERT.
We found that the zeroshot model estimates the probability of *hij* (he) to be higher than *zij* (she) for most occupations in bleached template sentences, regardless of their actual job gender ratio in reality.
<p align="center">
<img src="https://github.com/iPieter/RobBERT/raw/master/res/gender_diff.png" alt="RobBERT's performance on smaller datasets">
</p>
By augmenting the DBRB Dutch Book sentiment analysis dataset with the stated gender of the author of the review, we found that highly positive reviews written by women were generally more accurately detected by RobBERT as being positive than those written by men.
<p align="center">
<img src="https://github.com/iPieter/RobBERT/raw/master/res/dbrd.png" alt="RobBERT's performance on smaller datasets">
</p>
## How to Replicate Our Paper Experiments
Replicating our paper experiments is [described in detail on the RobBERT repository README](https://github.com/iPieter/RobBERT#how-to-replicate-our-paper-experiments).
## Name Origin of RobBERT
Most BERT-like models have the word *BERT* in their name (e.g. [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html), [ALBERT](https://arxiv.org/abs/1909.11942), [CamemBERT](https://camembert-model.fr/), and [many, many others](https://huggingface.co/models?search=bert)).
As such, we queried our newly trained model using its masked language model to name itself *\\<mask\\>bert* using [all](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Mijn+naam+is+%3Cmask%3Ebert.) [kinds](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Hallo%2C+ik+ben+%3Cmask%3Ebert.) [of](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Leuk+je+te+ontmoeten%2C+ik+heet+%3Cmask%3Ebert.) [prompts](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Niemand+weet%2C+niemand+weet%2C+dat+ik+%3Cmask%3Ebert+heet.), and it consistently called itself RobBERT.
We thought it was really quite fitting, given that RobBERT is a [*very* Dutch name](https://en.wikipedia.org/wiki/Robbert) *(and thus clearly a Dutch language model)*, and additionally has a high similarity to its root architecture, namely [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html).
Since *"rob"* is a Dutch words to denote a seal, we decided to draw a seal and dress it up like [Bert from Sesame Street](https://muppet.fandom.com/wiki/Bert) for the [RobBERT logo](https://github.com/iPieter/RobBERT/blob/master/res/robbert_logo.png).
## Credits and citation
This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/).
If you would like to cite our paper or model, you can use the following BibTeX:
```
@inproceedings{delobelle2020robbert,
title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel",
author = "Delobelle, Pieter and
Winters, Thomas and
Berendt, Bettina",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292",
doi = "10.18653/v1/2020.findings-emnlp.292",
pages = "3255--3265"
}
```
|
{"language": "nl", "license": "mit", "tags": ["Dutch", "Flemish", "RoBERTa", "RobBERT"], "datasets": ["oscar", "oscar (NL)", "dbrd", "lassy-ud", "europarl-mono", "conll2002"], "thumbnail": "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png", "widget": [{"text": "Hallo, ik ben RobBERT, een <mask> taalmodel van de KU Leuven."}]}
|
jirmauritz/robbert-v2-dutch-base
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"nl",
"arxiv:2001.06286",
"arxiv:2004.02814",
"arxiv:2010.13652",
"arxiv:2101.05716",
"arxiv:1907.11692",
"arxiv:2001.02943",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2001.06286",
"2004.02814",
"2010.13652",
"2101.05716",
"1907.11692",
"2001.02943",
"1909.11942"
] |
[
"nl"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #Dutch #Flemish #RoBERTa #RobBERT #nl #arxiv-2001.06286 #arxiv-2004.02814 #arxiv-2010.13652 #arxiv-2101.05716 #arxiv-1907.11692 #arxiv-2001.02943 #arxiv-1909.11942 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|

RobBERT: Dutch RoBERTa-based Language Model.
============================================
RobBERT is the state-of-the-art Dutch BERT model. It is a large pre-trained general Dutch language model that can be fine-tuned on a given dataset to perform any text classification, regression or token-tagging task. As such, it has been successfully used by many researchers and practitioners for achieving state-of-the-art performance for a wide range of Dutch natural language processing tasks, including:
* Emotion detection
* Sentiment analysis (book reviews, news articles\*)
* Coreference resolution
* Named entity recognition (CoNLL, job titles\*, SoNaR)
* Part-of-speech tagging (Small UD Lassy, CGN)
* Zero-shot word prediction
* Humor detection
* Cyberbullying detection
* Correcting dt-spelling mistakes\*
and also achieved outstanding, near state-of-the-art results for:
* Natural language inference\*
* Review classification\*
\\* *Note that several evaluations use RobBERT-v1, and that the second and improved RobBERT-v2 outperforms this first model on everything we tested*
*(Also note that this list is not exhaustive. If you used RobBERT for your application, we are happy to know about it! Send us a mail, or add it yourself to this list by sending a pull request with the edit!)*
More in-depth information about RobBERT can be found in our blog post, our paper and the RobBERT Github repository
How to use
----------
RobBERT uses the RoBERTa architecture and pre-training but with a Dutch tokenizer and training data. RoBERTa is the robustly optimized English BERT model, making it even more powerful than the original BERT model. Given this same architecture, RobBERT can easily be finetuned and inferenced using code to finetune RoBERTa models and most code used for BERT models, e.g. as provided by HuggingFace Transformers library.
By default, RobBERT has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on RobBERT's Hosted inference API of Huggingface. You can also create a new prediction head for your own task by using any of HuggingFace's RoBERTa-runners, their fine-tuning notebooks by changing the model name to 'pdelobelle/robbert-v2-dutch-base', or use the original fairseq RoBERTa training regimes.
Use the following code to download the base model and finetune it yourself, or use one of our finetuned models (documented on our project site).
Starting with 'transformers v2.4.0' (or installing from source), you can use AutoTokenizer and AutoModel.
You can then use most of HuggingFace's BERT-based notebooks for finetuning RobBERT on your type of Dutch language dataset.
Technical Details From The Paper
--------------------------------
### Our Performance Evaluation Results
All experiments are described in more detail in our paper, with the code in our GitHub repository.
### Sentiment analysis
Predicting whether a review is positive or negative using the Dutch Book Reviews Dataset.
### Die/Dat (coreference resolution)
We measured how well the models are able to do coreference resolution by predicting whether "die" or "dat" should be filled into a sentence.
For this, we used the EuroParl corpus.
#### Finetuning on whole dataset
Model: Baseline (LSTM), Accuracy [%]: , F1 [%]: 75.03
Model: mBERT, Accuracy [%]: 98.285, F1 [%]: 98.033
Model: BERTje, Accuracy [%]: 98.268, F1 [%]: 98.014
Model: RobBERT v2, Accuracy [%]: 99.232, F1 [%]: 99.121
#### Finetuning on 10K examples
We also measured the performance using only 10K training examples.
This experiment clearly illustrates that RobBERT outperforms other models when there is little data available.
Model: mBERT, Accuracy [%]: 92.157, F1 [%]: 90.898
Model: BERTje, Accuracy [%]: 93.096, F1 [%]: 91.279
Model: RobBERT v2, Accuracy [%]: 97.816, F1 [%]: 97.514
#### Using zero-shot word masking task
Since BERT models are pre-trained using the word masking task, we can use this to predict whether "die" or "dat" is more likely.
This experiment shows that RobBERT has internalised more information about Dutch than other models.
### Part-of-Speech Tagging.
Using the Lassy UD dataset.
Interestingly, we found that when dealing with small data sets, RobBERT v2 significantly outperforms other models.

### Named Entity Recognition
Using the CoNLL 2002 evaluation script.
Pre-Training Procedure Details
------------------------------
We pre-trained RobBERT using the RoBERTa training regime.
We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus.
This Dutch corpus is 39GB large, with 6.6 billion words spread over 126 million lines of text, where each line could contain multiple sentences, thus using more data than concurrently developed Dutch BERT models.
RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT.
Like BERT, its architecture consists of 12 self-attention layers with 12 heads and 117M trainable parameters.
One difference with the original BERT model is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task.
During pre-training, it thus only predicts which words are masked in certain positions of given sentences.
The training process uses the Adam optimizer with polynomial decay of the learning rate l\_r=10^-6 and a ramp-up period of 1000 iterations, with hyperparameters beta\_1=0.9
and RoBERTa's default beta\_2=0.98.
Additionally, a weight decay of 0.1 and a small dropout of 0.1 helps prevent the model from overfitting.
RobBERT was trained on a computing cluster with 4 Nvidia P100 GPUs per node, where the number of nodes was dynamically adjusted while keeping a fixed batch size of 8192 sentences.
At most 20 nodes were used (i.e. 80 GPUs), and the median was 5 nodes.
By using gradient accumulation, the batch size could be set independently of the number of GPUs available, in order to maximally utilize the cluster.
Using the Fairseq library, the model trained for two epochs, which equals over 16k batches in total, which took about three days on the computing cluster.
In between training jobs on the computing cluster, 2 Nvidia 1080 Ti's also covered some parameter updates for RobBERT v2.
Investigating Limitations and Bias
----------------------------------
In the RobBERT paper, we also investigated potential sources of bias in RobBERT.
We found that the zeroshot model estimates the probability of *hij* (he) to be higher than *zij* (she) for most occupations in bleached template sentences, regardless of their actual job gender ratio in reality.

By augmenting the DBRB Dutch Book sentiment analysis dataset with the stated gender of the author of the review, we found that highly positive reviews written by women were generally more accurately detected by RobBERT as being positive than those written by men.

How to Replicate Our Paper Experiments
--------------------------------------
Replicating our paper experiments is described in detail on the RobBERT repository README.
Name Origin of RobBERT
----------------------
Most BERT-like models have the word *BERT* in their name (e.g. RoBERTa, ALBERT, CamemBERT, and many, many others).
As such, we queried our newly trained model using its masked language model to name itself *\<mask\>bert* using all kinds of prompts, and it consistently called itself RobBERT.
We thought it was really quite fitting, given that RobBERT is a *very* Dutch name *(and thus clearly a Dutch language model)*, and additionally has a high similarity to its root architecture, namely RoBERTa.
Since *"rob"* is a Dutch words to denote a seal, we decided to draw a seal and dress it up like Bert from Sesame Street for the RobBERT logo.
Credits and citation
--------------------
This project is created by Pieter Delobelle, Thomas Winters and Bettina Berendt.
If you would like to cite our paper or model, you can use the following BibTeX:
|
[
"### Our Performance Evaluation Results\n\n\nAll experiments are described in more detail in our paper, with the code in our GitHub repository.",
"### Sentiment analysis\n\n\nPredicting whether a review is positive or negative using the Dutch Book Reviews Dataset.",
"### Die/Dat (coreference resolution)\n\n\nWe measured how well the models are able to do coreference resolution by predicting whether \"die\" or \"dat\" should be filled into a sentence.\nFor this, we used the EuroParl corpus.",
"#### Finetuning on whole dataset\n\n\nModel: Baseline (LSTM), Accuracy [%]: , F1 [%]: 75.03\nModel: mBERT, Accuracy [%]: 98.285, F1 [%]: 98.033\nModel: BERTje, Accuracy [%]: 98.268, F1 [%]: 98.014\nModel: RobBERT v2, Accuracy [%]: 99.232, F1 [%]: 99.121",
"#### Finetuning on 10K examples\n\n\nWe also measured the performance using only 10K training examples.\nThis experiment clearly illustrates that RobBERT outperforms other models when there is little data available.\n\n\nModel: mBERT, Accuracy [%]: 92.157, F1 [%]: 90.898\nModel: BERTje, Accuracy [%]: 93.096, F1 [%]: 91.279\nModel: RobBERT v2, Accuracy [%]: 97.816, F1 [%]: 97.514",
"#### Using zero-shot word masking task\n\n\nSince BERT models are pre-trained using the word masking task, we can use this to predict whether \"die\" or \"dat\" is more likely.\nThis experiment shows that RobBERT has internalised more information about Dutch than other models.",
"### Part-of-Speech Tagging.\n\n\nUsing the Lassy UD dataset.\n\n\n\nInterestingly, we found that when dealing with small data sets, RobBERT v2 significantly outperforms other models.\n\n\n\n",
"### Named Entity Recognition\n\n\nUsing the CoNLL 2002 evaluation script.\n\n\n\nPre-Training Procedure Details\n------------------------------\n\n\nWe pre-trained RobBERT using the RoBERTa training regime.\nWe pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus.\nThis Dutch corpus is 39GB large, with 6.6 billion words spread over 126 million lines of text, where each line could contain multiple sentences, thus using more data than concurrently developed Dutch BERT models.\n\n\nRobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT.\nLike BERT, it's architecture consists of 12 self-attention layers with 12 heads with 117M trainable parameters.\nOne difference with the original BERT model is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task.\nDuring pre-training, it thus only predicts which words are masked in certain positions of given sentences.\nThe training process uses the Adam optimizer with polynomial decay of the learning rate l\\_r=10^-6 and a ramp-up period of 1000 iterations, with hyperparameters beta\\_1=0.9\nand RoBERTa's default beta\\_2=0.98.\nAdditionally, a weight decay of 0.1 and a small dropout of 0.1 helps prevent the model from overfitting.\n\n\nRobBERT was trained on a computing cluster with 4 Nvidia P100 GPUs per node, where the number of nodes was dynamically adjusted while keeping a fixed batch size of 8192 sentences.\nAt most 20 nodes were used (i.e. 80 GPUs), and the median was 5 nodes.\nBy using gradient accumulation, the batch size could be set independently of the number of GPUs available, in order to maximally utilize the cluster.\nUsing the Fairseq library, the model trained for two epochs, which equals over 16k batches in total, which took about three days on the computing cluster.\nIn between training jobs on the computing cluster, 2 Nvidia 1080 Ti's also covered some parameter updates for RobBERT v2.\n\n\nInvestigating Limitations and Bias\n----------------------------------\n\n\nIn the RobBERT paper, we also investigated potential sources of bias in RobBERT.\n\n\nWe found that the zeroshot model estimates the probability of *hij* (he) to be higher than *zij* (she) for most occupations in bleached template sentences, regardless of their actual job gender ratio in reality.\n\n\n\n\n\n\n\nBy augmenting the DBRB Dutch Book sentiment analysis dataset with the stated gender of the author of the review, we found that highly positive reviews written by women were generally more accurately detected by RobBERT as being positive than those written by men.\n\n\n\n\n\n\n\nHow to Replicate Our Paper Experiments\n--------------------------------------\n\n\nReplicating our paper experiments is described in detail on teh RobBERT repository README.\n\n\nName Origin of RobBERT\n----------------------\n\n\nMost BERT-like models have the word *BERT* in their name (e.g. 
RoBERTa, ALBERT, CamemBERT, and many, many others).\nAs such, we queried our newly trained model using its masked language model to name itself *\\<mask\\>bert* using all kinds of prompts, and it consistently called itself RobBERT.\nWe thought it was really quite fitting, given that RobBERT is a *very* Dutch name *(and thus clearly a Dutch language model)*, and additionally has a high similarity to its root architecture, namely RoBERTa.\n\n\nSince *\"rob\"* is a Dutch words to denote a seal, we decided to draw a seal and dress it up like Bert from Sesame Street for the RobBERT logo.\n\n\nCredits and citation\n--------------------\n\n\nThis project is created by Pieter Delobelle, Thomas Winters and Bettina Berendt.\nIf you would like to cite our paper or model, you can use the following BibTeX:"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #Dutch #Flemish #RoBERTa #RobBERT #nl #arxiv-2001.06286 #arxiv-2004.02814 #arxiv-2010.13652 #arxiv-2101.05716 #arxiv-1907.11692 #arxiv-2001.02943 #arxiv-1909.11942 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Our Performance Evaluation Results\n\n\nAll experiments are described in more detail in our paper, with the code in our GitHub repository.",
"### Sentiment analysis\n\n\nPredicting whether a review is positive or negative using the Dutch Book Reviews Dataset.",
"### Die/Dat (coreference resolution)\n\n\nWe measured how well the models are able to do coreference resolution by predicting whether \"die\" or \"dat\" should be filled into a sentence.\nFor this, we used the EuroParl corpus.",
"#### Finetuning on whole dataset\n\n\nModel: Baseline (LSTM), Accuracy [%]: , F1 [%]: 75.03\nModel: mBERT, Accuracy [%]: 98.285, F1 [%]: 98.033\nModel: BERTje, Accuracy [%]: 98.268, F1 [%]: 98.014\nModel: RobBERT v2, Accuracy [%]: 99.232, F1 [%]: 99.121",
"#### Finetuning on 10K examples\n\n\nWe also measured the performance using only 10K training examples.\nThis experiment clearly illustrates that RobBERT outperforms other models when there is little data available.\n\n\nModel: mBERT, Accuracy [%]: 92.157, F1 [%]: 90.898\nModel: BERTje, Accuracy [%]: 93.096, F1 [%]: 91.279\nModel: RobBERT v2, Accuracy [%]: 97.816, F1 [%]: 97.514",
"#### Using zero-shot word masking task\n\n\nSince BERT models are pre-trained using the word masking task, we can use this to predict whether \"die\" or \"dat\" is more likely.\nThis experiment shows that RobBERT has internalised more information about Dutch than other models.",
"### Part-of-Speech Tagging.\n\n\nUsing the Lassy UD dataset.\n\n\n\nInterestingly, we found that when dealing with small data sets, RobBERT v2 significantly outperforms other models.\n\n\n\n",
"### Named Entity Recognition\n\n\nUsing the CoNLL 2002 evaluation script.\n\n\n\nPre-Training Procedure Details\n------------------------------\n\n\nWe pre-trained RobBERT using the RoBERTa training regime.\nWe pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus.\nThis Dutch corpus is 39GB large, with 6.6 billion words spread over 126 million lines of text, where each line could contain multiple sentences, thus using more data than concurrently developed Dutch BERT models.\n\n\nRobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT.\nLike BERT, it's architecture consists of 12 self-attention layers with 12 heads with 117M trainable parameters.\nOne difference with the original BERT model is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task.\nDuring pre-training, it thus only predicts which words are masked in certain positions of given sentences.\nThe training process uses the Adam optimizer with polynomial decay of the learning rate l\\_r=10^-6 and a ramp-up period of 1000 iterations, with hyperparameters beta\\_1=0.9\nand RoBERTa's default beta\\_2=0.98.\nAdditionally, a weight decay of 0.1 and a small dropout of 0.1 helps prevent the model from overfitting.\n\n\nRobBERT was trained on a computing cluster with 4 Nvidia P100 GPUs per node, where the number of nodes was dynamically adjusted while keeping a fixed batch size of 8192 sentences.\nAt most 20 nodes were used (i.e. 80 GPUs), and the median was 5 nodes.\nBy using gradient accumulation, the batch size could be set independently of the number of GPUs available, in order to maximally utilize the cluster.\nUsing the Fairseq library, the model trained for two epochs, which equals over 16k batches in total, which took about three days on the computing cluster.\nIn between training jobs on the computing cluster, 2 Nvidia 1080 Ti's also covered some parameter updates for RobBERT v2.\n\n\nInvestigating Limitations and Bias\n----------------------------------\n\n\nIn the RobBERT paper, we also investigated potential sources of bias in RobBERT.\n\n\nWe found that the zeroshot model estimates the probability of *hij* (he) to be higher than *zij* (she) for most occupations in bleached template sentences, regardless of their actual job gender ratio in reality.\n\n\n\n\n\n\n\nBy augmenting the DBRB Dutch Book sentiment analysis dataset with the stated gender of the author of the review, we found that highly positive reviews written by women were generally more accurately detected by RobBERT as being positive than those written by men.\n\n\n\n\n\n\n\nHow to Replicate Our Paper Experiments\n--------------------------------------\n\n\nReplicating our paper experiments is described in detail on teh RobBERT repository README.\n\n\nName Origin of RobBERT\n----------------------\n\n\nMost BERT-like models have the word *BERT* in their name (e.g. 
RoBERTa, ALBERT, CamemBERT, and many, many others).\nAs such, we queried our newly trained model using its masked language model to name itself *\\<mask\\>bert* using all kinds of prompts, and it consistently called itself RobBERT.\nWe thought it was really quite fitting, given that RobBERT is a *very* Dutch name *(and thus clearly a Dutch language model)*, and additionally has a high similarity to its root architecture, namely RoBERTa.\n\n\nSince *\"rob\"* is a Dutch words to denote a seal, we decided to draw a seal and dress it up like Bert from Sesame Street for the RobBERT logo.\n\n\nCredits and citation\n--------------------\n\n\nThis project is created by Pieter Delobelle, Thomas Winters and Bettina Berendt.\nIf you would like to cite our paper or model, you can use the following BibTeX:"
] |
fill-mask
|
transformers
|
BERT MLM
|
{}
|
jivatneet/bert-mlm-batchsize8
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
BERT MLM
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
feature-extraction
|
sentence-transformers
|
# sentence-transformers/gtr-t5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model was specifically trained for the task of semantic search.
This model was converted from the TensorFlow model [gtr-base-1](https://tfhub.dev/google/gtr/gtr-base/1) to PyTorch. When using this model, have a look at the publication: [Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899). The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-base model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/gtr-t5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
The model requires sentence-transformers version 2.2.0 or newer.
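Since the model is intended for semantic search, here is a minimal retrieval sketch (the query and documents are made up for the example):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/gtr-t5-base')

query = "How do dense retrievers work?"                       # hypothetical query
docs = [
    "Dual encoders embed queries and documents separately.",  # hypothetical corpus
    "The weather will be sunny tomorrow.",
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_emb = model.encode(docs, convert_to_tensor=True)

# Rank documents by cosine similarity to the query
scores = util.cos_sim(query_emb, doc_emb)[0]
best = int(scores.argmax())
print(docs[best], float(scores[best]))
```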
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/gtr-t5-base)
## Citing & Authors
If you find this model helpful, please cite the respective publication:
[Large Dual Encoders Are Generalizable Retrievers](https://arxiv.org/abs/2112.07899)
|
{"language": "en", "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "feature-extraction"}
|
jj-co/gtr-t5-base
| null |
[
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"arxiv:2112.07899",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2112.07899"
] |
[
"en"
] |
TAGS
#sentence-transformers #pytorch #t5 #feature-extraction #sentence-similarity #transformers #en #arxiv-2112.07899 #license-apache-2.0 #endpoints_compatible #region-us
|
# sentence-transformers/gtr-t5-base
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model was specifically trained for the task of semantic search.
This model was converted from the TensorFlow model gtr-base-1 to PyTorch. When using this model, have a look at the publication: Large Dual Encoders Are Generalizable Retrievers. The tfhub model and this PyTorch model can produce slightly different embeddings; however, when run on the same benchmarks, they produce identical results.
The model uses only the encoder from a T5-base model. The weights are stored in FP16.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
The model requires sentence-transformers version 2.2.0 or newer.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Citing & Authors
If you find this model helpful, please cite the respective publication:
Large Dual Encoders Are Generalizable Retrievers
|
[
"# sentence-transformers/gtr-t5-base\r\n\r\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model was specifically trained for the task of sematic search.\r\n\r\nThis model was converted from the Tensorflow model gtr-base-1 to PyTorch. When using this model, have a look at the publication: Large Dual Encoders Are Generalizable Retrievers. The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results.\r\n\r\nThe model uses only the encoder from a T5-base model. The weights are stored in FP16.",
"## Usage (Sentence-Transformers)\r\n\r\nUsing this model becomes easy when you have sentence-transformers installed:\r\n\r\n\r\n\r\nThen you can use the model like this:\r\n\r\n\r\n\r\nThe model requires sentence-transformers version 2.2.0 or newer.",
"## Evaluation Results\r\n\r\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Citing & Authors\r\n\r\nIf you find this model helpful, please cite the respective publication:\r\nLarge Dual Encoders Are Generalizable Retrievers"
] |
[
"TAGS\n#sentence-transformers #pytorch #t5 #feature-extraction #sentence-similarity #transformers #en #arxiv-2112.07899 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# sentence-transformers/gtr-t5-base\r\n\r\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space. The model was specifically trained for the task of sematic search.\r\n\r\nThis model was converted from the Tensorflow model gtr-base-1 to PyTorch. When using this model, have a look at the publication: Large Dual Encoders Are Generalizable Retrievers. The tfhub model and this PyTorch model can produce slightly different embeddings, however, when run on the same benchmarks, they produce identical results.\r\n\r\nThe model uses only the encoder from a T5-base model. The weights are stored in FP16.",
"## Usage (Sentence-Transformers)\r\n\r\nUsing this model becomes easy when you have sentence-transformers installed:\r\n\r\n\r\n\r\nThen you can use the model like this:\r\n\r\n\r\n\r\nThe model requires sentence-transformers version 2.2.0 or newer.",
"## Evaluation Results\r\n\r\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Citing & Authors\r\n\r\nIf you find this model helpful, please cite the respective publication:\r\nLarge Dual Encoders Are Generalizable Retrievers"
] |
image-classification
|
transformers
|
# lotr
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
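For quick checks outside the Colab demo, a minimal inference sketch using the generic transformers image-classification pipeline is shown below; `gandalf.jpg` is a hypothetical placeholder for any local character image:
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline("image-classification", model="jjhoffstein/lotr")

# Placeholder path: any local image of a LOTR character
print(classifier("gandalf.jpg"))
```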
## Example Images
#### aragorn

#### frodo

#### gandalf

#### gollum

#### legolas

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
jjhoffstein/lotr
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# lotr
Autogenerated by HuggingPics
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### aragorn
!aragorn
#### frodo
!frodo
#### gandalf
!gandalf
#### gollum
!gollum
#### legolas
!legolas
|
[
"# lotr\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### aragorn\n\n!aragorn",
"#### frodo\n\n!frodo",
"#### gandalf\n\n!gandalf",
"#### gollum\n\n!gollum",
"#### legolas\n\n!legolas"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# lotr\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### aragorn\n\n!aragorn",
"#### frodo\n\n!frodo",
"#### gandalf\n\n!gandalf",
"#### gollum\n\n!gollum",
"#### legolas\n\n!legolas"
] |
null |
keras
|
# Simple CNN-based Artist Classifier
This repo contains a simple CNN-based Keras model which classifies images into one of 10 selected artists/painters.
- The purpose of this model was quick prototyping
- Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler`
- 10 popular artists/painters were chosen:
- \[ARTIST\]: \[ID\]
- claude_monet: 0,
- henri_matisse: 1,
- jean_michel_basquiat: 2,
- keith_haring: 3,
- pablo_picasso: 4,
- pierre_augste_renoir: 5,
- rene_magritte: 6,
- roy_richtenstein: 7,
- vincent_van_gogh: 8,
- wassily_kandinsky: 9
- About 100 representative paintings per artist were crawled and manually checked
- Dataset will be shared later
# How to use
```python
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

# Download the Keras model from the Hugging Face Hub
model = from_pretrained_keras("jkang/drawing-artist-classifier")

# Read and decode an input painting as an RGB tensor
image_file = 'monet.jpg'
img = tf.io.read_file(image_file)
img = tf.io.decode_jpeg(img, channels=3)

# Add a batch dimension and run inference (returns last-layer activations and per-artist scores)
last_layer_activation, predictions = model(img[tf.newaxis,...])
```
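As a follow-up to the snippet above, the arg-max class index can be mapped back to the \[ARTIST\]: \[ID\] table; this is a sketch (not from the original card) that reuses the `predictions` tensor and the label order listed earlier:
```python
import tensorflow as tf

# Label order follows the [ARTIST]: [ID] table above (IDs 0-9)
id2artist = ["claude_monet", "henri_matisse", "jean_michel_basquiat", "keith_haring",
             "pablo_picasso", "pierre_augste_renoir", "rene_magritte",
             "roy_richtenstein", "vincent_van_gogh", "wassily_kandinsky"]

# `predictions` comes from the previous snippet: one score per artist
artist_id = int(tf.argmax(predictions, axis=-1)[0])
print(id2artist[artist_id])
```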
# Intended uses & limitations
You can use this model freely for predicting artists or trends of a given image.
Please keep in mind that this model is not intended for production, but for research and quick prototyping.
Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.
---
- 2022-01-18 first created by jaekoo kang
|
{"language": "en", "license": "mit", "datasets": ["web crawled (coming soon)"]}
|
jkang/drawing-artist-classifier
| null |
[
"keras",
"en",
"license:mit",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#keras #en #license-mit #has_space #region-us
|
# Simple CNN-based Artist Classifier
This repo contains a simple CNN-based Keras model which classifies images into one of 10 selected artists/painters.
- The purpose of this model was quick prototyping
- Data has been web-crawled using 'URL
- 10 popular artists/painters were chosen:
- \[ARTIST\]: \[ID\]
- claude_monet: 0,
- henri_matisse: 1,
- jean_michel_basquiat: 2,
- keith_haring: 3,
- pablo_picasso: 4,
- pierre_augste_renoir: 5,
- rene_magritte: 6,
- roy_richtenstein: 7,
- vincent_van_gogh: 8,
- wassily_kandinsky: 9
- About 100 representative paintings per artist were crawled and manually checked
- Dataset will be shared later
# How to use
# Intended uses & limitations
You can use this model freely for predicting artists or trends of a given image.
Please keep in mind that this model is not intended for production, but for research and quick prototyping.
Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.
---
- 2022-01-18 first created by jaekoo kang
|
[
"# Simple CNN-based Artist Classifier\n\nThis repo contains a simple CNN-based Keras model which classifies images into one of 10 selected artists/painters.\n\n- The purpose of this model was for a quick prototyping\n- Data has been web-crawled using 'URL\n- 10 popular artists/painters were chosen:\n - \\[ARTIST\\]: \\[ID\\]\n - claude_monet: 0,\n - henri_matisse: 1,\n - jean_michel_basquiat: 2,\n - keith_haring: 3,\n - pablo_picasso: 4,\n - pierre_augste_renoir: 5,\n - rene_magritte: 6,\n - roy_richtenstein: 7,\n - vincent_van_gogh: 8,\n - wassily_kandinsky: 9\n- About 100 representative paintings per artist were crawled and manually checked\n- Dataset will be shared later",
"# How to use",
"# Intended uses & limitations\nYou can use this model freely for predicting artists or trends of a given image.\nPlease keep in mind that this model is not intended for production, but for research and quick prototyping.\nWeb-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.\n---\n- 2022-01-18 first created by jaekoo kang"
] |
[
"TAGS\n#keras #en #license-mit #has_space #region-us \n",
"# Simple CNN-based Artist Classifier\n\nThis repo contains a simple CNN-based Keras model which classifies images into one of 10 selected artists/painters.\n\n- The purpose of this model was for a quick prototyping\n- Data has been web-crawled using 'URL\n- 10 popular artists/painters were chosen:\n - \\[ARTIST\\]: \\[ID\\]\n - claude_monet: 0,\n - henri_matisse: 1,\n - jean_michel_basquiat: 2,\n - keith_haring: 3,\n - pablo_picasso: 4,\n - pierre_augste_renoir: 5,\n - rene_magritte: 6,\n - roy_richtenstein: 7,\n - vincent_van_gogh: 8,\n - wassily_kandinsky: 9\n- About 100 representative paintings per artist were crawled and manually checked\n- Dataset will be shared later",
"# How to use",
"# Intended uses & limitations\nYou can use this model freely for predicting artists or trends of a given image.\nPlease keep in mind that this model is not intended for production, but for research and quick prototyping.\nWeb-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.\n---\n- 2022-01-18 first created by jaekoo kang"
] |
null |
keras
|
# Simple CNN-based Artist Classifier
This repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends.
See also: `https://huggingface.co/jkang/drawing-artist-classifier`
- The purpose of this model was quick prototyping
- Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler`
- 8 popular artistic trends were chosen:
- \[TREND\]: \[ID\]
- cubism: 0,
- expressionism: 1,
- fauvisme: 2,
- graffitiar: 3,
- impressionism: 4,
- popart: 5,
- post_impressionism: 6,
- surrealism: 7
- About 100 representative paintings per artist, covering the 8 trends, were crawled and manually checked
- Dataset will be shared later
# How to use
```python
import tensorflow as tf
from huggingface_hub import from_pretrained_keras
model = from_pretrained_keras("jkang/drawing-artistic-trend-classifier")
image_file = 'monet.jpg'
img = tf.io.read_file(image_file)
img = tf.io.decode_jpeg(img, channels=3)
last_layer_activation, predictions = model(img[tf.newaxis,...])
```
# Intended uses & limitations
You can use this model freely for predicting artists or trends of a given image.
Please keep in mind that this model is not intended for production, but for research and quick prototyping.
Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.
---
- 2022-01-18 first created by jaekoo kang
|
{"language": "en", "license": "mit", "datasets": ["web crawled (coming soon)"]}
|
jkang/drawing-artistic-trend-classifier
| null |
[
"keras",
"en",
"license:mit",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#keras #en #license-mit #has_space #region-us
|
# Simple CNN-based Artist Classifier
This repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends.
See also: 'URL
- The purpose of this model was quick prototyping
- Data has been web-crawled using 'URL
- 8 popular artistic trends were chosen:
- \[TREND\]: \[ID\]
- cubism: 0,
- expressionism: 1,
- fauvisme: 2,
- graffitiar: 3,
- impressionism: 4,
- popart: 5,
- post_impressionism: 6,
- surrealism: 7
- About 100 representative paintings per artist, covering the 8 trends, were crawled and manually checked
- Dataset will be shared later
# How to use
# Intended uses & limitations
You can use this model freely for predicting artists or trends of a given image.
Please keep in mind that this model is not intended for production, but for research and quick prototyping.
Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.
---
- 2022-01-18 first created by jaekoo kang
|
[
"# Simple CNN-based Artist Classifier\n\nThis repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends.\n\nSee also: 'URL\n\n- The purpose of this model was for a quick prototyping\n- Data has been web-crawled using 'URL\n- 8 popular artists/painters were chosen:\n - \\[TREND\\]: \\[ID\\]\n - cubism: 0,\n - expressionism: 1,\n - fauvisme: 2,\n - graffitiar: 3,\n - impressionism: 4,\n - popart: 5,\n - post_impressionism: 6,\n - surrealism: 7}\n- About 100 representative paintings per artist considering 8 trends were crawled and manually checked\n- Dataset will be shared later",
"# How to use",
"# Intended uses & limitations\nYou can use this model freely for predicting artists or trends of a given image.\nPlease keep in mind that this model is not intended for production, but for research and quick prototyping.\nWeb-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.\n---\n- 2022-01-18 first created by jaekoo kang"
] |
[
"TAGS\n#keras #en #license-mit #has_space #region-us \n",
"# Simple CNN-based Artist Classifier\n\nThis repo contains a simple CNN-based Keras model which classifies images into one of 8 artistic trends.\n\nSee also: 'URL\n\n- The purpose of this model was for a quick prototyping\n- Data has been web-crawled using 'URL\n- 8 popular artists/painters were chosen:\n - \\[TREND\\]: \\[ID\\]\n - cubism: 0,\n - expressionism: 1,\n - fauvisme: 2,\n - graffitiar: 3,\n - impressionism: 4,\n - popart: 5,\n - post_impressionism: 6,\n - surrealism: 7}\n- About 100 representative paintings per artist considering 8 trends were crawled and manually checked\n- Dataset will be shared later",
"# How to use",
"# Intended uses & limitations\nYou can use this model freely for predicting artists or trends of a given image.\nPlease keep in mind that this model is not intended for production, but for research and quick prototyping.\nWeb-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists.\n---\n- 2022-01-18 first created by jaekoo kang"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `jkang/espnet2_an4_asr`
This model was trained by jaekookang using the an4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 48422215e272812feb9bbac9d7cf4aae6a316bca
pip install -e .
cd egs2/an4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_an4_asr
```
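Inference from Python is also possible; the following is a minimal sketch, assuming the `Speech2Text.from_pretrained` interface of recent ESPnet2 releases (with `espnet_model_zoo` installed as the download backend); `sample.wav` is a placeholder for a 16 kHz mono recording:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download and unpack the uploaded model from the Hugging Face Hub
speech2text = Speech2Text.from_pretrained("jkang/espnet2_an4_asr")

# Placeholder path: any 16 kHz mono recording
speech, rate = soundfile.read("sample.wav")
text, tokens, token_ids, hypothesis = speech2text(speech)[0]
print(text)
```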
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Feb 1 13:22:35 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `48422215e272812feb9bbac9d7cf4aae6a316bca`
- Commit date: `Fri Jan 28 17:25:31 2022 +0000`
## asr_train_asr_transformer_raw_en_bpe30_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|773|91.5|6.5|2.1|0.6|9.2|38.5|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|591|88.8|7.4|3.7|0.7|11.8|41.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2565|96.6|1.2|2.2|1.0|4.4|38.5|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|1915|94.0|1.7|4.3|0.4|6.4|41.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/test|130|2695|96.8|1.1|2.1|0.9|4.2|38.5|
|decode_asr_lm_lm_train_lm_en_bpe30_valid.loss.ave_asr_model_valid.acc.ave/train_dev|100|2015|94.3|1.6|4.1|0.4|6.1|41.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_raw_en_bpe30_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe30_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe30_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe30_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe30_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev_sp/wav.scp
- speech
- sound
- - dump/raw/train_nodev_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
- ▁
- T
- E
- O
- R
- Y
- A
- H
- U
- S
- I
- F
- B
- L
- P
- D
- G
- M
- C
- V
- X
- J
- K
- Z
- W
- N
- Q
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram30/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe30_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["an4"]}
|
jkang/espnet2_an4_asr
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:an4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-an4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'jkang/espnet2\_an4\_asr'
This model was trained by jaekookang using the an4 recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Feb 1 13:22:35 KST 2022'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.6a1'
* pytorch version: 'pytorch 1.10.1'
* Git hash: '48422215e272812feb9bbac9d7cf4aae6a316bca'
+ Commit date: 'Fri Jan 28 17:25:31 2022 +0000'
asr\_train\_asr\_transformer\_raw\_en\_bpe30\_sp
------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'jkang/espnet2\\_an4\\_asr'\n\n\nThis model was trained by jaekookang using an4 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 1 13:22:35 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '48422215e272812feb9bbac9d7cf4aae6a316bca'\n\t+ Commit date: 'Fri Jan 28 17:25:31 2022 +0000'\n\n\nasr\\_train\\_asr\\_transformer\\_raw\\_en\\_bpe30\\_sp\n------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-an4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'jkang/espnet2\\_an4\\_asr'\n\n\nThis model was trained by jaekookang using an4 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 1 13:22:35 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '48422215e272812feb9bbac9d7cf4aae6a316bca'\n\t+ Commit date: 'Fri Jan 28 17:25:31 2022 +0000'\n\n\nasr\\_train\\_asr\\_transformer\\_raw\\_en\\_bpe30\\_sp\n------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `jkang/espnet2_librispeech_100_conformer`
- This model was trained by jaekookang using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
- Gradio Demo: [🤗 ESPNet2 ASR Librispeech Conformer](https://huggingface.co/spaces/jkang/espnet2_asr_librispeech_100h)
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 140704c146f8beeed74973f5258379f6133dcdfb
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_librispeech_100_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Feb 11 01:42:52 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `140704c146f8beeed74973f5258379f6133dcdfb`
- Commit date: `Tue Feb 8 16:06:02 2022 -0500`
- GPU: NVIDIA GeForce RTX 3090 (training on a single GPU took about 13 h)
## asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|94.5|5.1|0.4|0.7|6.3|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|84.8|13.7|1.5|2.1|17.3|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|94.2|5.3|0.5|0.8|6.6|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|84.7|13.8|1.5|2.0|17.3|81.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|98.2|1.1|0.8|0.7|2.5|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|93.3|4.1|2.6|2.0|8.7|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|98.0|1.1|0.9|0.7|2.7|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|93.5|4.0|2.5|1.9|8.4|81.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|69558|92.0|5.0|3.0|0.7|8.7|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|64524|81.3|13.2|5.4|2.4|21.1|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|66983|91.8|5.1|3.1|0.6|8.8|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|66650|81.2|13.1|5.7|2.1|20.9|81.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ED
- ▁I
- ▁HE
- ▁WAS
- ▁THAT
- ING
- ▁IT
- ''''
- ▁HIS
- ▁HAD
- ▁WITH
- ▁YOU
- ▁FOR
- T
- ▁AS
- ▁HER
- LY
- ▁NOT
- ▁BUT
- ▁SHE
- ▁BE
- D
- E
- ▁IS
- ▁AT
- ▁ON
- ▁HIM
- ▁THEY
- ▁BY
- ▁HAVE
- Y
- ▁MY
- ▁SO
- ▁ALL
- ▁THIS
- ▁WERE
- ▁WHICH
- ▁ME
- ▁FROM
- ▁ONE
- ▁SAID
- ▁WE
- N
- ER
- ▁NO
- ▁THERE
- ▁WHEN
- ▁AN
- ▁THEIR
- ▁OR
- ▁WOULD
- ▁WHO
- ▁THEM
- R
- ▁IF
- ▁WHAT
- ▁ARE
- ▁BEEN
- ▁OUT
- ▁UP
- M
- ▁WILL
- ▁DO
- ▁MAN
- ▁COULD
- C
- ▁THEN
- ▁INTO
- ▁MORE
- ▁SOME
- ES
- P
- ▁VERY
- ▁NOW
- ▁YOUR
- ▁LITTLE
- ▁TIME
- ▁ABOUT
- ▁DID
- ▁THAN
- ▁LIKE
- ▁HAS
- L
- G
- AL
- IN
- ▁UPON
- ▁CAN
- ▁WELL
- ▁OTHER
- ▁OVER
- US
- ▁TWO
- ▁ONLY
- ▁ANY
- ▁OUR
- O
- EN
- RE
- ▁MADE
- U
- ▁AFTER
- ▁SEE
- ▁S
- ▁DOWN
- ▁BEFORE
- LL
- ST
- B
- ▁OLD
- ▁DAY
- ▁MISS
- ▁GREAT
- ▁US
- ▁KNOW
- OR
- ▁SUCH
- ▁GOOD
- ▁WAY
- A
- ▁THESE
- ▁CAME
- ▁UN
- ▁SHOULD
- ▁HOW
- ▁MISTER
- ▁GO
- ▁MUCH
- ▁WHERE
- ▁MUST
- ▁NEVER
- ▁COME
- ▁BACK
- ION
- 'ON'
- ▁LONG
- F
- ▁AGAIN
- ▁FIRST
- LE
- ▁MEN
- ▁EVEN
- NESS
- ▁MIGHT
- ▁OWN
- ▁MAY
- K
- ▁HIMSELF
- ▁SAY
- ▁JUST
- ▁THROUGH
- ▁RE
- ▁AM
- ▁ITS
- ▁WENT
- ▁THOUGHT
- ▁
- ▁DE
- ▁MAKE
- I
- ▁HAND
- ▁THINK
- ▁HOUSE
- ▁HERE
- IC
- H
- ATION
- ▁LIFE
- IT
- ▁EYES
- ▁MOST
- ▁WITHOUT
- ▁TOO
- ▁THOSE
- ABLE
- ▁EVERY
- ▁DON
- ▁MANY
- ▁AWAY
- ITY
- VE
- W
- ▁STILL
- ▁BEING
- ▁C
- ▁LAST
- ▁NIGHT
- ▁O
- ▁HEAD
- AN
- ▁FOUND
- ▁NOTHING
- ▁YOUNG
- ▁WHILE
- ▁TAKE
- ▁GET
- ▁PEOPLE
- RO
- ▁OFF
- ▁THOUGH
- EST
- ▁YET
- ▁THREE
- TH
- ▁RIGHT
- ▁UNDER
- AR
- ▁FACE
- IES
- ▁ROOM
- ▁NEW
- ▁SAW
- RA
- V
- ▁ASKED
- ▁TELL
- ERS
- ▁SAME
- MENT
- ▁HEART
- LESS
- ▁WORK
- ▁PLACE
- ▁ANOTHER
- ▁EVER
- ▁LEFT
- ▁SHALL
- ▁FATHER
- ▁PUT
- ▁ONCE
- ▁TOOK
- ▁LET
- ▁ALWAYS
- ▁SEEMED
- ▁PART
- IL
- UR
- ▁WHY
- ▁TOLD
- ▁GIVE
- ▁LOVE
- CE
- ▁MIND
- ▁LOOKED
- ▁HEARD
- ▁SOON
- ▁LOOK
- ▁MOTHER
- ▁FAR
- IVE
- ▁BECAUSE
- ▁HOME
- OUS
- ▁T
- EL
- ▁D
- ▁SOMETHING
- ▁SIDE
- ▁KING
- IS
- ATE
- ▁MOMENT
- ENT
- RY
- ▁THINGS
- ▁ST
- ▁LIGHT
- ▁FIND
- ▁GOING
- ▁THING
- ▁WORLD
- IR
- AT
- ▁WATER
- ▁END
- ▁DOOR
- ISH
- ▁KNEW
- ▁WOMAN
- ▁SIR
- ▁EACH
- RI
- ▁HAVING
- ▁AGAINST
- ▁FEW
- ▁E
- ▁BEGAN
- ▁BETTER
- ▁YES
- ▁NAME
- ▁ENOUGH
- ET
- ▁HARD
- ▁VOICE
- ▁YEARS
- ▁GOT
- ▁WHOLE
- ▁WHITE
- ▁WANT
- ▁GIRL
- ▁DONE
- ▁SEEN
- ▁HUNDRED
- ▁CALLED
- ▁BETWEEN
- ▁MORNING
- FUL
- AS
- ▁FELT
- TER
- ▁KIND
- X
- CH
- ▁HERSELF
- ANT
- ▁TOWARD
- ▁HALF
- ▁OH
- ▁AMONG
- ▁HOWEVER
- ▁TURNED
- ▁ALSO
- ▁BOTH
- ▁POOR
- ▁PERHAPS
- ▁REPLIED
- ▁COURSE
- UL
- ▁QUITE
- ▁REST
- ▁DOES
- ▁MYSELF
- NG
- LO
- ANCE
- ▁MA
- ▁SET
- ▁SMALL
- ▁B
- ▁SURE
- ▁F
- ▁GAVE
- ▁PRESENT
- ▁HIGH
- ▁ALMO
- ▁R
- CK
- ▁WHOM
- ▁NEAR
- ▁CARE
- ▁WAR
- ▁GOD
- ▁TOGETHER
- ▁SAT
- ▁SHOW
- TE
- NE
- ▁BEST
- ▁UNTIL
- ▁OPEN
- ▁W
- ▁FOUR
- ▁DEAR
- ▁HANDS
- ▁WORDS
- ▁SINCE
- ▁LAND
- ▁DIS
- MAN
- ▁ANYTHING
- ▁FEET
- ▁NEXT
- ▁GENERAL
- LING
- ▁LAY
- ▁NOR
- ▁STOOD
- ▁BLACK
- ▁POWER
- ▁BROUGHT
- Z
- IE
- ▁ROUND
- ▁BELIEVE
- ▁LARGE
- ▁ALONG
- ▁HELP
- ▁DAYS
- ▁FIVE
- ▁K
- ▁HOPE
- AM
- ▁CO
- ▁KEEP
- ▁FULL
- ▁WALK
- ▁MASTER
- ATED
- ▁NATURE
- ▁JOHN
- ▁POINT
- ▁DUR
- ▁MATTER
- ▁MONEY
- ▁CHILD
- ▁LOOKING
- ▁RATHER
- ▁AIR
- IA
- ▁P
- ▁TWENTY
- ▁FIRE
- OL
- ▁LESS
- ▁SHORT
- ▁PASSED
- ▁INDEED
- TY
- ▁CASE
- ▁WORD
- ▁WISH
- ▁COUNTRY
- LED
- ID
- ▁BOY
- ▁SOUND
- ▁FORM
- ▁CRIED
- LA
- ▁FRIEND
- TON
- ▁FACT
- ▁UNCLE
- ▁TAKEN
- ▁AL
- ▁TEN
- IAN
- ▁GONE
- ▁SEA
- ▁REASON
- TING
- ▁WHOSE
- ▁OTHERS
- AC
- ▁LI
- ▁DEATH
- ▁CERTAIN
- ▁ANSWERED
- ▁THEMSELVES
- ▁LADY
- ▁STATE
- ▁CAR
- ▁WIFE
- ▁THOUSAND
- ▁TRUE
- ▁BEHIND
- AGE
- ▁DOCTOR
- ▁FEAR
- ▁OFTEN
- OM
- ▁TILL
- ▁HA
- IOUS
- ▁AROUND
- IST
- ▁SENT
- ▁SPEAK
- ▁WOMEN
- ▁GROUND
- VER
- ENCE
- NA
- ▁TALK
- ▁CHILDREN
- TION
- CO
- MO
- ▁HEAR
- ▁ORDER
- ▁LEAVE
- ▁PRO
- ▁ALREADY
- ▁LA
- ▁FINE
- SE
- ▁BA
- PP
- ▁THUS
- AD
- ▁NEED
- ▁SIGHT
- ▁CALL
- ▁FELL
- ▁MANNER
- MP
- ▁BECAME
- UM
- ▁WATCH
- OW
- ▁FOOT
- ▁CANNOT
- ▁BODY
- ▁TOWN
- ▁LIVE
- INE
- ▁RETURNED
- ▁WONDER
- MA
- ▁G
- UT
- ▁CLOSE
- UN
- IM
- ▁ALONE
- ▁DIDN
- ▁LORD
- ▁RED
- ARY
- ▁GIVEN
- ▁SIX
- ▁EVERYTHING
- ▁DARK
- ▁DEAD
- ▁STRONG
- ▁SON
- ▁COMING
- URE
- ▁HELD
- ▁ABOVE
- ▁REALLY
- ▁BEAUTIFUL
- ▁SECOND
- ARD
- ▁EVENING
- ▁CON
- ▁HOUR
- ▁FELLOW
- ▁ROSE
- ▁PERSON
- ▁EX
- ▁CH
- ▁FORCE
- ▁MO
- ▁ARM
- ▁CAUSE
- ▁TURN
- ▁CITY
- ▁DOUBT
- ▁QUESTION
- TIC
- ▁DEEP
- ▁HAIR
- ICAL
- ▁MEAN
- ▁DI
- ▁CLEAR
- ▁SOMETIMES
- ▁STRANGE
- ▁FEEL
- ▁HO
- ▁IMP
- WARD
- AUGHT
- ▁CAPTAIN
- ▁USE
- ▁UNDERSTAND
- ▁KEPT
- ▁BR
- ▁WOOD
- ▁PRE
- ▁YEAR
- ▁TI
- ▁LEAST
- ▁BED
- ▁SA
- ▁TABLE
- ▁BECOME
- ▁FREE
- ▁FAMILY
- ME
- ▁EYE
- ▁WHETHER
- ▁MAKING
- ▁WITHIN
- ▁SORT
- ▁ANSWER
- ▁PO
- ▁SAYS
- ▁EARTH
- ▁RETURN
- ▁SUDDENLY
- ▁FRIENDS
- ▁GREEN
- ▁SUN
- ▁FAIR
- ▁TH
- ▁FALL
- ▁EITHER
- ▁BO
- ▁PRINCE
- ▁THOU
- ▁ITSELF
- ▁CHURCH
- ▁BIG
- ▁ABLE
- ▁DIFFERENT
- ▁SEVERAL
- ▁DAUGHTER
- ▁WON
- ▁WIND
- ▁BAD
- ▁LOST
- ▁READ
- ▁STORY
- ▁APPEARED
- DE
- ▁NUMBER
- ▁SP
- ▁LOW
- ▁ROAD
- ▁POSSIBLE
- ▁HUMAN
- ▁RIVER
- ▁STREET
- ▁GA
- ▁COLD
- ▁MET
- ▁ACT
- ▁BROTHER
- ▁AGE
- ▁KNOWN
- ▁CONTINUED
- ▁BRING
- ▁ILL
- ▁RUN
- ▁LAW
- ▁SUBJECT
- ▁CUT
- J
- PER
- ▁PA
- ▁TROUBLE
- ▁GLAD
- HE
- ▁SLEEP
- MEN
- ▁LATE
- ▁MEANS
- ▁ASK
- ▁REACHED
- ▁RAN
- AK
- ▁HORSE
- ▁USED
- WAY
- OP
- ▁WINDOW
- ▁SNOW
- ▁PAST
- ▁OBJECT
- ▁THEREFORE
- IONS
- ▁TREE
- ▁COMP
- ▁BLUE
- CA
- ▁VI
- ▁SIGN
- ▁EIGHTEEN
- ▁GARDEN
- ▁BUSINESS
- ▁PETER
- ▁FOLLOWED
- ▁SEEM
- ▁HOLD
- ▁HAPPY
- ▁LONGER
- ▁ACROSS
- ▁BU
- BE
- ▁ELSE
- ▁PLAY
- ▁SOUL
- ▁STAND
- ▁ARMS
- ▁SCHOOL
- ▁PRINCESS
- ▁CERTAINLY
- LT
- ▁ENGLISH
- ▁SEVEN
- ▁PER
- ▁IDEA
- ▁LE
- ▁BOOK
- ▁FEELING
- ▁HUSBAND
- ▁LINE
- PT
- THOUGH
- ▁OUGHT
- ▁RICH
- IP
- ▁VIEW
- ▁DREAM
- ▁SENSE
- ▁LO
- ▁READY
- ▁CARRIED
- ▁M
- ▁REGARD
- ▁CHANCE
- ▁WANTED
- ▁LIVED
- ▁LATER
- ▁INTEREST
- ▁EN
- ▁EFFECT
- ▁CLA
- ▁CHANGE
- ▁CA
- ▁REAL
- ▁SUPPOSE
- LES
- ▁ART
- ▁TIMES
- ▁MAR
- IF
- ▁WILD
- ▁ADDED
- ▁LETTER
- IAL
- ▁THANK
- ▁PARTY
- LAND
- ▁PAY
- ▁BREATH
- ▁TAKING
- ▁COURT
- ▁COUNT
- ILY
- ▁COMMON
- ▁PUBLIC
- ▁PURPOSE
- ▁PRETTY
- ▁TRUTH
- ▁STAY
- ▁EM
- NT
- ▁SH
- ▁REMEMBER
- ▁ENTERED
- ▁RECEIVED
- RED
- ▁SPOKE
- ▁USUAL
- ▁THY
- ▁FIGURE
- ▁LED
- ▁TREES
- ▁TRIED
- ▁FORWARD
- NED
- ▁HAT
- ▁BLOOD
- ▁BEYOND
- ▁BANK
- ▁LIVING
- ▁JOY
- ▁HOURS
- ▁ENGLAND
- ▁STONE
- VI
- GE
- ▁SWEET
- ▁POSITION
- ▁FRONT
- ▁GIRLS
- ▁VISIT
- ▁CHARACTER
- ▁SPIRIT
- ▁TA
- BO
- QUE
- QUI
- ▁OPENED
- ▁OCCASION
- ▁MEET
- ▁EIGHT
- ▁REMAIN
- ▁PASS
- TO
- ▁NORTH
- ▁SERVICE
- ▁SISTER
- ▁SE
- ▁BEAR
- ▁PLEASURE
- ▁CHIEF
- ▁FOREST
- ▁BELL
- ▁EXPERIENCE
- ▁STRUCK
- ▁CARRY
- ORY
- ▁WARM
- 'NO'
- ▁WORTH
- ▁SAYING
- ▁SILENCE
- ▁CROSS
- ▁JE
- ▁H
- ▁BEAUTY
- PH
- ▁DEAL
- KE
- ▁SECRET
- DY
- ▁MILES
- ▁LU
- ▁DOING
- ▁BOYS
- ▁CROWD
- ▁ACCOUNT
- REW
- ISM
- TI
- ▁FE
- ▁NONE
- ▁RO
- ▁NEARLY
- ▁CHA
- ▁YOUTH
- ▁CAP
- HA
- ▁BIT
- ▁LIE
- ▁ATTENTION
- ▁STANDING
- ▁STAR
- ▁RESPECT
- ▁FURTHER
- ATIONS
- ▁ROCK
- ▁BOW
- EM
- ▁EARLY
- ▁MOUTH
- ▁BOAT
- UB
- ▁IMMEDIATELY
- ▁EXCEPT
- SHIP
- ▁PICTURE
- ▁BRIGHT
- ▁WA
- ▁GREW
- ▁LEAD
- ▁CUR
- ▁TONE
- RRY
- RS
- ▁WIDE
- CHE
- ▁FORTH
- IG
- OS
- ▁NEITHER
- ▁YOURSELF
- ▁SMILE
- ▁DRESS
- ▁OPINION
- ▁HAPPENED
- ▁WAIT
- ▁SIT
- ▁SHIP
- ▁AH
- ▁DESIRE
- ▁THICK
- ▁THIRD
- ▁GRAND
- ▁FOLLOW
- ▁GATHER
- ▁HILL
- ALLY
- ▁COMPANY
- ▁CHAIR
- DER
- ▁TOP
- ▁PAR
- ▁LENGTH
- ▁THIRTY
- ▁MINE
- ▁MI
- ▁EAT
- ▁EQUAL
- ▁AFRAID
- ▁FRESH
- ▁TAIL
- ▁FILLED
- ▁SU
- ▁MINUTES
- ▁FAST
- BU
- ▁ENTER
- ▁QUEEN
- ▁UTTER
- AG
- ▁FLOOR
- ▁SHA
- DI
- ▁HEAVEN
- ▁STOPPED
- ▁GUARD
- ▁HALL
- ▁BAR
- ▁COMPLETE
- ▁NINE
- ▁WEEK
- ▁GOLD
- VA
- ▁FIFTY
- ▁BEAT
- ▁PRESS
- ▁ATTEMPT
- ▁EXCLAIMED
- DO
- ▁CONF
- ▁SEEMS
- ▁STARTED
- ▁EL
- ▁HAR
- ▁EXPRESSION
- ▁TRA
- ▁WONDERFUL
- ▁SAINT
- ▁APPEARANCE
- ▁GRAVE
- ▁OFFICE
- ▁INSTEAD
- ▁SILENT
- ▁SOUTH
- ▁AGO
- ▁CAMP
- ▁LOVED
- ▁PATH
- ▁LEARN
- ▁PLAN
- ▁GOVERNMENT
- OUR
- PPED
- ▁SITTING
- ▁SEAT
- TEN
- RESS
- SIDE
- ▁MOVED
- ▁DIE
- ▁RESULT
- ▁SPRING
- ▁PLEASE
- ▁RI
- ▁NATURAL
- ▁ANNE
- ▁STA
- ▁CORNER
- ▁WALL
- ▁IMPOSSIBLE
- ▁BROWN
- ▁SUIT
- ▁MUSIC
- PI
- ▁TRY
- ▁DIED
- ▁TEARS
- ▁JU
- ▁COMFORT
- ▁DANGER
- ▁MEASURE
- ▁PROPERTY
- ▁BORN
- CON
- ▁CR
- ▁BROKEN
- ▁MASS
- EVER
- IER
- ▁EXPRESS
- ▁POCKET
- ▁SCARCE
- ▁SELF
- NY
- ▁MADAME
- ▁LAUGHED
- ▁TOUCH
- ▁APPEAR
- ▁LONDON
- ▁SAFE
- ▁SHARP
- ▁ATTACK
- ▁JANE
- ▁COVERED
- ▁OUTSIDE
- ▁WHATEVER
- ▁PLACED
- ▁RACE
- ▁SHORE
- ▁LAID
- ▁ROMAN
- ▁PERSONAL
- UP
- AU
- ▁REMAINED
- ▁HAPPINESS
- ▁AFTERNOON
- ▁DISTANCE
- ▁STORM
- ▁MARRIED
- ▁FRANK
- ▁VALLEY
- ▁BOUND
- ▁TALKING
- ▁JO
- ▁QUICK
- ▁STEP
- AND
- ▁ARMY
- ▁EFFORT
- ▁FRENCH
- ▁V
- LEY
- ▁PARTICULAR
- ▁START
- ATING
- OO
- LU
- ▁TRANS
- ▁HAPPEN
- ▁HABIT
- ▁VILLAGE
- ▁BELOW
- ▁GENTLEMAN
- BLE
- ▁BILL
- ▁SAVE
- ACT
- ▁SOCIETY
- ▁MAJOR
- ▁QUARTER
- ▁SKY
- ▁GUESS
- CY
- ▁SAD
- ILE
- ▁SL
- ▁PLEASANT
- ▁STRAIGHT
- ▁STRENGTH
- ▁FORTUNE
- ▁WRONG
- ▁COMMAND
- ▁BOX
- ▁QUIET
- ISE
- ▁JA
- IBLE
- ▁TREAT
- ▁GLANCE
- ▁NECESSARY
- ▁FORGET
- ▁MOUNTAIN
- ▁WINTER
- ▁DREW
- ▁WAV
- ▁PLAIN
- ▁ENTIRELY
- ▁TEA
- ▁SOFT
- ▁QUICKLY
- ▁INFLUENCE
- ▁DINNER
- ▁FOOD
- ▁CHAPTER
- ▁YE
- ▁REACH
- ▁GETT
- ▁PAPER
- ▁GIVING
- ▁BEGINNING
- ▁SEND
- ▁FIGHT
- ▁SCENE
- ▁RUSH
- ▁PI
- ▁MARK
- ▁NA
- ▁BROKE
- ▁CLASS
- ▁BATTLE
- ▁EASY
- ▁GROUP
- BY
- ▁STOP
- ▁DIRECTION
- ▁BESIDE
- ▁MOR
- HAM
- UFF
- ▁WEST
- ▁OBLIG
- ▁COLOR
- ▁SINGLE
- ▁EASILY
- ▁PALE
- ▁ACTION
- ▁INTER
- ▁STRANGER
- ▁WI
- ▁CONVERSATION
- ▁BLOW
- ▁MARY
- ▁MU
- ▁TERRIBLE
- ▁THINKING
- ▁PULL
- ▁MOON
- AB
- ▁REP
- ▁ESPECIALLY
- ▁HEAVY
- ▁SICK
- ▁LUCK
- ▁TRAIN
- ▁GUN
- ▁GU
- ▁WAITING
- ▁TURNING
- ITIES
- ▁BREAD
- ▁BELONG
- ▁LOUD
- ▁REPORT
- ▁AMERICAN
- ▁JOURNEY
- ▁ANXIOUS
- ▁LIPS
- ▁KILLED
- IGHT
- GO
- ▁CONSIDER
- ▁PROBABLY
- ▁PALACE
- ▁HISTORY
- ▁LAKE
- ▁SHUT
- ▁SIMPLY
- WA
- ▁PAIN
- ▁HORSES
- ▁SEEING
- FULLY
- ▁EXPECTED
- ▁EVIL
- ▁BURN
- ▁SIMPLE
- ▁DIRECT
- IFIED
- HER
- ▁SLOWLY
- ▁LEG
- UGH
- ▁SAIL
- RIC
- ▁WISHED
- ▁RULE
- ▁LAD
- ▁MORAL
- ▁MOVE
- ▁FOLLOWING
- ▁SILVER
- ▁SEARCH
- ▁CHANGED
- ▁HANDSOME
- ▁COULDN
- ▁PASSION
- ▁HU
- ▁SMILED
- ▁STREAM
- ▁CONCERN
- ▁PRESENCE
- STER
- ▁CONTENT
- ▁BOARD
- ▁SHAPE
- ▁DECIDED
- ▁MARRY
- ▁PERFECT
- ▁STEPS
- ▁CLOSED
- ABLY
- DEN
- ▁WEAK
- ▁SUFFICIENT
- ▁SHADOW
- ▁EXPECT
- ▁SPOT
- ▁DUTY
- ▁SPEAKING
- ▁BESIDES
- ▁FIELD
- ▁ROLL
- ▁TRYING
- ▁EAR
- ▁VER
- ▁MARRIAGE
- ▁SHOT
- ▁SLAVE
- ▁MILL
- ▁NATION
- ▁NECK
- ▁ARRIVED
- ▁TALL
- ▁GRACE
- LIN
- ▁FORTY
- ▁BROAD
- ▁SUMMER
- ▁COUSIN
- ▁BEGIN
- ▁CATCH
- ▁FO
- ▁PE
- ▁MEANT
- ▁THIN
- IO
- ▁GROW
- ▁TRO
- ▁NOTICE
- ▁CRY
- ▁FISH
- ▁COM
- ▁DEGREE
- ▁HONOUR
- ▁UNDERSTOOD
- ▁SHOP
- ▁TRUST
- ▁CONDITION
- ▁FARM
- IZ
- ▁SUDDEN
- ▁SUCCESS
- ▁SURPRISE
- ORS
- ▁THOUGHTS
- UND
- ▁ALLOWED
- ITE
- ▁NARROW
- ▁GLASS
- ▁SERIOUS
- ▁STICK
- ▁GAME
- ▁SPENT
- ▁SELL
- ▁GRA
- ▁LOWER
- ▁RAISED
- ▁PIN
- ▁ALLOW
- ▁CALM
- FT
- ▁L
- ▁PU
- ▁FIT
- ACH
- ▁SUFFER
- ▁LEGS
- ▁SUPPORT
- ▁FRANCE
- ▁LATTER
- OV
- ▁TASTE
- ▁GATE
- ▁INSTANT
- ▁MINUTE
- ▁OFFER
- ▁GREATER
- ▁PORT
- ILL
- ▁INDIVIDUAL
- ▁AUNT
- ▁EAST
- ▁ADVANTAGE
- ▁FASHION
- ▁SWORD
- ▁TWELVE
- ▁HONOR
- ▁MOVEMENT
- ▁ISLAND
- ACK
- ▁WOODS
- NCH
- ▁PLEASED
- ▁ENEMY
- ▁RAIN
- ▁VARIOUS
- ▁OBSERVED
- ▁LADIES
- ▁BELIEVED
- ▁CAST
- ▁RISE
- ▁BALL
- ▁MONTHS
- ICE
- ▁MURDER
- ▁CONDUCT
- ▁SOCIAL
- ▁TENDER
- ▁LEARNED
- ▁FRA
- ▁FIRM
- CLOCK
- ▁PREVENT
- ▁RING
- LIE
- ▁GOLDEN
- ▁DECLARED
- ▁BUILDING
- ▁WRITE
- ▁ATTEND
- ▁CARRIAGE
- ▁SITUATION
- IDE
- ▁NOBLE
- ▁HUNG
- ▁RUNN
- ▁YELLOW
- ▁KNOWLEDGE
- ▁YORK
- ▁PUSH
- ▁LEAVING
- ▁POST
- ▁CIRCUMSTANCES
- ▁SEEK
- ▁FINALLY
- ▁MAIN
- ▁LETTERS
- ▁POL
- ▁ADD
- FE
- ▁ANCIENT
- ▁MARCH
- ▁WINE
- ▁STATES
- ▁WALLS
- ▁PRISONER
- ▁ISABEL
- ▁TEMPER
- ▁JUDGE
- ▁FAINT
- ▁POND
- ▁GRASS
- ▁FAM
- OUT
- ▁LAUGH
- ▁GRAY
- IGN
- ▁ESCAPE
- ▁KILL
- ▁PRAY
- ▁COMES
- ▁ABSOLUTE
- ▁BLIND
- ▁WIN
- ▁HOST
- ▁MERELY
- ▁RID
- ▁EVERYBODY
- ▁MATERIAL
- ▁STRETCH
- ▁DUE
- ▁ROW
- ▁TIN
- ▁PROMISE
- ▁LISTEN
- ▁WALKING
- ▁COMPANION
- ▁INDIAN
- ▁BREAK
- ▁BENEATH
- ▁RUIN
- ▁EDGE
- ▁WOR
- ▁FORMER
- ▁WORSE
- ▁EVIDENTLY
- ▁HARM
- ▁CENT
- ▁PIECE
- ▁LOT
- ▁PRESIDENT
- ▁SPECIAL
- ▁LABOR
- ▁HEALTH
- GA
- ▁PLACES
- ▁BEN
- ▁SOMEWHAT
- ▁DROPPED
- ▁AFFECTION
- ▁EXACTLY
- ▁DARKNESS
- ▁FALLEN
- ▁DRESSED
- ▁BILLY
- ▁ACCEPT
- ▁FL
- ▁HOT
- ▁REPEATED
- ▁MEETING
- PA
- ▁PERIOD
- ▁HONEST
- ▁INSTANCE
- ▁FLA
- ▁PASSAGE
- ▁NE
- ▁POSSESSION
- ▁WEAR
- ▁PEACE
- ▁COAT
- ▁HOUSES
- ▁MOUNTAINS
- ▁FIFTEEN
- ▁WELCOME
- ▁YARD
- ▁PROPER
- ▁MUS
- ADE
- ▁RECEIVE
- ▁SKIN
- ▁GROWN
- ▁AFTERWARDS
- ANG
- ▁DA
- ▁DIFFICULT
- ▁PERSONS
- ▁ACCORDING
- ▁FARMER
- ▁SPEECH
- ▁IMPORTANT
- PAR
- ▁PERFECTLY
- ▁MIN
- ▁CONSIDERED
- ▁NU
- ▁DEPEND
- ▁MORROW
- ▁MOUNT
- ▁KISS
- ▁LYING
- ▁SUFFERING
- ▁EXIST
- ERY
- OOK
- BA
- ▁PAINT
- AH
- ▁CAT
- ▁PURE
- ▁WISE
- ▁PRIVATE
- ▁REBECCA
- ▁VESSEL
- ▁CLEAN
- ▁GENTLEMEN
- ▁IRON
- ▁STORE
- ▁FUR
- ▁INDIANS
- ▁LOSE
- ▁BATH
- ▁NEWS
- ▁CHI
- ▁FA
- ▁CHARGE
- ▁PRIEST
- ▁WRITTEN
- ▁FORGOTTEN
- ▁TRAIL
- ▁CLOTHES
- ▁ALIVE
- ▁SUB
- ▁REPLY
- ▁THROW
- ▁AB
- ▁SOLDIERS
- ▁ISN
- ▁COTTAGE
- ▁COURAGE
- ▁CONTAIN
- ▁BUILT
- ▁PAID
- ▁HUNT
- ▁CASTLE
- HOOK
- ▁MERE
- GGED
- ▁NI
- ▁UNC
- ▁PREPARED
- ▁BARE
- ▁SMILING
- ▁SPREAD
- ▁WEATHER
- ▁EDWARD
- ▁GERMAN
- ▁CURIOUS
- ▁SERVANT
- ▁DISCOVERED
- ▁TRAVEL
- EY
- ▁DANCE
- ▁PEN
- BR
- GEN
- ▁BREAKFAST
- ▁CHAMBER
- ▁WILLIAM
- ▁TERROR
- ▁SPITE
- ▁TIRED
- ▁LOCK
- ▁CONSIDERABLE
- TLE
- ▁MANAG
- ▁DRY
- ▁FINISHED
- ▁MILLION
- ▁FRE
- ▁MIS
- ▁PASSING
- ▁DRAW
- ▁BON
- ▁VA
- ▁VEN
- ▁MAKES
- ▁VAIN
- ▁BOTTOM
- ▁DRINK
- ▁FUTURE
- ▁RACHEL
- ▁SORROW
- ▁SIXTEEN
- ▁KNIT
- ▁PROUD
- WI
- ▁TOBY
- ▁NOISE
- ▁SLIGHT
- ▁PROCEED
- ▁FER
- ▁COVER
- ▁DRAWING
- ▁FAVOR
- ▁CATHERINE
- ▁NEWSPAPER
- ▁NOBODY
- ▁ROOF
- ▁WEALTH
- ▁PROVE
- ▁DRAWN
- TTED
- OKE
- ▁DETERMINED
- ▁DOG
- ▁REMEMBERED
- ▁OPENING
- ▁FLOWERS
- ▁GENTLE
- ▁KNIGHT
- ▁RECOVER
- ▁DESERT
- ▁MOTION
- ▁NICE
- ▁INTENTION
- ▁GROWING
- ▁CLOUD
- ▁MONTH
- HOOD
- ▁POT
- UDE
- ▁PLANT
- ▁MAD
- ▁ENJOY
- ▁FAT
- ▁COR
- ▁KNOWING
- ▁IDEAS
- IZED
- ▁CHEEK
- ▁EUROPE
- ▁KNOCK
- ▁ALARM
- ▁TONGUE
- ▁SPACE
- ▁PATSY
- ▁MISTRESS
- ▁HENRY
- ▁JERRY
- ▁LIKED
- ▁PLAYED
- ▁BOOKS
- ▁MODER
- ▁CORN
- ▁ELIZABETH
- ▁CLUB
- ▁BRAIN
- ▁TROOP
- ▁COOK
- ▁DU
- ▁FUN
- DAY
- ▁QUA
- ▁FLOW
- ▁DARE
- ▁DELIGHT
- ▁WOUND
- ▁DESCEND
- ▁EVERYWHERE
- ▁FRIGHTENED
- ▁GEORGE
- ▁PECULIAR
- ▁MACHINE
- ▁PATIENT
- ▁MEADOW
- ▁PEASANT
- ▁BURST
- ▁ORDINAR
- ▁SONG
- ▁BRAVE
- ▁EXISTENCE
- ▁LUCY
- ▁J
- ▁CAREFULLY
- ▁PRESENTLY
- ▁GEN
- ▁COW
- LLY
- ▁PROMISED
- UOUS
- ▁LIFTED
- ▁MEANING
- ALL
- ▁FAIL
- NER
- ▁REGULAR
- ▁VIRTUE
- ▁STUDY
- ▁PROTECT
- ▁FOND
- ▁FANCY
- ▁STOCK
- ▁KEY
- ▁JUSTICE
- ▁PACK
- LET
- ▁AFFAIRS
- ▁DIFFICULTY
- ▁WORE
- ▁COST
- ▁HEAT
- ▁SHOULDER
- ▁OFFERED
- ▁MISTAKE
- ▁DOLLARS
- ▁LOOKS
- QUA
- ▁BREAST
- ▁PRINCIPLE
- ▁CHARLES
- ▁TEETH
- ▁OCCUPIED
- ▁DROP
- ▁PAPA
- ▁SHEEP
- ▁KNOWS
- ▁DECK
- ▁BORE
- ▁EXC
- ▁SURPRISED
- ▁STATION
- ▁PL
- ▁PR
- ▁OURSELVES
- ▁SYMPATHY
- ▁RUTH
- ▁EXCITED
- ▁CONTROL
- ▁ANGRY
- ▁IMAGINATION
- ▁WITNESS
- ▁HOLDING
- THER
- DA
- ▁TRADE
- ▁CREATURE
- ▁SISTERS
- ▁JOIN
- LAS
- ▁ALTOGETHER
- ▁CIVIL
- ▁EMPTY
- ▁LEAP
- ▁HURT
- ▁BOLD
- ▁TASK
- ▁POLICE
- ▁DRAGON
- ▁MAID
- ▁CLAIM
- ▁SHAME
- ▁PHYSICAL
- ▁CONC
- ▁SEIZED
- ▁OB
- ▁LIVES
- ▁HEIGHT
- ▁GI
- ▁PAL
- ▁CHARMING
- ▁FEELINGS
- ▁SERVANTS
- ▁DELIVER
- ▁FRUIT
- ▁SATISFIED
- ▁STRUGGLE
- ▁WROTE
- ▁CONCEAL
- ▁MOVING
- ▁FLASH
- ▁OPPOSITE
- ▁HURRY
- ▁ROUGH
- ▁PRICE
- ▁AWFUL
- ▁SAND
- ▁SLIPP
- ▁SHOWN
- ▁SPRA
- ▁AGREED
- ▁FIXED
- ▁PERCEIVED
- ▁UPPER
- ▁FINGER
- ▁FINGERS
- ▁EAGER
- LF
- ▁EARS
- LIGHT
- ▁IMAGINE
- ▁LIKELY
- ▁COAST
- ▁UNITED
- ▁VAN
- ▁EXPLAINED
- ▁TELLING
- ▁DANGEROUS
- ▁DICK
- ▁COOL
- ▁CAL
- ▁INSIST
- BI
- ▁SECURE
- ▁HILLS
- ▁SAN
- ▁CHEER
- ▁FILL
- ▁BUY
- ZA
- HI
- ▁CLOTH
- ▁POSSESSED
- ▁ADVANCE
- ▁METHOD
- ATIVE
- ▁GREATLY
- ▁SMOKE
- ▁HIGHER
- ▁COMPANIONS
- ▁ANIMALS
- ▁GALL
- ▁QUIETLY
- ▁TRAVELL
- ▁RESOLVED
- ▁FLEW
- ▁CARLYLE
- ▁MEMORY
- ▁RESIST
- ▁GRAHAM
- ▁LAUGHING
- ▁FAITH
- ▁BIRD
- CRI
- ▁LEAVES
- ▁AMERICA
- ▁DEMAND
- BOARD
- ▁AWAKE
- ▁CURIOSITY
- ▁LANGUAGE
- ▁VIOLENT
- ▁AWARE
- ▁DOUBLE
- ▁LOOSE
- LIKE
- ▁ADAM
- ▁RISING
- ▁HOTEL
- ▁BAND
- ▁ENGAGED
- ▁HEADS
- ▁LOG
- ▁FORMED
- ▁WINDOWS
- ▁PREFER
- RUS
- ▁THROWN
- ▁ARCH
- ▁PAUSE
- ▁SERVE
- KIN
- ▁FALLING
- ▁VO
- ▁WHISPERED
- ▁POWERFUL
- ▁ER
- ▁DEPART
- ▁CRUEL
- ▁EXAMPLE
- ▁SMOOTH
- ▁INTRODUC
- ▁RELIGION
- ▁SEVENTEEN
- ▁ABSENCE
- ▁PRINT
- ▁SHINING
- ▁ICE
- ▁POET
- ▁DREADFUL
- ▁REQUIRED
- ▁ORIGINAL
- ▁POINTED
- ▁INSIDE
- ▁BROTHERS
- ▁PRODUCED
- ▁SPOKEN
- ▁CREATURES
- ▁FLY
- ▁TOM
- ▁PURSU
- ▁SYSTEM
- ▁EXCELLENT
- ▁EXCITEMENT
- ▁MIDDLE
- ▁FALSE
- ▁REGRET
- ▁RAY
- ▁PHYSICIAN
- ▁COP
- ▁VALUE
- ▁TOUCHED
- ▁FLAT
- ▁OAK
- ▁SUM
- ▁LOSS
- ▁PAPERS
- ▁STEPP
- ▁REVER
- ▁SHADE
- SOME
- ▁LISTENED
- ▁N
- ▁DISCOVER
- ▁BITTER
- TERN
- ▁HOLE
- ▁ADVANCED
- ▁PICK
- ARTAGNAN
- ▁CORPORAL
- ▁ASLEEP
- ▁TEMPLE
- ▁INDICAT
- IUM
- ▁FARTHER
- ▁EXCUSE
- ▁FLU
- ▁NOSE
- ▁SIXTY
- ▁SUPPOSED
- ▁PROVED
- ▁RATE
- ▁SHOULDERS
- ▁AFFAIR
- ▁FIELDS
- ▁REMARKED
- AVE
- ▁WEEKS
- ▁ESTABLISH
- ▁PARIS
- ▁ADMIT
- ▁NEIGHBOR
- ▁ATTRACT
- ▁CUSTOM
- ▁DISTINGUISH
- ▁SURFACE
- ▁COUPLE
- ▁DEVIL
- ▁LIMIT
- ▁ROYAL
- ▁FOOL
- ▁RARE
- ▁PRIDE
- ▁PROFESSOR
- ▁SAKE
- ▁DALE
- ▁VAST
- ▁REFUSED
- ▁FAILED
- ▁BAG
- ▁ROB
- ▁WASH
- ▁FAIRY
- ▁FREQUENT
- ▁MARILLA
- ▁PROGRESS
- ▁RELIEF
- ▁DROVE
- ▁DOZEN
- ▁AHEAD
- ▁ADVENTURE
- ▁GRANT
- ▁PRIM
- ▁MENTAL
- ▁PAIR
- ▁IMPRESSION
- ▁WOUNDED
- ▁FULLY
- ▁DISAPPEARED
- ▁MILE
- ▁DRIVE
- ▁MUD
- ▁SIZE
- ▁ANIMAL
- ZE
- ▁GRE
- ▁REPRESENT
- ▁ACQUAINTANCE
- ▁INSTRUMENT
- ▁SPLENDID
- ▁UNKNOWN
- ▁CORONEL
- ▁EMPEROR
- ▁EARNEST
- ▁EXTEND
- ▁BRIEF
- ▁RENDER
- ▁PARENTS
- ▁GENTLY
- ▁CALLING
- ▁TRIBE
- ▁CHRISTIAN
- ▁INTERESTING
- ▁LAMP
- ▁JIMM
- ▁DIV
- ▁LOVER
- UCH
- ▁HID
- ▁NEEDED
- ▁ORDERED
- ▁MEAL
- ▁SLOW
- ▁DAM
- ▁CLOUDS
- ▁DAN
- ▁GAR
- ▁EXPLAIN
- ▁QUI
- ▁CLIMB
- ▁HURRIED
- ▁MURMUR
- ▁SWIFT
- ▁ARTHUR
- ▁JEFF
- ▁KINGDOM
- ▁MESSAGE
- ▁PROTEST
- ▁ORGAN
- ▁RISK
- ▁FORGIVE
- ▁OCCURRED
- ▁PEARL
- ▁ODD
- ▁INFORMATION
- ▁BUSY
- ▁TRI
- ▁LACK
- ▁BAY
- ▁FLEET
- ▁CROWN
- ▁WAITED
- ▁BIRDS
- ▁PITY
- ▁SUCCEEDED
- ▁INFORMED
- ▁WISHES
- ▁DIRECTLY
- ▁CABIN
- ▁AUGUST
- ▁COUNTENANCE
- ▁HORROR
- ▁PHILIP
- ▁POPULAR
- ▁PREVIOUS
- ▁CONTRARY
- ▁ARTICLE
- ▁DIFFERENCE
- ▁HIDDEN
- ▁HUGE
- ▁AUTHORITY
- ▁POUND
- ▁JUMP
- ▁SPI
- ▁SHAKE
- ▁EVENTS
- ▁FRO
- ▁LEAN
- ▁CRO
- ▁TRIM
- ▁SHARE
- ▁FISHER
- ▁SETTLED
- ▁QUESTIONS
- ▁SI
- ▁VAL
- ▁APPROACHED
- ▁SUGGESTED
- ▁CONTINU
- ▁PERFORM
- ▁ACKNOWLEDG
- ▁CLIFF
- ▁COLONEL
- ▁GHOST
- ▁MAJESTY
- ▁EMOTION
- ▁SUPPER
- ▁DISTANT
- ▁INTERESTED
- ▁JACK
- ▁HUM
- ▁TRAMP
- ▁BRI
- ▁POUR
- ▁SHIPS
- ▁CHAIN
- ▁DY
- ▁RANK
- ▁MATTERS
- ▁LOVELY
- AW
- ▁PAT
- ▁WORKING
- ▁CONSEIL
- ▁EVIDENCE
- ▁MERCHANT
- ▁SOLEMN
- ▁CONSTANT
- ▁MINISTER
- ▁OFFICIAL
- ▁SENTIMENT
- ▁CENTURY
- ▁DELAY
- ▁JAMES
- ▁MATCH
- ▁FOREIGN
- ▁AROSE
- ▁BEAST
- ▁BAB
- ▁WIT
- ▁REMARKABLE
- ▁THOR
- ▁COMPAR
- ▁MAL
- ▁NEARER
- ▁FOURTH
- ▁GREY
- ▁MENTION
- ▁RUBB
- ▁CHARM
- ▁BARON
- ▁DESIRED
- SCAR
- ▁HOPED
- ▁TEACHER
- ▁MON
- ITCH
- BEL
- ▁PARTS
- ▁EIGHTY
- LAC
- GGING
- ▁REFLECT
- ▁COLLECT
- ▁BULL
- ▁CONSCIOUS
- ▁MOMENTS
- ▁DISTURB
- ▁COLLEGE
- ▁EGGS
- ▁STUPID
- ▁YESTERDAY
- ▁EXAMINE
- ▁FAULT
- ▁DEPTH
- ▁ROOT
- ▁MOUSE
- ▁SOUGHT
- ▁TURTLE
- ▁NATIVE
- ▁CRACK
- ▁SOLD
- ▁INVIT
- ▁PICKED
- ▁CEASED
- ▁HEARING
- ▁MIDS
- ▁PLAYING
- ▁STAGE
- ▁UNTO
- ▁GAIN
- ▁MIST
- ▁ORDERS
- ▁KNEES
- ▁TALE
- ▁DISTINCT
- ▁BENT
- ▁DESPAIR
- ▁TRIUMPH
- ▁SQUARE
- ▁THROAT
- ▁BOUGHT
- ▁PERMIT
- ▁SPEND
- ▁TRIP
- ▁THREATEN
- ▁ROME
- INESS
- ▁EXPOS
- GON
- ▁WRITING
- ▁INCREASED
- ▁PORTION
- ▁TENT
- IUS
- ▁YO
- ▁INTENDED
- ▁NAMED
- RATION
- ▁NOTIC
- ▁PIPE
- ▁WILLING
- ▁INSTANTLY
- ▁SERVED
- ▁BAL
- ▁POSSESS
- ▁CRE
- ▁ADMIRATION
- ▁LIBERTY
- ▁OPPORTUNITY
- ▁SELDOM
- ▁BIRTH
- ▁GLOW
- ▁INCLUD
- ▁REQUEST
- ▁TYPE
- ▁SLEPT
- ▁CRIME
- ▁MOTIVE
- ▁ELSIE
- ▁BEGUN
- ▁CONSENT
- ▁ADMITTED
- ▁AVOID
- ▁ADDRESS
- ▁HATE
- ▁DEMANDED
- ▁APPARENTLY
- ▁SUGGESTION
- ▁CONSIDERATION
- ▁BLESS
- ▁PROCEEDED
- NCY
- ▁PRISON
- ▁CONT
- ▁SHOUTED
- ▁FACES
- ▁SPIRITS
- ▁DEVELOP
- ▁ACCIDENT
- ▁ADVICE
- ▁INNOCENT
- ▁INSTINCT
- ▁UNCONSCIOUS
- ▁MYSTERIOUS
- ▁PRETEND
- ▁PEEP
- ▁ANYONE
- ▁DUKE
- ▁PLUM
- VILLE
- ▁SEVERE
- ▁ALAS
- ▁DELIGHTED
- ▁ISSUE
- ▁ASKING
- ▁CROW
- ▁ACCEPTED
- ▁RIDE
- ▁DOORS
- ▁TAR
- ▁PREPAR
- ▁SUGGEST
- WOOD
- ▁CITIZEN
- ▁ENTRANCE
- ▁LINCOLN
- ▁POLITICAL
- ▁PRACTICAL
- ▁STIFF
- ▁WIDOW
- ▁CAPITAL
- ▁CLEVER
- ▁MAMMA
- ▁CREDIT
- ▁OBEY
- ▁STRING
- ▁DAILY
- ▁ARGUMENT
- ▁HEAP
- ▁APARTMENT
- ▁FLIGHT
- ▁ELDER
- ▁PUR
- ▁PAGE
- ▁DUST
- ▁GAZE
- ▁NATIONAL
- ▁BABY
- DDING
- ISTS
- ▁TEACH
- ▁STREETS
- CAL
- ▁GE
- AFF
- ▁GOES
- ▁POSSIBL
- UNG
- ▁LINES
- GUE
- ▁VOTE
- ▁HUNTING
- ▁QUO
- ▁RESEMBL
- ▁BASKET
- ▁CIRCLE
- ▁CONSEQUENCE
- ▁KITCHEN
- ▁TREASURE
- ▁NEVERTHELESS
- ▁FANCI
- ▁ASSEMBL
- ▁GRIEF
- ▁VEIL
- ▁SEASON
- ▁INVENT
- ▁VIRGINIA
- ▁HUT
- ▁GUEST
- ▁ROAR
- ▁BEHOLD
- ▁VICTORY
- ▁CAPABLE
- ▁DULL
- ▁SHOE
- ▁FLOAT
- ▁MERRY
- ▁IMMEDIATE
- ETH
- ▁ELEANOR
- ▁EXPLANATION
- ▁PARLIAMENT
- ▁PRINCIPAL
- ▁PROPORTION
- ▁RESOLUTION
- ▁UNUSUAL
- ▁BLUFF
- ▁NINETEEN
- ▁SENSATION
- ▁VISIBLE
- ▁INCOME
- ▁FATE
- ▁SUPER
- ▁LAUGHTER
- ▁EASE
- ▁LOAD
- ▁JEW
- ▁ZE
- ▁FEVER
- ▁WEDDING
- ▁JOINED
- ▁TRACE
- ▁LEADER
- ▁CLEARLY
- ▁FLOWER
- ▁TERMS
- ▁EMPLOYED
- OCK
- ▁PARTICULARLY
- ▁MEMBERS
- ▁CONFESS
- ▁GRO
- ▁ADDRESSED
- ▁CHRIST
- ▁ACCOMPANI
- ▁AFFORD
- ▁AMOUNT
- ▁BRILLIANT
- ▁COMMUNICAT
- ▁FIERCE
- ▁RECORD
- ▁SACRIFICE
- ▁TEMPT
- ▁CORDIAL
- ▁COLOUR
- ▁PROOF
- ▁ESTATE
- ▁PARDON
- ▁ADVIS
- ▁ATTITUDE
- ▁IMPORTANCE
- ▁BOOT
- ▁SHOCK
- ▁FIR
- ▁PLENT
- ▁HIT
- ▁MEMBER
- ▁SUR
- ▁SEATED
- ▁MAG
- AVING
- ▁FAVOUR
- ▁REMARK
- ▁DIM
- ▁FAITHFUL
- ▁SAVED
- CHI
- ▁SIN
- THE
- ▁CONFIDENCE
- ▁EXTRAORDINARY
- ▁FORTUNATE
- ▁MISFORTUNE
- ▁PATIENCE
- ▁RELIGIOUS
- ▁SATISFACTION
- ▁POSITIVE
- ▁SIMILAR
- ▁EXCHANG
- ▁RETREAT
- ▁FLESH
- ▁ADMIRE
- ▁SPIRITUAL
- ▁DAWN
- ▁BURIED
- ▁URGE
- ▁SUNDAY
- ▁FOX
- ▁EMMA
- ▁NURSE
- ▁SNAPP
- ▁PARK
- ▁OBTAIN
- ▁RECOGNIZED
- ▁SPEED
- ▁MAGIC
- ▁LAWS
- ▁REMOVED
- ▁HAM
- ▁PRESERV
- ▁AID
- HOUSE
- ▁MENTIONED
- ▁CONSCIENCE
- ▁CONTEMPT
- ▁DETAIL
- ▁IMMENSE
- ▁NERVOUS
- ▁PRISCILLA
- ▁UNFORTUNATE
- ▁UNHAPPY
- ▁COMPLAIN
- ▁TWICE
- ▁WHISTL
- ▁SNAKE
- ▁WASHINGTON
- ▁PIRATE
- ▁WICKED
- ▁BODIES
- ▁DESIGN
- ▁JASON
- ▁VAGUE
- ▁CONSIST
- ▁GIFT
- ▁ANGEL
- ▁RODE
- ▁FOLD
- ▁BRIDE
- ▁ANGER
- ▁BASE
- ITUDE
- ▁CONCLUDED
- ▁ALTER
- ▁FRI
- ▁PANT
- ▁BID
- ▁HIGHEST
- ▁SAILOR
- MPLE
- ▁OBSERV
- ▁CHEERFUL
- IFICATION
- RID
- ▁DESCRIBED
- ▁BIN
- ▁JEWEL
- ▁ARTIST
- ▁PEER
- ▁NORA
- ▁SKI
- ▁DIAMOND
- ▁ENCOURAGE
- ▁PRIVILEGE
- ▁PROJECT
- ▁ANYBODY
- ▁ENCOUNTER
- ▁HOLLOW
- ▁YIELD
- ▁BOBBY
- ▁SAVAGE
- ▁SOMEBODY
- ▁OTHERWISE
- ▁PRAISE
- ▁PROBLEM
- ▁DISTRESS
- ▁UGLY
- ▁WARRIOR
- ▁MOURN
- ▁RELIEV
- ▁DESK
- ▁FOOLISH
- ▁STARTLED
- ▁SKILL
- SHONE
- ▁LONE
- ▁OBSERVATION
- ▁DENI
- ▁NEST
- ▁SOLDIER
- ▁RELATION
- ▁TRULY
- ▁VISITOR
- ▁OFFICERS
- ERSON
- ▁YA
- ▁EVIDENT
- ▁DREAMS
- ▁KEEPING
- ▁PLAINLY
- ▁DRUNK
- ▁EMBRAC
- ▁INTELLIGENCE
- ▁LIEUTENANT
- ▁PERSUADE
- ▁SURROUNDING
- ▁UNIVERSAL
- ▁GLEAM
- ▁SUPERIOR
- ▁WHEEL
- ▁JEALOUS
- ▁QUEER
- ▁PIERRE
- ▁MILK
- ▁RAIL
- ▁FLUSH
- ▁STAIRS
- ▁JESUS
- ▁HORN
- ▁REGION
- ▁SAFETY
- ▁KA
- ▁GUIDE
- ▁CAKE
- ▁CUP
- ▁INQUIRED
- ▁DEFI
- ▁LESSON
- ▁WRETCHED
- ▁PACE
- ▁TEST
- ▁READING
- ▁ENTIRE
- ▁NET
- ▁DOGS
- ▁COMMANDER
- ▁PRODUCE
- ▁GAINED
- ▁ARRIVAL
- ▁FAMILIAR
- ▁MEANWHILE
- ▁SUSPICION
- ▁CHOICE
- ▁IMPULSE
- ▁THRUST
- ▁PROCESS
- ▁SUMMON
- ▁SHEPHERD
- ▁HASTILY
- ▁GRASP
- ▁COUNTESS
- ▁STYLE
- ▁DWELL
- ▁MERIT
- ▁PITCH
- ▁HUNGRY
- ▁SPORT
- ▁LOUISE
- ▁STERN
- ▁PROVIDED
- ▁ASSUME
- ▁EARLIE
- ▁RAGE
- ▁U
- ▁RAPIDLY
- PORT
- ▁SUCCESSFUL
- ▁FLED
- ▁AGREE
- ▁CONDITIONS
- ▁RELATIONS
- ▁DREAD
- ▁NATURALLY
- ▁EARL
- ▁GAY
- ▁HYPNOTI
- ▁PUTT
- ▁GAZ
- ▁JIM
- ▁PAUS
- ▁PROPOS
- ▁ADMINISTRATION
- ▁ELEVEN
- ▁HOSPITAL
- ▁MAGISTRATE
- ▁STRIKE
- ▁DIGNITY
- ▁GLORY
- ▁BOTTLE
- ▁THRONE
- ▁RECKON
- ▁COSETTE
- ▁MOREOVER
- ▁APPLI
- ▁HIND
- ▁PRODUCT
- ▁POOL
- ▁TRIAL
- HAN
- ▁ERIC
- ▁CUB
- ▁PIECES
- ▁EXCEPTION
- ▁ENJOYED
- ▁DARED
- ▁TRU
- ▁CLOSELY
- ▁RAPID
- ▁AFFECTED
- ▁REQUIRE
- ▁SOFTLY
- ▁BROW
- UCK
- ▁MARKED
- ▁SEVENT
- ▁ELECT
- ▁FORGOT
- ▁CORRECT
- ▁FRANCS
- ▁MARGUERITE
- ▁SCIENCE
- ▁UNEXPECTED
- ▁FOUGHT
- ▁MILITA
- ▁THUNDER
- ▁VOYAGE
- ▁GANEM
- ▁FREEDOM
- ▁NODDED
- ▁CAPTURE
- ▁MORTAL
- ▁OWNER
- ▁POLITE
- ▁VISION
- ▁EDUCATION
- ▁GOVERNOR
- ▁RAV
- ▁REWARD
- ▁HASTE
- ▁REPEAT
- ▁DETERMIN
- ▁PITI
- ▁KNEE
- LINE
- ▁DEVOTED
- ▁INTERRUPTED
- ▁FOLKS
- ▁EXTREME
- ▁APPROACH
- ▁CONTINUE
- ▁BEARING
- ▁CHAP
- ▁ACQUAINTED
- ▁GLIMPSE
- ▁GRADUALLY
- ▁SUNSHINE
- ▁PRACTICE
- ▁SUPPLI
- ▁DAVID
- ▁DRIFT
- ▁SHOWING
- ▁LEVEL
- ▁PROMPT
- ▁QUARREL
- ▁REPRESENTATIVE
- ▁PLUNG
- ▁GIANT
- FALL
- ▁STOUT
- CHA
- WEPT
- ▁GLANC
- ▁SALT
- ▁CHOSEN
- ▁BUCK
- ▁REALIZED
- ▁REALITY
- ▁TUR
- ▁DRIVEN
- ▁CARD
- ▁PRAYER
- ▁TERM
- AID
- ▁HOLY
- ▁ENDURE
- ▁RANGE
- ▁HANG
- ▁SAM
- LAN
- ▁CAVE
- INA
- ▁GRI
- ▁SIGH
- ▁NEIGHBOUR
- ▁COUNCIL
- ▁EXERCISE
- ▁NAUTILUS
- ▁SOMEWHERE
- ▁SYLVIA
- ▁THOROUGH
- ▁VICTIM
- ▁BRIDGE
- ▁COMPELLED
- ▁INCLINED
- ▁OVERCOME
- ▁RESERVE
- ▁ARREST
- ▁PRECIOUS
- ▁DUTCH
- ▁OCEAN
- ▁ACQUIR
- ▁RECALL
- ▁DESTIN
- ▁ATTACH
- ▁SLIM
- ▁WEEP
- ▁CONSCIOUSNESS
- ▁TIGHT
- ▁WAKE
- ▁COMFORTABLE
- ▁ACTIVE
- ▁WINGS
- ▁GRIN
- ▁AFFECT
- ▁WHIT
- ▁IDEAL
- ▁EASTER
- ▁APPROACHING
- ▁CREATED
- ▁PLANS
- ▁INCREASE
- ▁FLYING
- ▁SHOUT
- OES
- MISSION
- ▁ARMED
- ABILITY
- ▁BLUSH
- ▁CONNECTION
- ▁MATTHEW
- ▁MEDICINE
- ▁REMIND
- ▁EXHIBIT
- ▁BLOCK
- ▁DESERVE
- ▁LISTENING
- ▁TITLE
- ▁FLOUR
- ▁FLAME
- ▁AGENT
- ▁USEFUL
- ▁BRIG
- ▁BOIL
- ▁ASSURED
- ▁REFLECTION
- ▁PINE
- ▁WAG
- ▁YOUNGER
- ▁BEARD
- ▁KINDNESS
- CTUALLY
- ▁ACTUAL
- ▁WEIGHT
- ▁LILY
- ▁IMPRESS
- ▁DESCRIBE
- ▁BEHELD
- ▁COMMUNITY
- ▁DESPERATE
- ▁DISPLAY
- ▁ENEMIES
- ▁MELANCHOLY
- ▁MIRROR
- ▁RECOMMEND
- ▁SPANISH
- ▁BLAME
- ▁VOLUME
- ▁SHOOT
- ▁COMBIN
- ▁SHAKING
- ▁SOUTHERN
- ▁MYSTERY
- ▁EVERYONE
- ▁COMMISSION
- ▁COMPOSED
- ▁UDO
- ▁IMAGE
- ▁DECEIV
- ▁FAILURE
- ▁PATTY
- ▁ALICE
- ▁FRAME
- ▁MODEST
- ▁MAGNIFICENT
- ▁BRANCHES
- ▁REIGN
- ▁RAG
- ▁PARISH
- ▁KATE
- ▁AMID
- ▁SLEEPING
- ▁ANNOUNCED
- ▁EAGERLY
- ▁WIRE
- ▁LAP
- ▁ARAB
- ▁EATING
- ▁RUM
- ▁CAREFUL
- ▁DISCUSS
- WORTH
- ▁DISTRICT
- ▁FOREHEAD
- ▁FRANCIS
- ▁INCIDENT
- ▁APPEAL
- ▁EMBARRASS
- ▁MAINTAIN
- ▁PRONOUNC
- ▁FURNISH
- ▁STRAIN
- ▁ELEMENT
- ▁SILK
- ▁FEAST
- ▁RECENT
- ▁DANCING
- ▁LODGE
- ▁ASHAMED
- ▁TRICK
- ▁BOBO
- ▁STUFF
- ▁ET
- ▁ASSERT
- ▁SANK
- ▁TREATMENT
- ECI
- ▁SWIM
- ▁BECOMING
- ▁SINGING
- ▁PLATE
- ▁SCATTERED
- ▁EXTREMELY
- ▁GRIM
- ▁SANG
- ▁FIGHTING
- ▁FACTOR
- ▁PAINFUL
- ▁HIDE
- ▁FUNN
- ▁AFTERWARD
- ▁FROG
- ▁VENTURE
- ▁DISAPPOINT
- ▁COMRADE
- ▁MONSIEUR
- ▁OBVIOUS
- ▁PASSENGER
- ▁PROFOUND
- ▁PUBLISH
- ▁ACCUSTOM
- ▁BLOOM
- ▁SMITH
- ▁RELATIVE
- ▁ACCUSE
- ▁MANIFEST
- ▁SOLID
- ▁MONSTER
- ▁MARIUS
- ▁CANDLE
- ▁PROCUR
- ▁INTERFERE
- ▁HOUSEHOLD
- ▁DEVELOPMENT
- ▁AGREEABLE
- ▁HALT
- ▁NECESSITY
- FOLD
- ▁CITIES
- ▁REGI
- ▁GLOOMY
- BBL
- ▁SEPARATED
- ▁CHEST
- ▁STRIP
- ▁SPAR
- ▁DUN
- ▁SETTLE
- ▁STARED
- ▁HANGING
- ▁FEATURES
- ▁PILE
- ▁ORIGIN
- ARIES
- ▁LION
- ▁ALI
- ▁ASTONISHMENT
- ▁COMPLIMENT
- ▁DELICATE
- ▁COUNSEL
- ▁FIFTH
- ▁SUPPRESS
- ▁BURDEN
- ▁COMPLEX
- ▁ADDITION
- ▁CRUSH
- ▁TWIST
- ▁PIANO
- ▁BRUSH
- ▁CHECK
- ▁ANNIE
- ▁SHELTER
- ▁IMPROV
- ▁WESTERN
- ▁LOCAL
- ▁APPLE
- ▁GREET
- ▁MASK
- ▁RUSSIAN
- ▁TOWER
- ▁CREW
- ▁TIP
- ▁WANDERING
- ▁READER
- ▁WANDERED
- ▁DESTROY
- ▁OBSERVE
- MORE
- ▁ESCAPED
- ▁PET
- ▁BUILD
- ▁REAR
- ▁DESTROYED
- HIN
- ▁OWE
- ▁RANG
- ▁TEAR
- ▁NED
- ▁OFFICER
- ▁TRAP
- ▁OCCUR
- ▁APPOINTED
- ▁ATMOSPHERE
- ▁CHOOSE
- ▁CONCLUSION
- ▁CULTIVAT
- ▁DESCRIPTION
- ▁ENORMOUS
- ▁EXHAUSTED
- ▁LANDSCAPE
- ▁NATASHA
- ▁PROSPECT
- ▁REFRESH
- ▁SPECIES
- ▁SURROUNDED
- ▁WEAPON
- ▁BLANK
- ▁DEFEND
- ▁EDITH
- ▁HORRIBL
- ▁BETRAY
- ▁FERKO
- ▁LABOUR
- ▁NEGRO
- ▁RESUMED
- ▁LEAF
- ▁MUSKET
- ▁INTENSE
- ▁MERCY
- ▁ADOPT
- ▁SCORE
- ▁DASH
- ▁LAWYER
- ▁SLOPE
- ▁CHUCK
- ▁ASSISTANCE
- ▁BROOK
- ▁BREAKING
- ▁ASSIST
- ▁GROAN
- ▁HELEN
- ▁BEHAV
- ▁MAIDEN
- ▁CRIS
- ▁SHOUTING
- ▁NAY
- ▁PIG
- ▁ACCORDINGLY
- ETTE
- ▁DESIR
- ▁RUB
- ▁GRU
- ▁PIT
- ▁HEAVI
- ▁OBTAINED
- ▁SPARE
- ▁BRANCH
- ▁COUNTER
- ▁APART
- ▁AMBITION
- ▁ASTONISHED
- ▁CORRESPOND
- ▁DRIVING
- ▁ENERGY
- ▁HISTORIAN
- ▁REVOLUTION
- ▁SWEEP
- ▁TREMBLING
- ▁CRAFT
- ▁FAMILIES
- ▁LITERATURE
- SBURG
- ▁FEMALE
- ▁TILNEY
- ▁GENEROUS
- ▁SUBMIT
- ▁INTELLECTUAL
- ▁ORCHARD
- ▁STORIES
- ▁DIANA
- ▁VEIN
- ▁TRIFL
- ▁TWIN
- ▁WORSHIP
- ▁MARBLE
- ▁GALLANT
- ▁SENSIBLE
- ▁NEAT
- ▁BROWNIE
- ▁JUNE
- ▁SHAW
- ▁WORST
- ▁USELESS
- ▁FISHING
- ▁CRYING
- ▁MAYBE
- ▁VARI
- ▁PRESERVE
- ▁VOL
- ▁EMPLOY
- ▁INTERRUPT
- ▁SLIGHTLY
- ▁ACCOMPLISHED
- NEY
- ▁STEAM
- ▁BALANC
- ▁LEANING
- ▁SIGHED
- ▁REFUSE
- ▁IMAGINED
- ▁DATE
- GROUND
- ▁ENTERTAIN
- ▁PERCEIVE
- ▁ABROAD
- ▁CHEESE
- ▁DESTRUCTION
- ▁ESSENTIAL
- ▁EXPEDITION
- ▁GRANDFATHER
- ▁INFINITE
- ▁LIBRARY
- ▁MULTITUDE
- ▁NEGLECT
- ▁SWALLOW
- ▁VILLEFORT
- ▁BELOVED
- ▁COMMITTEE
- ▁CONFIDENT
- ▁PURPLE
- ▁PURCHAS
- ▁SCRAP
- ▁SPOIL
- ▁LIKEWISE
- ▁EXTRA
- ▁STRAW
- ▁SALUT
- ▁SOURCE
- ▁HASTENED
- ▁RESENT
- ▁FLOCK
- ▁LOFT
- ▁FLO
- ▁CLO
- ▁CONVINCED
- ▁GOODNESS
- ▁HYPNOTIZ
- ▁SETTING
- ▁HAIL
- ▁PHI
- ▁GROVE
- ▁DISCOVERY
- ▁DAMP
- ▁WHISPER
- ▁LIFT
- ▁HOP
- ▁SUSPECTED
- ▁SCR
- OLI
- ▁FAC
- ▁BUSH
- ▁FOREVER
- ▁BARRICADE
- ▁CONSTITUTION
- ▁ENDEAVOR
- ▁ENTHUSIASM
- ▁EXECUTION
- ▁HYACINTH
- ▁PERCEVAL
- ▁PSYCHE
- ▁REPROACH
- ▁THIRTEEN
- ▁ABSORB
- ▁GRATITUDE
- ▁MERCER
- ▁REPUTATION
- ▁SCREAM
- ▁PUPIL
- ▁RETIRED
- ▁STEEP
- ▁SUMMIT
- ▁MISERABLE
- ▁STRICT
- ▁MINGLED
- ▁DEFEAT
- ▁REVEAL
- ▁LOVING
- ▁GOOSE
- ▁ECHO
- ▁AWAIT
- ▁MOOD
- ▁CRAWLEY
- ▁CELL
- ▁ENGAGEMENT
- ▁PRECED
- ▁SOMEONE
- ▁ARRANGEMENT
- ▁PICKET
- ▁GASP
- ▁HUMOR
- ▁INVITATION
- ▁JOB
- WITHSTAND
- ▁LAMENT
- ▁CLASSES
- ▁HUNGER
- ▁DISPOSED
- ▁STEAMER
- ▁FEARFUL
- ▁GER
- ▁FINAL
- ▁FLAG
- ▁JULY
- ▁DIG
- WORK
- ▁OPPOS
- ▁ANXIETY
- ▁AUDIENCE
- ▁BACHELOR
- ▁COLUMN
- ▁HANDKERCHIEF
- ▁IMPATIENT
- ▁JUDGMENT
- ▁KNIFE
- ▁SOVEREIGN
- ▁STRIKING
- ▁THOMPSON
- ▁EMPIRE
- ▁FULFIL
- ▁CONSULT
- ▁JENNY
- ▁THENARDIER
- ▁POYSER
- ▁FOURTEEN
- ▁JAPANESE
- ▁INDULG
- ▁MARTIAN
- ▁COUNTRIES
- ▁FETCH
- ▁CRITIC
- ▁ROBBER
- ▁CROOK
- ▁DEPARTURE
- ▁MABEL
- ▁PREACH
- ESCENT
- ▁WHIP
- ▁NAIL
- ▁DELIGHTFUL
- ▁DISCUSSION
- ▁SENTENCE
- ▁LANE
- ▁ENGINEER
- ▁ARRANGED
- MMY
- ▁LEST
- ▁RENT
- MMED
- ▁LIST
- ▁ROBE
- ▁MISSION
- ▁GRACEFUL
- ▁LIGHTN
- STONE
- COURT
- ▁CONCEPTION
- ▁CONTRACT
- ▁DROWN
- ▁EXPERIMENT
- ▁HITHERTO
- ▁PLAGUE
- ▁PORTHOS
- ▁SHRIEK
- ▁DETECT
- ▁ACCENT
- ▁ERECT
- ▁SAZEN
- ▁PROFIT
- ▁VIVID
- ▁SQUIRE
- ▁OPERATION
- ▁SMELL
- ▁SIMON
- ▁EXTENT
- ▁KEEN
- ▁EMERG
- ▁REVIV
- ▁REGIMENT
- ▁DISAPPOINTMENT
- ▁STOLE
- ▁DIVINE
- ▁GUILTY
- ▁COWARD
- ▁EXPECTATION
- ▁SIGNOR
- ▁MODE
- ▁CENTRE
- ▁FIL
- HOW
- ▁WEARI
- ▁TOTAL
- ▁VICTOR
- ▁GOVERN
- ▁RAISE
- ▁ABANDON
- ▁ABSURD
- ▁ASPECT
- ▁CRIMINAL
- ▁DEFINITE
- ▁DELIBERAT
- ▁FEATHER
- ▁FLORINA
- ▁MIDNIGHT
- ▁RICHMOND
- ▁SATISFY
- ▁SINGULAR
- ▁STEADILY
- ▁SUPREME
- ▁TIMBER
- ▁PSYCHOLOG
- ▁GESTURE
- ▁VALUABLE
- ▁INTERVAL
- ▁CONFUSION
- ▁FLUTTER
- ▁SACRED
- ▁DISEASE
- ▁UNDERTAKE
- ▁PENETRAT
- ▁MARVEL
- ▁NORTHERN
- ▁GRIEV
- ▁GENIUS
- ▁SADDLE
- ▁NOVEL
- ▁MISERY
- ▁CONVICTION
- ▁SINK
- ▁WAGON
- ▁ARISE
- ▁COMMENT
- ▁BARN
- UPON
- ▁FENCE
- ▁ASSOCIATION
- ▁BONES
- ▁IDLE
- ▁DOUBTFUL
- ▁PREPARATION
- IZZ
- ▁RAIS
- ▁BITTERLY
- ▁JOE
- ▁RELI
- ADI
- ▁METAL
- ▁EXACT
- ▁GLOOM
- FIELD
- ▁DANGLARS
- ▁DISGRACE
- ▁EXAMINATION
- ▁FASCINAT
- ▁GLITTER
- ▁INCREASING
- ▁MESSENGER
- ▁PATRIOT
- ▁PLATFORM
- ▁PROVISION
- ▁QUALITIES
- ▁SELECT
- ▁STEADY
- ▁POVERTY
- ▁POWDER
- ▁PROPHET
- ▁HOLLAND
- ▁TRUNK
- ▁VARIETY
- ▁PLANCHET
- ▁CONQUER
- ▁CONCEIVE
- ▁COMBAT
- ▁STOOP
- ▁SHIRT
- ▁GENERATION
- ▁COMMITTED
- ▁INSULT
- ▁CONFUSED
- ▁RADIAN
- ▁DEBT
- ▁IMITAT
- ▁DART
- ▁CAROLINE
- ▁SWAM
- ▁WREN
- ▁CHILDHOOD
- ▁BRAND
- ▁JOKE
- ▁FRIENDSHIP
- ▁DIRT
- ▁JOLL
- ▁BUSHES
- ▁MINK
- ▁ROUT
- ▁EQUALITY
- ▁HESITATED
- ▁BARK
- ▁ANTI
- ▁STATEMENT
- PHER
- ▁SUNK
- ▁DAT
- ▁BACKWARD
- ▁SUSPECT
- ▁OBJECTION
- ▁RAP
- ▁CHIN
- ▁MATE
- ▁REDUC
- ▁GREGG
- ▁ACCOMPANY
- ▁ANYWHERE
- ▁BENEFIT
- ▁CLERK
- ▁EXPENSE
- ▁FETNAH
- ▁INTERPRET
- ▁LUKASHKA
- ▁NUMEROUS
- ▁SURGEON
- ▁PUZZL
- ▁RESCUE
- ▁GRATEFUL
- ▁APPROV
- ▁RIVAL
- ▁NIECE
- ▁FLOOD
- ▁VANISHED
- ▁ERROR
- ▁BLAZ
- ▁TUMBL
- ▁WENDY
- ▁PERSIST
- ▁CONSOL
- ▁SOAP
- ▁HUMOUR
- ▁FITTED
- ▁HOUSEKEEPER
- ▁ENABL
- ▁OCCASIONALLY
- ▁HATRED
- ▁SWELL
- ▁WORRY
- ▁RUST
- ▁PURSUIT
- ▁INTIMATE
- ▁SEAL
- ▁COLLECTION
- ▁TREMBLED
- ▁DENY
- ▁HUMANITY
- ▁FATAL
- ▁COCK
- ▁DRIVER
- ▁HOPELESS
- ▁MISTAKEN
- ▁LUC
- ▁ACCOMPLISH
- ▁COAL
- ▁ACCORD
- ▁PURSE
- ▁SEPARATE
- ▁ARRIVE
- ▁SMOK
- ▁MADAM
- ▁ASSOCIAT
- ▁INSTRUCT
- ▁CELEBR
- ▁CHANNEL
- ▁CIVILIZATION
- ▁DOCTRINE
- ▁ENDEAVOUR
- ▁GLACIER
- ▁INTELLIGENT
- ▁INVOLVE
- ▁LEATHER
- ▁MUTTERED
- ▁OLENIN
- ▁PENCROFT
- ▁PERPLEX
- ▁SPECTATOR
- ▁UNIVERSITY
- ▁ATTAIN
- ▁INEVITABL
- ▁YONDER
- ▁ENCHANT
- ▁REPAIR
- ▁CURRENT
- ▁ASCEND
- ▁CREEK
- ▁SPARKL
- ▁RUE
- ▁BEAVER
- ▁INFANT
- ▁CONTINUALLY
- ▁CLASP
- ▁IRISH
- ▁ROLLIN
- ▁PUNISHMENT
- ▁LUNCH
- ▁AGONY
- ▁RUDE
- ▁DRAGG
- ▁INQUIRI
- ▁SEX
- ▁TERRIFI
- ▁ROBIN
- ▁PROFESSIONAL
- ▁SPUR
- ▁GRAIN
- ▁VINE
- ▁PENN
- ▁ROC
- ▁CHASE
- ▁INFORM
- ▁WRITER
- ▁AVO
- ▁TAP
- ▁CREAT
- ▁WHIL
- ▁BARR
- ▁ASSURE
- ▁CIRCUMSTANCE
- ▁OIL
- ▁ROUSE
- ▁COLUMB
- ▁CUNNING
- ▁DOMESTIC
- ▁GLORIOUS
- ▁INDIGNATION
- ▁PRECISELY
- ▁PRUDENCE
- ▁RAILROAD
- ▁SATURDAY
- ▁UTMOST
- ▁VIOLENCE
- ▁WHIRL
- ▁CALCULAT
- ▁OVERWHELM
- ▁PERPETUAL
- ▁QUARLES
- ▁SLENDER
- ▁TELEGRAPH
- ▁ALOUD
- ▁OPPRESS
- ▁CROPPER
- ▁CANADIAN
- ▁HERBERT
- ▁TIMID
- ▁SUPPLY
- ▁STROLL
- ▁CREEP
- ▁OATH
- ▁DUSK
- ▁EXCESS
- ▁HUMBLE
- ▁FURIOUS
- ▁RIDGE
- ▁BULLET
- ▁PONY
- ▁STATU
- ▁ENJOYMENT
- ▁CONWAY
- ▁DIFFICULTIES
- ▁PATCH
- ▁JOYCE
- ▁CLOCK
- ▁RESTORED
- ▁ARGU
- ▁WIG
- ▁CHATT
- ▁PLAC
- ▁REMOVE
- ▁TORN
- ▁DISAPPEAR
- TIME
- WELL
- ▁RECOGNIZE
- ▁FISHE
- ▁DECLARE
- ISTIC
- ▁AUTHOR
- ▁WHISK
- ▁COFFEE
- ▁COMPREHEND
- ▁DISGUISE
- ▁ELZEVIR
- ▁ENTERPRISE
- ▁HOLIDAY
- ▁HORIZON
- ▁IGNORANT
- ▁INTERVIEW
- ▁OLIVER
- ▁RONICKY
- ▁CAPACITY
- ▁DISPOSITION
- ▁EXTERNAL
- ▁OPPOSITION
- ▁REPUBLIC
- ▁WHEAT
- ▁CORPSE
- ▁DARLING
- ▁THRILL
- ▁INHABITANTS
- ▁ORNAMENT
- ▁SHIFT
- ▁RECOGNISE
- ▁SHIVER
- ▁BOAST
- ▁HINT
- ▁BOSTON
- ▁MULTI
- IFYING
- ▁STEAL
- ▁INSTRUCTIONS
- ▁ELECTRIC
- ▁SWING
- ▁SOOTH
- ▁SCALE
- ▁MORLAND
- ▁DISLIKE
- ▁FLATTER
- ▁COACH
- ▁LEIF
- ▁STAMP
- ▁ANYHOW
- ▁MOTIONLESS
- ▁ANDREA
- ▁LOSING
- ▁PAUL
- ▁CAROL
- ▁ADVANC
- ▁IMAGIN
- ▁CENTER
- ▁JAR
- ▁SUCCEED
- ▁DISMISS
- CTOR
- ▁RECEIV
- ▁DRAG
- ▁INTENT
- ▁BARBAR
- ▁PUNISH
- ▁ABRUPTLY
- ▁BERNARD
- ▁DECISION
- ▁INDEPENDENT
- ▁PROVINCE
- ▁SLEEVE
- ▁TREMENDOUS
- ▁UNPLEASANT
- ▁LEISURE
- ▁THRONG
- ▁THUMB
- ▁BANNER
- ▁CONTRADICT
- ▁RESTRAIN
- ▁DIVIDED
- ▁WRAPPED
- ▁HAUNT
- ▁SNEER
- CHESTER
- ▁JULIA
- ▁MILD
- ▁CONTACT
- ▁MEANTIME
- ▁NEEDLE
- ▁BLOT
- ▁BARREL
- ▁ISABELLA
- ▁THEATRE
- ▁ESTABLISHMENT
- ▁MARKET
- ▁CHINA
- ▁FORBID
- ▁PERISH
- ▁DOORWAY
- ▁CARLING
- ▁PERIL
- ▁PRIZE
- ▁HATCH
- ▁CURL
- ▁REFER
- ▁DEVOT
- EMBER
- MONT
- ▁CANOE
- ▁PROFESSION
- ▁CONVICT
- ▁CRAWL
- ▁ACTIVITY
- ▁BEWILDER
- ▁BREEZE
- ▁CONTEMPLAT
- ▁DISGUST
- ▁FATIGUE
- ▁MERRICK
- ▁PRAIRIE
- ▁REFORM
- ▁SPECTACLE
- ▁STUDENT
- ▁TUMULT
- ▁UNIFORM
- ▁VIGOROUS
- ▁CONDEMN
- ▁GENUINE
- ▁THOMAS
- ▁ARROW
- ▁PILLOW
- ▁FEEBLE
- ▁RALPH
- ▁SCHEME
- ▁COLLAR
- ▁JUSTINIAN
- ▁NERVE
- ▁OYSTER
- ▁BENNET
- ▁DUTIES
- ▁BINGLEY
- ▁CHRISTMAS
- ▁CONVEY
- ▁DESPIS
- ▁RATTL
- ▁GARMENTS
- ▁GOWN
- ▁BERYL
- ▁BARRIER
- ▁CHARACTERISTIC
- ▁MEDITAT
- ▁DISCOURSE
- ▁STAFF
- ▁KARA
- ▁MONTE
- ▁READILY
- ▁VENTUR
- ▁HENCE
- ▁ROPE
- ▁CRIES
- ▁ANGLE
- ▁RESPECTABLE
- ▁MOAN
- ▁OUTLINE
- BORN
- ▁FIX
- ▁INTEND
- LIA
- ▁CHILL
- ▁CREP
- ▁CHOSE
- ▁SPECULAT
- ▁ATTRIBUT
- ▁BUFFALO
- ▁ENTREAT
- ▁ENVELOP
- ▁FREDERICK
- ▁IMPATIENCE
- ▁INDIFFERENCE
- ▁INDUSTRY
- ▁INSTITUTION
- ▁LYNDE
- ▁RETAIN
- ▁TROUTINA
- ▁UNCOMFORTABL
- ▁VENGEANCE
- ▁JENKS
- ▁CONGRESS
- ▁SMART
- ▁THITHER
- ▁DISAGREE
- ▁IMPROVEMENT
- ▁PISTOL
- ▁GOSSIP
- ▁ETERNAL
- ▁BELIEF
- ▁SLEDGE
- ▁AROUSED
- ▁ORANGE
- ▁FASTENED
- ▁MONKEY
- ▁WITHDREW
- ▁OFFEND
- ▁PIERC
- ▁MOONLIGHT
- ▁OARS
- ▁GROOM
- ▁FIDDLER
- ▁BARBARA
- SHIRE
- ▁ATTENDANT
- ▁DIVERS
- ▁DUCK
- ▁PROPOSAL
- ▁GROWTH
- ▁CURATE
- ▁STEWAR
- ▁MOCK
- ▁SUCCESSION
- ▁CREATION
- ▁PARTIAL
- ▁SWU
- ▁FROST
- ▁EIGHTH
- ▁AWE
- ▁PERCH
- ▁LACE
- SPOON
- ▁ARRANGE
- SERIES
- ▁FOG
- ▁SCU
- ▁ABRAHAM
- ▁ADMIRAL
- ▁BARBICANE
- ▁CAMPAIGN
- ▁CONSEQUENTLY
- ▁CULTURE
- ▁GRAMMONT
- ▁GWYNPLAINE
- ▁HAPPILY
- ▁HOOPDRIVER
- ▁INDEPENDENCE
- ▁LEOPOLD
- ▁MISCHIEF
- ▁MONTGOMERY
- ▁NECESSARILY
- ▁PSYCHIC
- ▁RABBIT
- ▁REFUGE
- ▁RESPONSIBILIT
- ▁SENATOR
- ▁UNCERTAIN
- ▁MENSTRUA
- ▁FANNY
- ▁SUBSTANCE
- ▁APRIL
- ▁ELBOW
- ▁QUALITY
- ▁BORDER
- ▁BRUTAL
- ▁CARPET
- ▁SOLITAR
- ▁FROWN
- ▁SCENT
- ▁ANNOY
- ▁NAKED
- ▁BOSOM
- ▁CONSUM
- ▁TIGER
- ▁ITALIAN
- ▁PARSON
- ▁DECLIN
- ▁NEIGHBORHOOD
- ▁GREGGORY
- ▁EXCEED
- ▁SILLY
- ▁ICELAND
- ▁HIDEOUS
- ▁STRU
- ▁ALTERNAT
- ▁CABINET
- ▁ABILITY
- ▁BEECH
- ▁SECRETARY
- ▁CONTEST
- ▁MONK
- ▁PADD
- ▁EVA
- ▁CREST
- ▁FINISH
- ▁APPARENT
- ▁MIX
- ▁SLIP
- ▁LUXURI
- ▁AUTUMN
- ▁CIRCULAR
- ▁COMPOSITION
- ▁DISPLEAS
- ▁EXCELLENC
- ▁FURNITURE
- ▁GRADUATE
- ▁INDIFFERENT
- ▁JOSEPH
- ▁OCCUPATION
- ▁POSSIBILITY
- ▁RENEWED
- ▁RESPONDED
- ▁PREVAIL
- ▁HOARSE
- ▁PRACTIS
- ▁FAREWELL
- ▁JULIET
- ▁OVERHEAD
- ▁THREAD
- ▁APPLICATION
- ▁SOLITUDE
- ▁ADAPT
- ▁FALK
- ▁LARK
- ▁COARSE
- ▁MANKIND
- ▁KICK
- ▁BATTER
- ▁SOLICIT
- ▁RESIGN
- ▁MOTOR
- ▁STEEL
- ▁CONTRIV
- ▁AUTHORITIES
- ▁HARSH
- ▁FAVORITE
- ▁TALENT
- ▁FLEECE
- ▁AGITATION
- ▁ABBE
- ▁STUCK
- ▁HEDGE
- ▁BIBLE
- ▁RECOLLECTION
- ▁PARTNER
- ▁DAMON
- ▁SHINE
- ▁HOOK
- ▁CONFESSION
- ▁ASSENT
- ▁ELDE
- ▁BIGGE
- ▁PEACEFUL
- SCRIBED
- ▁WEIGH
- CARLET
- ▁DECIDE
- ▁RECOLLECT
- ▁BOHEMIA
- ▁CALIFORNIA
- ▁CONSTRUCT
- ▁DEMONSTRAT
- ▁DISTRIBUT
- ▁FRIGHTFUL
- ▁GNOME
- ▁IGNORANCE
- ▁JANUARY
- ▁JULIUS
- ▁MEMORIES
- ▁OCCUPY
- ▁PHRASE
- ▁WHIRLWIND
- ▁WILMINGTON
- ▁CARLINI
- ▁CHAUVELIN
- ▁ESTEEM
- ▁GENZABURO
- ▁GLOBE
- ▁LECOQ
- ▁MARGARET
- ▁MONARCH
- ▁NAPOLEON
- ▁SCORN
- ▁STAGGER
- ▁SUSTAIN
- ▁TRADITION
- ▁ADJUST
- ▁FROZEN
- ▁IMPRISON
- ▁LANTERN
- ▁MICHEL
- ▁STOMACH
- ▁TORRENT
- ▁WITHDRAW
- ▁FRANZ
- ▁POISON
- ▁SURVEY
- ▁BRITISH
- ▁ELEVAT
- ▁AWOKE
- ▁ESTHER
- ▁INHERIT
- ▁TRAVERS
- ▁STOPPING
- ▁IRELAND
- ▁COMPARATIVE
- ▁SOBB
- ▁FAVOURITE
- ▁CANVAS
- ▁CLOAK
- ▁GLAR
- ▁ASSISTANT
- ▁DAMAGE
- ▁PEAK
- ▁DISTINCTION
- FARE
- ▁DOLLAR
- ▁BEGGAR
- LUSIVE
- ▁MODEL
- ▁SECUR
- ▁DISPOS
- ▁SLID
- ▁PEA
- ▁SPEEDI
- HOLD
- ▁SNAP
- ▁CIGAR
- ▁AFFLICT
- ▁AMAZEMENT
- ▁LAUNCELOT
- ▁LEAGUE
- ▁MARIPOSA
- ▁POPULATION
- ▁UNEASY
- ▁BLOSSOM
- ▁CATERPILLAR
- ▁INCLINATION
- ▁SUSPEND
- ▁SYNDIC
- ▁TAYLOR
- ▁WILSON
- ▁CONTRAST
- ▁PORTRAIT
- ▁CORONER
- ▁GREEK
- ▁BUNDLE
- ▁BLEW
- ▁THORPE
- ▁ORPHAN
- ▁MUSCLE
- ▁DEAF
- ▁SURVIV
- ▁EXCEEDINGLY
- ▁TENDENC
- ▁ISRAEL
- ▁QUANTIT
- ▁PENSION
- ▁DRIED
- TEXT
- ▁REFERENCE
- ▁REPOSE
- ▁FOLLY
- ▁REPLACE
- ▁TERR
- ▁ANKLE
- ▁SUNLIGHT
- ▁SECURITY
- ▁SHOV
- ▁RAW
- CULAR
- ▁JACKET
- ▁TUNE
- ▁HOBB
- ▁MARTIN
- DUCED
- ▁FIST
- ▁BEGG
- ▁CHOK
- ▁INQUIRE
- ▁INTELLECT
- ▁AMUSEMENT
- ▁APPROPRIATE
- ▁CONGRATULAT
- ▁CONVENTION
- ▁DISCOURAG
- ▁EXQUISITE
- ▁FOUNTAIN
- ▁JUNIOR
- ▁NONSENSE
- ▁OBSTACLE
- ▁SPECIMEN
- ▁SWEAR
- ▁TRANQUIL
- ▁VEHICLE
- ▁WISDOM
- ▁ASCERTAIN
- ▁CAUTIOUS
- ▁CENTURIES
- ▁CORRUPT
- ▁EXPLOR
- ▁TURKEY
- ▁BARGAIN
- ▁CONFOUND
- ▁FUNCTION
- ▁GRACIOUS
- ▁MONICA
- ▁ILLUSTRAT
- ▁CRUMB
- ▁REMEDY
- ▁REMOTE
- ▁REVENGE
- ▁BABYLON
- ▁CAUTION
- ▁INTERIOR
- ▁CRISTEL
- ▁BRAZ
- ▁THIRST
- ▁PROBABLE
- ▁HARMONY
- ▁CHARITY
- ▁DECAY
- ▁COLONI
- ▁AVAIL
- ▁REPULS
- ▁ABSENT
- ▁PULSE
- ▁PRESUM
- ▁CRANE
- ▁NEIGHBOURHOOD
- ▁SUNSET
- ▁CANNON
- ▁GRAPE
- ▁SOFA
- ▁DRANK
- MINOUS
- ▁DECLARATION
- ▁CLOSING
- ▁MEEK
- ▁STARV
- ▁BUNCH
- ▁PERFORMANCE
- ▁ENTERTAINMENT
- ▁STRIV
- ▁EMILY
- ▁VALET
- MPOSED
- ▁INTIMA
- ▁POLISH
- ▁HIRE
- POST
- ▁TREMBLE
- ▁CEASE
- ▁VIRGIN
- ▁RUSSIA
- COURSE
- ▁EDUCAT
- BOUND
- ▁INHABIT
- ▁SUPERINTEND
- ▁BISCUIT
- ▁CHICAGO
- ▁CHOKICHI
- ▁CONFLICT
- ▁ENCLOS
- ▁EXCLUSION
- ▁EXECUTIVE
- ▁GRANDMOTHER
- ▁HEADQUARTERS
- ▁INFERIOR
- ▁INVISIBLE
- ▁MUTUAL
- ▁OPPONENT
- ▁SENSITIVE
- ▁STUDIED
- ▁TEMPORARY
- ▁UNWILLING
- ▁PERMANENT
- ▁BEDROOM
- ▁NOVEMBER
- ▁COMPLICAT
- ▁DEVOUR
- ▁SCRAMBL
- ▁SECTION
- ▁PROPOSITION
- ▁DEPRIV
- ▁RYNCH
- ▁PLEAD
- ▁TORTURE
- ▁SCOUT
- ▁PILOT
- ▁CHERISH
- ▁SPEAR
- ▁SUGAR
- ▁JASPER
- ▁STRAY
- ▁RIFLE
- ▁NORMAL
- ▁JERK
- ▁HONEY
- ▁AWAKENED
- ▁QUIVER
- ▁PYE
- ▁APPLY
- LICK
- JA
- ▁ANNOUNC
- FORE
- ▁ENGINE
- ▁HESITATE
- ▁PROVIDE
- ▁REALIZE
- ▁SEIZE
- ▁RESTORE
- MOUTH
- FOOT
- ▁DIFFER
- ▁ULTIMATE
- ▁ABUNDANCE
- ▁APPRECIATE
- ▁APPREHENSION
- ▁AVENUE
- ▁AWKWARD
- ▁CETERA
- ▁CHIMNEY
- ▁CLUTCH
- ▁CONVENIENT
- ▁CORRIDOR
- ▁DISTRACT
- ▁ELEGANT
- ▁ELSEWHERE
- ▁ENTHUSIASTIC
- ▁EXECUTE
- ▁EXTREMIT
- ▁JERUSALEM
- ▁MIRACLE
- ▁MONSTROUS
- ▁OBEDIENCE
- ▁OBSCURE
- ▁PHENOMENA
- ▁RESIDENCE
- ▁RESOURCE
- ▁REVOLT
- ▁SCIENTIFIC
- ▁SHIELD
- ▁SIMPSON
- ▁UNIVERSE
- VOLUNTARY
- ▁ATTENTIVE
- ▁BRENDA
- ▁DEPOSIT
- ▁MAXIM
- ▁REJECT
- ▁STIRRED
- ▁DISORDER
- ▁SERENE
- ▁TOBACCO
- ▁MILTON
- ▁BALLOON
- ▁STEPHEN
- ▁STRAIT
- ▁CHINESE
- ▁COURTEOUS
- ▁RELEASE
- ▁RECESS
- ▁COTTON
- ▁STUMP
- ▁TANK
- ▁PROMOTE
- ▁DERIVE
- ▁LOYAL
- ▁GRANIT
- ▁DISMAL
- ▁CATTLE
- ▁DOONE
- ▁CUPID
- DIGNIFIED
- ▁RIPE
- ▁EXILE
- ▁ANTIQU
- UMINAT
- ▁SUPPOS
- ▁WRETCH
- ▁IDENTI
- ▁EASI
- ▁SERV
- ▁QUEST
- TOWN
- ▁ACHIEVEMENT
- ▁APPETITE
- ▁BUCCANEER
- ▁COMMENCED
- ▁DELAWARE
- ▁DISCERN
- ▁IMMORTAL
- ▁INDIGNANT
- ▁JOSIANA
- ▁MECHANICAL
- ▁MUSKRAT
- ▁REVIEW
- ▁ROBARTS
- ▁SIGNIFICANT
- ▁SUBSEQUENT
- ▁YOURSELVES
- ▁ANGRILY
- ▁BORROW
- ▁SUBLIME
- ▁AFRICA
- ▁CHICKEN
- ▁DEGRAD
- ▁GEORGI
- ▁HUMILIAT
- ▁LODGING
- ▁REDCOAT
- ▁VIOLET
- ▁HOPKINS
- ▁RAWDON
- ▁PRICK
- ▁WHALE
- ▁FUNERAL
- ▁GUINEA
- ▁DISMAY
- ▁PORCH
- ▁HARVEST
- ▁PARCEL
- ▁SUBDU
- ▁SYRIA
- ▁PANIC
- ▁BOUGHS
- ▁CIGARETTE
- ▁CHRON
- ▁INQUIRY
- ▁CRYSTAL
- ▁SPELL
- ▁PLUCK
- ▁PATTERN
- ▁DARING
- ▁CRITICISM
- ▁DAINT
- ▁DISTURBANCE
- ▁BUTCHER
- ▁LITERA
- ▁ABUSE
- IXTURE
- ▁ANIMAT
- ▁WRIT
- ▁BELIEV
- ▁INDUCE
- COMING
- ▁DRAMA
- ▁AGITAT
- SHAW
- ▁IMPERFECT
- ▁MANUFACTURE
- ▁AFFIRM
- ▁ANGUISH
- ▁ARTIFICIAL
- ▁BIBBS
- ▁CHARLOTTE
- ▁CIRCUS
- ▁CONNISTON
- ▁CONSTITUTE
- ▁DAZZL
- ▁DEFECT
- ▁DISCHARG
- ▁ESCORT
- ▁EXAGGERAT
- ▁GWENDOLEN
- ▁IRRESISTIBL
- ▁PHILOSOPHY
- ▁PHOTOGRAPH
- ▁PILGRIM
- ▁PLEASING
- ▁QUIXOTE
- ▁RESPONSE
- ▁SCRATCH
- ▁SERGEANT
- ▁SHERIFF
- ▁SHUDDER
- ▁STRUCTURE
- ▁SUFFRAGE
- ▁SURRENDER
- ▁SWORE
- ▁VILLAIN
- ▁HESITATING
- ▁FLORENCE
- ▁IRRITAT
- ▁RIGID
- ▁SINISTER
- ▁STUDIO
- ▁RAFT
- ▁CHAMPION
- ▁PAVEMENT
- ▁WOLF
- ▁DEVICE
- ▁WRECK
- ▁HESITATION
- ▁LAZY
- ▁ADJO
- ▁DECENT
- ▁INTERVEN
- ▁WOOL
- ▁ILLUSION
- ▁HAWK
- ▁IMPART
- ▁LUNGS
- ▁WINNING
- ▁VITAL
- ▁CONSPI
- ▁SUBTLE
- ▁CONSTANC
- ▁HURL
- ▁AMIABL
- ▁FOLK
- GGY
- ▁NECESSIT
- ▁PROFESS
- WASH
- ▁ADMIRING
- ▁AMBITIOUS
- ▁ANTHONY
- ▁CEREMONY
- ▁CONTRIBUTE
- ▁CRAGGS
- ▁DETAIN
- ▁DISCLOS
- ▁DWELT
- ▁EGYPT
- ▁FELIX
- ▁JOURNAL
- ▁KWAIRYO
- ▁LIBERAL
- ▁LUMBER
- ▁OCTOBER
- ▁ORGANIZATION
- ▁POPULACE
- ▁PRECAUTION
- ▁PREJUDICE
- ▁PROCLAIM
- ▁PROPRIETOR
- ▁RESPONSIBLE
- ▁RHYTHM
- ▁RIDICULOUS
- ▁SCHOLAR
- ▁SQUEEZ
- ▁SUBSTITUTE
- ▁SURPASS
- ▁THRESHOLD
- ▁WHARTON
- ▁FLICKER
- ▁AMAZED
- ▁BRONZE
- ▁COSSACK
- ▁SPILETT
- ▁CASUAL
- ▁DARCY
- ▁PARLOUR
- ▁SEXUAL
- ▁INSECT
- ▁NATHAN
- ▁EMINENT
- ▁PENCIL
- ▁PETITION
- ▁ROTTEN
- ▁VIGIL
- ▁CAESAR
- ▁EAGLE
- ▁TREAD
- ▁REACTION
- ▁TACIT
- ▁PARLOR
- ▁SPAIN
- ▁WILDERNESS
- ▁DICTAT
- ▁GRATIFY
- ▁STOVE
- ▁SKIRT
- ▁UTILI
- ▁CONCERT
- ▁GORGE
- ▁DECORAT
- ▁LATIN
- ▁ANCHOR
- ▁KNOT
- ▁MONDAY
- ▁GABLES
- ▁TOLERABL
- ▁ROGER
- BERRIES
- ▁INVAD
- IMMER
- OMETER
- ▁PRODUC
- OBIL
- ▁PERMISSI
- FICIENCY
- ▁WANDER
- RREL
- PIECE
- HORN
- ▁COMMIT
- ▁ACCUMULAT
- ▁JAPAN
- ▁ABUNDANT
- ▁ACADEMY
- ▁ALBERT
- ▁BANQUET
- ▁DELICIOUS
- ▁DOCUMENT
- ▁EXCLAMATION
- ▁FEBRUARY
- ▁GROTESQUE
- ▁HEATHERSTONE
- ▁HUMPHREY
- ▁HURSTWOOD
- ▁MOHAMMED
- ▁MOSCOW
- ▁NICHOLAS
- ▁OBSTINATE
- ▁PHANTOM
- ▁PHILOSOPHER
- ▁RECEPTION
- ▁SPANIARD
- ▁SWOLLEN
- ▁TELEPHONE
- ▁TRIBUTE
- ▁TUNNEL
- ▁UNREASONABL
- ▁WIGWAM
- ▁BUTTERFLY
- ▁COLLINS
- ▁DISPATCH
- ▁EDITOR
- ▁CONTINENT
- ▁DIMINISH
- ▁HORRID
- ▁KEATS
- ▁PROVIDENCE
- ▁BEHALF
- ▁CHARLEY
- ▁DRAKE
- ▁LAUNCH
- ▁SALOON
- ▁GIGANT
- ▁DISPUTE
- ▁HYSTERI
- ▁DEFENCE
- ▁SCREEN
- ▁VAULT
- ▁NINTH
- ▁HARBOR
- ▁FLANK
- ▁SPECK
- ▁UPRIGHT
- ▁KEMP
- ▁CANADA
- ▁STALK
- ▁OWL
- ▁BRUTE
- ▁FERRIS
- ▁DECREE
- ▁HABITUAL
- ▁BRISK
- ▁INSPIRE
- ▁HUSH
- ▁CROUCH
- ▁FRIDAY
- ▁MOUNTAINEER
- ▁HISTORIC
- ▁BATES
- ▁RUSK
- ▁SEMI
- DICTION
- ▁BUSI
- ▁REMOV
- MMI
- ▁SUFFIC
- ▁FLEE
- ▁LOUIS
- NLEA
- ▁IMPORT
- OLOGY
- ▁CLERGY
- ▁ADVERTISEMENT
- ▁BENEVOLEN
- ▁BORODINO
- ▁CATHOLIC
- ▁COMMERCIAL
- ▁CONJECTURE
- ▁CURTAIN
- ▁CUTHBERT
- ▁DEMOCRACY
- ▁GUARANTEE
- ▁HYPNOSIS
- ▁INDEFINITE
- ▁INVESTIGATION
- ▁IRREGULAR
- ▁KOYO
- ▁MERRIWIG
- ▁MIRANDA
- ▁NICHOLL
- ▁ONLOOKER
- ▁PERSECUT
- ▁RECOGNITION
- ▁REJOICE
- ▁REMEMBRANCE
- ▁REVELATION
- ▁SCOLD
- ▁SENIOR
- ▁SQUIRREL
- ▁SYMPATHETIC
- ▁TEMPEST
- ▁TREACHER
- ▁UNDERNEATH
- ▁UNEASINESS
- ▁UNNECESSARY
- ▁UPSTAIRS
- ▁VEXATION
- ▁ACCESS
- ▁CHEAP
- ▁ESTIMATE
- ▁HAZARD
- ▁HORSEBACK
- ▁PLUNDER
- ▁RASCAL
- ▁ROSTOV
- ▁ACCUR
- ▁GRAVITY
- ▁SITUATED
- ▁INVARIABL
- ▁PLENTIFUL
- ▁SPENCER
- ▁WALLACE
- ▁POLICY
- ▁WARRANT
- ▁ENVY
- ▁LAMB
- ▁EXTRACT
- ▁CORRAL
- ▁PANEL
- ▁LINK
- ▁LILIES
- ▁BECKON
- ▁SENOR
- ▁BORG
- ▁DEBATE
- ▁STEER
- COGNI
- COMB
- ▁SETTL
- ▁VENERA
- ▁FEATURE
- ▁TERRIBL
- CAPABLE
- OLOGICAL
- ▁INCESSANT
- ▁RESOLUTE
- SHAUGHNESSY
- ▁ABOLITION
- ▁ASSASSIN
- ▁BEHAVIOUR
- ▁BLUNT
- ▁COMMERCE
- ▁CONSTANTINOPLE
- ▁CRICKET
- ▁DISCIPLINE
- ▁DROUET
- ▁DWARF
- ▁INJUSTICE
- ▁LUXURY
- ▁MANUSCRIPT
- ▁MISUNDERSTAND
- ▁POLITICIAN
- ▁REDOUBT
- ▁SALVATION
- ▁SERMON
- ▁STRUGGLING
- ▁SURPRISING
- ▁TRIGGER
- ▁TUESDAY
- ▁TWILIGHT
- ▁UNDOUBTEDLY
- ▁VEGETABLE
- ▁VULGAR
- ▁WAISTCOAT
- ▁WRINKLE
- ▁ALEXANDER
- ▁CEILING
- ▁ECONOMIC
- ▁EVERLASTING
- ▁INFLICT
- ▁LEVISON
- ▁LOBSTER
- ▁OVERFLOW
- ▁SNATCH
- ▁TRAGEDY
- ▁DEASEY
- ▁ENLIGHTEN
- ▁FRIGATE
- ▁INSPECT
- ▁MARVELLOUS
- ▁ATLANTIC
- ▁LUFTON
- ▁BLADE
- ▁CRASH
- ▁SLAUGHTER
- ▁ANNUAL
- ▁CONFERENCE
- ▁TWIG
- ▁REASSUR
- ▁UNIQUE
- ▁WRATH
- ▁CRADLE
- ▁HULLO
- ▁LIQUID
- ▁MIRTH
- ▁EXPERT
- ▁HARVEY
- ▁RESTORATION
- ▁PRETTI
- ▁APOLOGY
- ▁SLAIN
- ▁BARBER
- ▁UPROAR
- ▁SCANT
- ▁BADGER
- ▁GROCER
- ▁ACRES
- ▁BRIDLE
- ▁SPECIFI
- ▁TANGLE
- ▁FERTIL
- ▁PATRON
- WIXT
- LAMOUR
- ▁DARN
- ▁POPE
- ▁PERCEIV
- ▁CONCLUDE
- ▁SIMPL
- ▁GUILT
- ▁CARRIE
- EFFICIENT
- SGIVING
- ▁APPOINTMENT
- ▁APPRECIATION
- ▁CARTRIDGE
- ▁CHALLENGE
- ▁CRAYFISH
- ▁CRIMSON
- ▁CUCUMETTO
- ▁ENERGETIC
- ▁EPOCH
- ▁EXAMINING
- ▁EXTENSIVE
- ▁EXTINGUISH
- ▁GLOODY
- ▁INSIGNIFICANT
- ▁LANDLORD
- ▁LANGUID
- ▁LEGISLATURE
- ▁MAJESTIC
- ▁PACIFIC
- ▁PASTRINI
- ▁PHRONSIE
- ▁RECONCIL
- ▁SIMULTANEOUS
- ▁SKELETON
- ▁SKETCH
- ▁TRANSFORM
- ▁UNJUST
- ▁VEXED
- ▁ASYLUM
- ▁CLUSTER
- ▁ERRAND
- ▁EXPEND
- ▁NEGATIVE
- ▁NORHALA
- ▁SCANDAL
- ▁STIMULAT
- ▁SWEAT
- ▁COMPOUND
- ▁DECEMBER
- ▁EXPAND
- ▁PROLONG
- ▁PURITAN
- ▁CONQUEST
- ▁MAGUA
- ▁SANCHO
- ▁TRENCH
- ▁ENTITLE
- ▁PEPPER
- ▁DISASTER
- ▁REGAIN
- ▁SHREWD
- ▁SULLEN
- ▁CLAVIER
- ▁COLOSS
- ▁SHILLING
- ▁ETHEL
- ▁MYSTERIES
- ▁BULK
- ▁GRANDEUR
- ▁AGNES
- ▁CONVERT
- ▁WRIST
- ▁GLID
- ▁TERRACE
- ▁SONYA
- ▁DANTES
- ▁MOULD
- ▁MAGNET
- ▁PLOT
- RANK
- ▁CAVIT
- ▁SUBSID
- ▁SLAP
- TURNED
- ▁THREAT
- BREAK
- ▁ANCESTORS
- ▁ANTICIPATED
- ▁APPLAUSE
- ▁ASSAULT
- ▁ATTORNEY
- ▁AUTOMATIC
- ▁CARAVAN
- ▁CATASTROPHE
- ▁CAVALCANTI
- ▁CROMWELL
- ▁ENVOY
- ▁EXHAUSTION
- ▁FIEND
- ▁GENEROSITY
- ▁GIMBLET
- ▁HARDQUANONNE
- ▁HOUARN
- ▁INJURY
- ▁MACKINSON
- ▁OGLETHORPE
- ▁PETTICOAT
- ▁RASPBERR
- ▁REHNHJELM
- ▁REJOICING
- ▁REMNANT
- ▁SCOTLAND
- ▁SHRINK
- ▁STANDPOINT
- ▁TESTIMONY
- ▁THEREAFTER
- ▁THIRTIETH
- ▁TWENTIETH
- ▁TYRANT
- ▁VENTNOR
- ▁VETERAN
- ▁WHITTAKER
- ▁ZVERKOV
- ▁ARCHITECTUR
- ▁BLUNDER
- ▁DENSHER
- ▁FORTNIGHT
- ▁JUDITH
- ▁MARIANNE
- ▁MEMORABLE
- ▁REFINED
- ▁REVOLV
- ▁UNDERTAKING
- ▁CLUMP
- ▁GRUMBLE
- ▁SYMPATHI
- ▁TICKET
- ▁TWITCH
- ▁EDITION
- ▁FALANDER
- ▁CARTHAGE
- ▁ORLEANS
- ▁POSSUM
- ▁SWITCH
- ▁CLUNG
- ▁CARDINAL
- ▁GNAW
- ▁LOCATED
- ▁HARROW
- ▁RASH
- ▁SIEGE
- ▁LOAF
- ▁BRUISE
- ▁REGULAT
- ▁RESORT
- ▁SARAH
- ▁LEVIN
- ▁NAVY
- ▁MOOSE
- ▁STOOL
- ▁CHANCELLOR
- ▁INGENIOUS
- ▁CHALK
- ▁PRETENCE
- ▁REPAY
- ▁ROAST
- ▁PLUTO
- ▁BAFFL
- ▁STUMBL
- ▁SPHERE
- ▁PLEDGE
- ▁SPRAWL
- ▁WRAP
- ▁FRINGE
- ▁DREAR
- ARRINGTON
- ▁FEDERA
- KEEPER
- ▁PHYSIC
- ▁ADVENT
- HUMAN
- OLOGIST
- ▁ALEXANDR
- ▁APPARITION
- ▁BARTHOLEMY
- ▁CITOYEN
- ▁CLIMATE
- ▁CONTEMPORAR
- ▁DESOLATE
- ▁DISCONTENT
- ▁ELEPHANT
- ▁FERNANDO
- ▁FERRALTI
- ▁FOLIAGE
- ▁FUGITIVE
- ▁GAMBLING
- ▁INVOLUNTARILY
- ▁LABYRINTH
- ▁LEGITIMATE
- ▁MILLIONAIRE
- ▁PERCEPTION
- ▁PROPRIETY
- ▁REBELLION
- ▁REFRAIN
- ▁RUGGLES
- ▁SCRIPTURE
- ▁SPLENDOR
- ▁SQUADRON
- ▁STRICKEN
- ▁SWARM
- ▁THEODORA
- ▁TOMORROW
- ▁VELVET
- ▁WOLVES
- ▁DISREGARD
- ▁GLIMMER
- ▁SHROUD
- ▁TWINKLING
- ▁UNEQUAL
- ▁CHANNING
- ▁CLUMS
- ▁ENIGMA
- ▁NAVIGAT
- ▁TARKAS
- ▁TEMPERATURE
- ▁DIVISION
- ▁GRATIFICATION
- ▁MONUMENT
- ▁SQUEAK
- ▁KAVIN
- ▁INTERPOSE
- ▁THORNTON
- ▁SOLUTION
- ▁STREAK
- ▁SHRILL
- ▁APRON
- ▁PITEOUS
- ▁HAUGHTY
- ▁RECKLESS
- ▁EMPTI
- ▁WADMAN
- ▁BONNET
- ▁MARTHA
- ▁DUMB
- ▁SHATTER
- ▁ACUTE
- ▁BRINK
- ▁CAPRICE
- ▁HURON
- ▁INFERN
- ▁FOWL
- ▁ENRAGE
- ▁ADORN
- ▁CRUIS
- ▁PROBABILIT
- ▁EXPIR
- ▁IMPETU
- ▁OVERHEAR
- BURTON
- ▁TRANSLAT
- ▁ENGAGE
- ▁CONVINCE
- ▁ABNORMAL
- ▁GESTICULAT
- ▁ABOMINABL
- ▁ADVERSARY
- ▁ADVERTISER
- ▁ADVERTISING
- ▁ANNIHILAT
- ▁ARTILLERY
- ▁CATHEDRAL
- ▁COMPETITOR
- ▁COULSON
- ▁CREVICE
- ▁CUSHION
- ▁DEBRAY
- ▁DEJECT
- ▁DIETRICH
- ▁DISADVANTAGE
- ▁ELLISON
- ▁EMPHASIS
- ▁EXCURSION
- ▁FANTASTIC
- ▁HYPOTHES
- ▁INCONVENIENCE
- ▁INDESCRIBABLE
- ▁INDUSTRI
- ▁INVALID
- ▁MERCILESS
- ▁MESOPOTAMIA
- ▁MOSQUITO
- ▁NARRATIVE
- ▁NOWADAYS
- ▁OPPORTUNITIES
- ▁PROMISING
- ▁RECTANGLE
- ▁REMONSTRANCE
- ▁RESTAURANT
- ▁RIBBON
- ▁SCIENTIST
- ▁SHALMANESER
- ▁SKULL
- ▁SPRUCE
- ▁SUBSTANTIAL
- ▁SYMBOL
- ▁TEAPOT
- ▁TERRITORY
- ▁TRAFFIC
- ▁TREASON
- ▁TRUMPET
- ▁TYRANN
- ▁UNANIMOUS
- ▁UNAWARE
- ▁VICINITY
- ▁WREATH
- ▁ZADIG
- ▁CHATEAU
- ▁CONFRONT
- ▁DUCHESS
- ▁EMBODI
- ▁FEMININ
- ▁FURNACE
- ▁MONTONI
- ▁RENOWN
- ▁SMASH
- ▁HARVARD
- ▁NEWBERRY
- ▁PERFUME
- ▁SIGNATURE
- ▁SPLASH
- ▁SUPPOSITION
- ▁HARBOUR
- ▁ASSURANCE
- ▁BRISTOL
- ▁BUCKINGHAM
- ▁DUDLEY
- ▁INTENSITY
- ▁CHOPIN
- ▁ENLIST
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
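The config above models text with a unigram BPE vocabulary of 5000 pieces (`token_type: bpe`, `bpemodel: data/en_token_list/bpe_unigram5000/bpe.model`). As a rough illustration only — the `.model` file is produced by the recipe and is assumed to exist locally after running it — the pieces listed in `token_list` can be inspected with the `sentencepiece` library:

```python
# Minimal sketch: inspect the unigram-5000 BPE model referenced in the config.
# Assumes the recipe has been run so that bpe.model exists at this path.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("data/en_token_list/bpe_unigram5000/bpe.model")

# Transcripts in this corpus are upper-cased; the "▁" marker denotes a word
# boundary, as in the token_list above (e.g. "▁GENEROUS", "▁SUBMIT").
print(sp.EncodeAsPieces("GENEROUS PEOPLE SUBMIT"))
print(sp.GetPieceSize())  # expected to be 5000 for this recipe
```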
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech_100"]}
|
jkang/espnet2_librispeech_100_conformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:librispeech_100",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-librispeech_100 #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
ESPnet2 ASR model
-----------------
### 'jkang/espnet2\_librispeech\_100\_conformer'
* This model was trained by jaekookang using librispeech\_100 recipe in espnet.
* Gradio Demo: ESPNet2 ASR Librispeech Conformer
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Fri Feb 11 01:42:52 KST 2022'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.10.1'
* Git hash: '140704c146f8beeed74973f5258379f6133dcdfb'
+ Commit date: 'Tue Feb 8 16:06:02 2022 -0500'
* GPU: NVIDIA GeForce RTX 3090 (single GPU took: 13h)
asr\_conformer\_lr2e-3\_warmup15k\_amp\_nondeterministic
--------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'jkang/espnet2\\_librispeech\\_100\\_conformer'\n\n\n* This model was trained by jaekookang using librispeech\\_100 recipe in espnet.\n* Gradio Demo: ESPNet2 ASR Librispeech Conformer",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Fri Feb 11 01:42:52 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '140704c146f8beeed74973f5258379f6133dcdfb'\n\t+ Commit date: 'Tue Feb 8 16:06:02 2022 -0500'\n* GPU: NVIDIA GeForce RTX 3090 (single GPU took: 13h)\n\n\nasr\\_conformer\\_lr2e-3\\_warmup15k\\_amp\\_nondeterministic\n--------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-librispeech_100 #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"### 'jkang/espnet2\\_librispeech\\_100\\_conformer'\n\n\n* This model was trained by jaekookang using librispeech\\_100 recipe in espnet.\n* Gradio Demo: ESPNet2 ASR Librispeech Conformer",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Fri Feb 11 01:42:52 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '140704c146f8beeed74973f5258379f6133dcdfb'\n\t+ Commit date: 'Tue Feb 8 16:06:02 2022 -0500'\n* GPU: NVIDIA GeForce RTX 3090 (single GPU took: 13h)\n\n\nasr\\_conformer\\_lr2e-3\\_warmup15k\\_amp\\_nondeterministic\n--------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `jkang/espnet2_librispeech_100_conformer_char`
This model was trained by jaekookang using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 82a0a0fa97b8a4a578f0a2c031ec49b3afec1504
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_librispeech_100_conformer_char
```
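Besides re-running the recipe, the released checkpoint can also be loaded directly for inference. The sketch below is not part of the original card; it assumes `espnet_model_zoo` and `soundfile` are installed and uses a hypothetical `sample.wav` (16 kHz mono, matching the `fs: 16k` frontend in the config further down):

```python
# Minimal inference sketch: download the released model from the Hub and
# decode a single 16 kHz waveform.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "jkang/espnet2_librispeech_100_conformer_char"
)

speech, rate = soundfile.read("sample.wav")  # hypothetical input file
assert rate == 16000

text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```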
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Feb 24 17:47:04 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `82a0a0fa97b8a4a578f0a2c031ec49b3afec1504`
- Commit date: `Wed Feb 23 08:06:47 2022 +0900`
## asr_conformer_lr2e-3_warmup15k_amp_nondeterministic_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|93.9|5.6|0.5|0.7|6.8|57.1|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|82.5|15.7|1.8|1.9|19.3|82.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|93.8|5.7|0.6|0.7|6.9|58.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|82.2|15.9|2.0|1.7|19.5|83.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|98.3|1.0|0.7|0.7|2.4|57.1|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|93.3|4.1|2.6|1.9|8.7|82.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|98.3|1.0|0.7|0.6|2.3|58.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|93.2|4.1|2.7|1.8|8.6|83.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_char.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_lr2e-3_warmup15k_amp_nondeterministic_char
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1600000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- C
- W
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_char_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
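For reference, `token_type: char` means transcripts are modelled over the upper-case character inventory listed in `token_list`, with `<space>` standing in for the word separator. The snippet below is a hand-rolled approximation of that mapping, not ESPnet's actual preprocessor class; in the real pipeline, characters outside the vocabulary would map to `<unk>`:

```python
# Illustrative only: approximate how a transcript maps onto the character
# vocabulary above (upper-case letters, apostrophe, and the <space> symbol).
def text_to_char_tokens(text: str) -> list[str]:
    tokens = []
    for ch in text.upper():
        tokens.append("<space>" if ch == " " else ch)
    return tokens

print(text_to_char_tokens("that's all"))
# ['T', 'H', 'A', 'T', "'", 'S', '<space>', 'A', 'L', 'L']
```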
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech_100"]}
|
jkang/espnet2_librispeech_100_conformer_char
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:librispeech_100",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-librispeech_100 #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
ESPnet2 ASR model
-----------------
### 'jkang/espnet2\_librispeech\_100\_conformer\_char'
This model was trained by jaekookang using librispeech\_100 recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Thu Feb 24 17:47:04 KST 2022'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.10.1'
* Git hash: '82a0a0fa97b8a4a578f0a2c031ec49b3afec1504'
+ Commit date: 'Wed Feb 23 08:06:47 2022 +0900'
asr\_conformer\_lr2e-3\_warmup15k\_amp\_nondeterministic\_char
--------------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'jkang/espnet2\\_librispeech\\_100\\_conformer\\_char'\n\n\nThis model was trained by jaekookang using librispeech\\_100 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Thu Feb 24 17:47:04 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '82a0a0fa97b8a4a578f0a2c031ec49b3afec1504'\n\t+ Commit date: 'Wed Feb 23 08:06:47 2022 +0900'\n\n\nasr\\_conformer\\_lr2e-3\\_warmup15k\\_amp\\_nondeterministic\\_char\n--------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-librispeech_100 #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"### 'jkang/espnet2\\_librispeech\\_100\\_conformer\\_char'\n\n\nThis model was trained by jaekookang using librispeech\\_100 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Thu Feb 24 17:47:04 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '82a0a0fa97b8a4a578f0a2c031ec49b3afec1504'\n\t+ Commit date: 'Wed Feb 23 08:06:47 2022 +0900'\n\n\nasr\\_conformer\\_lr2e-3\\_warmup15k\\_amp\\_nondeterministic\\_char\n--------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `jkang/espnet2_librispeech_100_conformer_word`
This model was trained by jaekookang using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 82a0a0fa97b8a4a578f0a2c031ec49b3afec1504
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_librispeech_100_conformer_word
```
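As with the character model above, the released checkpoint can be decoded with directly; the difference at inference time is that hypotheses are built from the word-level vocabulary shown in the config below, so out-of-vocabulary words surface as `<unk>`. A minimal sketch, assuming `espnet_model_zoo` is installed and `sample.wav` is a hypothetical 16 kHz recording:

```python
# Minimal decoding sketch for the word-level model (not part of the original card).
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "jkang/espnet2_librispeech_100_conformer_word"
)

speech, _ = soundfile.read("sample.wav")      # hypothetical 16 kHz input
text, tokens, _, _ = speech2text(speech)[0]   # best hypothesis
print(tokens)  # word-level tokens, e.g. ['THE', 'HOUSE', '<unk>', ...]
print(text)
```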
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Feb 22 17:38:22 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `e79e7185780b90e56618859855a038b4369b002c`
- Commit date: `Tue Feb 22 15:34:12 2022 +0900`
## asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|91.0|8.4|0.6|1.0|10.0|70.1|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|82.9|15.6|1.5|2.5|19.6|83.3|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|90.7|8.7|0.6|1.0|10.3|71.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|82.1|16.1|1.7|2.3|20.2|85.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|95.7|2.6|1.7|1.3|5.6|70.1|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|91.0|5.6|3.4|2.5|11.5|83.3|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|95.7|2.7|1.7|1.2|5.5|71.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|90.9|5.6|3.6|2.5|11.6|85.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word_sp/train/speech_shape
- exp/asr_stats_raw_en_word_sp/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word_sp/valid/speech_shape
- exp/asr_stats_raw_en_word_sp/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- THE
- AND
- OF
- TO
- A
- IN
- I
- WAS
- HE
- THAT
- IT
- HIS
- HAD
- AS
- WITH
- YOU
- FOR
- HER
- BUT
- IS
- NOT
- SHE
- AT
- 'ON'
- BE
- HIM
- THEY
- BY
- HAVE
- THIS
- MY
- WERE
- WHICH
- ALL
- FROM
- SO
- SAID
- ONE
- ME
- WE
- THERE
- THEIR
- 'NO'
- WHEN
- AN
- OR
- THEM
- WOULD
- IF
- WHO
- ARE
- BEEN
- WHAT
- UP
- THEN
- OUT
- COULD
- WILL
- INTO
- MORE
- SOME
- VERY
- MAN
- DO
- NOW
- LITTLE
- ABOUT
- YOUR
- DID
- THAN
- TIME
- LIKE
- UPON
- WELL
- HAS
- ONLY
- TWO
- OTHER
- ANY
- OUR
- MADE
- AFTER
- BEFORE
- ITS
- DOWN
- OVER
- SUCH
- OLD
- SEE
- THESE
- KNOW
- CAME
- DAY
- GREAT
- US
- MISTER
- GOOD
- SHOULD
- MUCH
- CAN
- HOW
- WAY
- NEVER
- MUST
- COME
- AGAIN
- BACK
- FIRST
- WHERE
- GO
- HIMSELF
- OWN
- LONG
- MAY
- MEN
- EVEN
- WENT
- SAY
- JUST
- MIGHT
- HERE
- THROUGH
- EYES
- MAKE
- TOO
- WITHOUT
- HOUSE
- THINK
- THOSE
- THOUGHT
- MANY
- MOST
- EVERY
- LIFE
- AWAY
- BEING
- STILL
- AM
- WHILE
- NOTHING
- DON'T
- LAST
- THOUGH
- YOUNG
- YET
- FOUND
- PEOPLE
- THREE
- 'OFF'
- HAND
- GET
- TAKE
- ASKED
- SAW
- SAME
- NIGHT
- MISSUS
- HEAD
- RIGHT
- LEFT
- ANOTHER
- TELL
- ONCE
- SHALL
- PLACE
- EVER
- TOOK
- FACE
- SEEMED
- ALWAYS
- ROOM
- NEW
- UNDER
- WHY
- TOLD
- LOOKED
- HEARD
- PUT
- BECAUSE
- THINGS
- SOMETHING
- LET
- GOING
- GIVE
- LOOK
- SOON
- THING
- MIND
- FATHER
- LOVE
- KNEW
- EACH
- FAR
- AGAINST
- HAVING
- HEART
- MOTHER
- WORLD
- FEW
- BEGAN
- 'YES'
- MISS
- DOOR
- BETTER
- WORK
- HOME
- MOMENT
- YEARS
- ENOUGH
- SIR
- DONE
- GOT
- SIDE
- SEEN
- WOMAN
- CALLED
- IT'S
- WHOLE
- BETWEEN
- FELT
- KING
- MORNING
- HERSELF
- FIND
- TURNED
- HOWEVER
- WHITE
- ALSO
- HALF
- PERHAPS
- GIRL
- REPLIED
- HUNDRED
- QUITE
- OH
- MYSELF
- PART
- WATER
- COURSE
- VOICE
- POOR
- BOTH
- NAME
- GAVE
- HANDS
- WHOM
- DAYS
- ALMOST
- AMONG
- SET
- TOGETHER
- WORDS
- UNTIL
- ANYTHING
- FEET
- NEXT
- WANT
- STOOD
- FOUR
- I'M
- BROUGHT
- BEST
- LIGHT
- OTHERS
- FIVE
- LOOKING
- SMALL
- ALONG
- NOR
- NEAR
- RATHER
- SINCE
- BELIEVE
- PASSED
- DOES
- MONEY
- OPEN
- LAY
- END
- INDEED
- ROUND
- KIND
- FULL
- TWENTY
- CRIED
- TAKEN
- SURE
- MATTER
- WORD
- DEAR
- GONE
- COUNTRY
- WHOSE
- ANSWERED
- LESS
- HIGH
- THEMSELVES
- SAT
- AIR
- BLACK
- BEHIND
- POWER
- 'TRUE'
- UNCLE
- AROUND
- NATURE
- CHILD
- DEATH
- DURING
- CERTAIN
- REST
- KEEP
- JOHN
- OFTEN
- TILL
- WOMEN
- ALREADY
- CHILDREN
- THUS
- PRESENT
- HOPE
- LARGE
- LADY
- BECAME
- RETURNED
- WIFE
- CANNOT
- WISH
- DIDN'T
- GOD
- BOY
- SENT
- GIVEN
- LEAVE
- ALONE
- CASE
- SHORT
- BODY
- LAND
- EVERYTHING
- COMING
- GENERAL
- SAYS
- REALLY
- HELD
- DOCTOR
- ABOVE
- GROUND
- FELL
- FIRE
- HELP
- THOUSAND
- SPEAK
- EVENING
- FACT
- CITY
- SOMETIMES
- HEAR
- ORDER
- STATE
- FRIEND
- KEPT
- WITHIN
- POINT
- FRIENDS
- LEAST
- MASTER
- HOUR
- THAT'S
- USE
- FAMILY
- CARE
- MAKING
- WHETHER
- BEAUTIFUL
- SIGHT
- TIMES
- SUDDENLY
- BED
- SIX
- I'LL
- DEAD
- EITHER
- CALL
- ITSELF
- USED
- ABLE
- TOWARDS
- DARK
- MANNER
- MEAN
- SEVERAL
- CAPTAIN
- LOST
- APPEARED
- STORY
- TOWN
- KNOWN
- BIG
- POSSIBLE
- THOU
- FINE
- MEANS
- SEA
- SECOND
- CONTINUED
- STRANGE
- SON
- RED
- HUMAN
- LORD
- HARD
- PERSON
- STREET
- REACHED
- FEEL
- CLOSE
- HAIR
- QUESTION
- ARMS
- ROSE
- THEREFORE
- BECOME
- LONGER
- FOLLOWED
- BUSINESS
- UNDERSTAND
- YEAR
- TABLE
- SORT
- HAPPY
- DIFFERENT
- SOUND
- ACROSS
- LIVE
- CERTAINLY
- WINDOW
- MET
- TREE
- BLUE
- NEED
- ELSE
- WAR
- TURN
- WANTED
- FELLOW
- READ
- TOWARD
- REASON
- READY
- OUGHT
- EARTH
- ASK
- CARRIED
- LIVED
- GREEN
- TEN
- FEELING
- IDEA
- ANSWER
- RUN
- PRINCE
- BROTHER
- COLD
- LATER
- EIGHTEEN
- CHURCH
- FEAR
- ALTHOUGH
- ADDED
- STRONG
- PARTY
- SHOW
- EYE
- PETER
- RIVER
- CAN'T
- TAKING
- SUPPOSE
- GIRLS
- PRINCESS
- FOOT
- TREES
- BOOK
- PRETTY
- ENTERED
- ROAD
- HOURS
- SLEEP
- FALL
- RECEIVED
- CLEAR
- LOW
- FREE
- LETTER
- TRIED
- SPOKE
- PAST
- FORM
- DOUBT
- TALK
- BEYOND
- DAUGHTER
- OPENED
- LIVING
- SAYING
- HOLD
- NUMBER
- DOING
- HORSE
- SCHOOL
- BOYS
- ENGLAND
- O
- LED
- DEEP
- I'VE
- GLAD
- ENGLISH
- THY
- THEE
- RETURN
- HUSBAND
- EIGHT
- RAN
- STRUCK
- ILL
- SEVEN
- SNOW
- SOUL
- AGE
- MILES
- TRUTH
- FORWARD
- SUN
- WALKED
- AH
- POSITION
- BEAUTY
- MEET
- NEARLY
- WON'T
- SPIRIT
- SEEM
- NONE
- LATE
- BAD
- STANDING
- WONDER
- CUT
- SILENCE
- EARLY
- IMMEDIATELY
- WIND
- SENSE
- CHANCE
- HAPPENED
- REMEMBER
- GREW
- FRONT
- CAUGHT
- BRING
- NEITHER
- YOURSELF
- WILD
- GARDEN
- BLOOD
- MINUTES
- LINE
- FURTHER
- COMPANY
- THIRTY
- FORCE
- TROUBLE
- SEEMS
- FILLED
- ARM
- AFRAID
- ATTENTION
- PLEASURE
- FORTH
- LAW
- CHANGE
- PURPOSE
- WOOD
- SISTER
- STOPPED
- SUBJECT
- INTEREST
- PUBLIC
- STARTED
- FIFTY
- LOVED
- EXCEPT
- EXCLAIMED
- PAY
- TONE
- REAL
- YOUTH
- INSTEAD
- WALK
- HARDLY
- CHARACTER
- WAIT
- THIRD
- LENGTH
- DIED
- MOVED
- SITTING
- HE'S
- AGO
- GOLD
- CAUSE
- GETTING
- THERE'S
- FLOOR
- VIEW
- IMPOSSIBLE
- THOUGHTS
- TEARS
- STAND
- HILL
- PLAY
- PLACED
- SERVICE
- BROKEN
- PROPERTY
- FOREST
- LAUGHED
- TALKING
- OBJECT
- REMAINED
- COVERED
- DEAL
- TRY
- TOP
- LAID
- LONDON
- APPEARANCE
- WEEK
- MADAME
- HAPPINESS
- SMILE
- MARRIED
- WHATEVER
- BEAR
- ACCOUNT
- COMES
- OUTSIDE
- WONDERFUL
- NATURAL
- SAINT
- QUEEN
- ARMY
- SEEING
- BELOW
- THINKING
- FIGURE
- COMMON
- SOCIETY
- SWEET
- FAIR
- PLEASE
- SHOWED
- DESIRE
- MAN'S
- RICH
- GOVERNMENT
- QUICKLY
- BOAT
- NECESSARY
- ENTIRELY
- MINE
- FRESH
- AFTERNOON
- MOUTH
- GIVING
- DREW
- OPINION
- YOU'RE
- EXPRESSION
- COURT
- EASY
- FOOD
- SIT
- STEP
- EASILY
- DE
- SHIP
- GENTLEMAN
- PASS
- DISTANCE
- TURNING
- TERRIBLE
- WAITING
- WIDE
- SKY
- COULDN'T
- HEAVY
- WISHED
- ACT
- ESPECIALLY
- VALLEY
- HOUSES
- PAPER
- STAY
- KILLED
- OCCASION
- BESIDE
- STONE
- EXPECTED
- LIPS
- USUAL
- WINTER
- OFFICE
- SECRET
- HORSES
- DANGER
- SAVE
- MOUNTAIN
- CHAPTER
- PROBABLY
- BROKE
- SIMPLY
- ART
- STEPS
- JOY
- FOLLOWING
- CHIEF
- SLOWLY
- HALL
- DINNER
- BESIDES
- KNOWS
- SPRING
- SPEAKING
- BEGINNING
- CHANGED
- NORTH
- HISTORY
- STRENGTH
- CLOSED
- PLACES
- SMILED
- CHAIR
- ANNE
- MEANT
- TRYING
- FORTY
- DUTY
- BROWN
- STOP
- CORNER
- PRESENCE
- DIE
- QUIET
- SILENT
- SINGLE
- VISIT
- SCARCELY
- EFFECT
- MAKES
- ARRIVED
- PARTICULAR
- BORN
- CONVERSATION
- FORTUNE
- ALLOWED
- RACE
- PALACE
- LEGS
- WALL
- CARRY
- UNDERSTOOD
- GREATER
- VILLAGE
- NINE
- JANE
- CRY
- SELF
- FIGHT
- SPENT
- RAISED
- WOODS
- FIELD
- FRENCH
- WRONG
- REGARD
- DREAM
- BIT
- LIE
- SUDDEN
- LAKE
- MONTHS
- PALE
- MARRIAGE
- BELIEVED
- LETTERS
- CAMP
- SOUTH
- ISN'T
- OBSERVED
- LEARNED
- STRAIGHT
- PLEASED
- LADIES
- SOFT
- SURPRISE
- SEAT
- PLEASANT
- BREAD
- BRIGHT
- WEST
- EXPERIENCE
- NEWS
- MOVE
- CONDITION
- WALLS
- EAT
- FOLLOW
- O'CLOCK
- POCKET
- DECLARED
- MUSIC
- PATH
- EVIL
- CIRCUMSTANCES
- MARY
- WARM
- FINALLY
- LATTER
- INFLUENCE
- WATCH
- LEAVING
- KNOWLEDGE
- BATTLE
- STATES
- WASN'T
- PERSONAL
- PERSONS
- HANDSOME
- ACTION
- SHORE
- WALKING
- GOLDEN
- TWELVE
- HEAVEN
- FORGET
- SHOOK
- AMERICAN
- THANK
- VARIOUS
- JOURNEY
- MOON
- MARRY
- MERELY
- DIRECTION
- CROWD
- MAJOR
- I'D
- SUMMER
- UNLESS
- SHUT
- REMAIN
- ANXIOUS
- SHOT
- DRESSED
- WOULDN'T
- DRESS
- EAST
- LOOKS
- BENEATH
- THICK
- WORSE
- WORTH
- MOUNTAINS
- EVIDENTLY
- INSTANT
- ESCAPE
- WE'LL
- GRACE
- FATHER'S
- TALL
- SOMEWHAT
- DROPPED
- EXACTLY
- ONES
- STORM
- KNOWING
- FALLEN
- DARKNESS
- GRAY
- EVERYBODY
- SIMPLE
- AFTERWARDS
- MINUTE
- SEND
- PAIN
- COUNT
- SAFE
- PICTURE
- FAST
- YELLOW
- CONSIDERED
- GROWN
- BREATH
- HEADS
- BANK
- COMFORT
- ISABEL
- REACH
- INDIANS
- DECIDED
- SITUATION
- DIFFICULT
- BOX
- IMPORTANT
- PERFECTLY
- ACCORDING
- AUNT
- ANCIENT
- FRANK
- PIECE
- RUNNING
- MORROW
- WHAT'S
- LYING
- FISH
- CLASS
- BILLY
- PLAIN
- PEACE
- LIKED
- HAT
- SICK
- CARRIAGE
- REPEATED
- LAUGH
- STRANGER
- SILVER
- SOLDIERS
- CLOTHES
- ALIVE
- HUNG
- GLANCE
- FORGOTTEN
- IDEAS
- ENEMY
- WRITTEN
- LOWER
- THREW
- TAIL
- HONOUR
- PRESIDENT
- BUILT
- DISCOVERED
- PREPARED
- OBLIGED
- PAID
- BOUND
- GENTLEMEN
- MERE
- YORK
- GUESS
- NARROW
- PASSING
- QUICK
- CONSIDERABLE
- BROAD
- SCENE
- TIRED
- WRITE
- SUCCESS
- BEGIN
- SOCIAL
- REMEMBERED
- FINISHED
- REPLY
- INDIAN
- REBECCA
- TOM
- WAYS
- FLOWERS
- BELL
- APPEAR
- PERFECT
- YOU'LL
- FIFTEEN
- WEATHER
- BOARD
- FRANCE
- GAME
- PLAYED
- POSSESSION
- FUTURE
- QUARTER
- LOSE
- LIVES
- GROWING
- ONE'S
- COUSIN
- DRAWN
- NECK
- SPOT
- NOTICED
- TEA
- FARM
- TALKED
- LIKELY
- LISTEN
- ATTEMPT
- CROSS
- HOT
- BILL
- SPITE
- SORRY
- EDWARD
- PRESENTLY
- NOBODY
- DRAWING
- GRASS
- MEASURE
- DETERMINED
- EQUAL
- FEELINGS
- SISTERS
- SHARP
- TELLING
- AFFAIRS
- LEAVES
- SMILING
- GROUP
- RESULT
- OPENING
- BREAKFAST
- LUCK
- EXPECT
- SERIOUS
- PROMISED
- OFFERED
- SERVANT
- EFFORT
- EVERYWHERE
- COURAGE
- FRIGHTENED
- FACES
- LIFTED
- CAREFULLY
- GATHERED
- GREATLY
- PARTS
- MAIN
- DUE
- THIN
- ISLAND
- WORE
- RESPECT
- LEARN
- DIFFICULTY
- EXISTENCE
- TOUCH
- GRAVE
- DOLLARS
- SHOP
- SURPRISED
- EDGE
- WINDOWS
- MOMENTS
- OCCUPIED
- SERVANTS
- PROMISE
- TEETH
- MARK
- VAIN
- HOLDING
- GREATEST
- MEETING
- WATCHED
- BUILDING
- CAST
- HAPPEN
- OURSELVES
- COMPANION
- ALLOW
- SAD
- ANGRY
- SYMPATHY
- GLASS
- FINGERS
- BROTHERS
- JERRY
- START
- ALTOGETHER
- SHOWN
- COMPANIONS
- FORMED
- TASTE
- PRIVATE
- BOOKS
- COAT
- POND
- EARS
- SEIZED
- HILLS
- LUCY
- DOESN'T
- POINTED
- BEAT
- GEORGE
- SATISFIED
- EXPLAINED
- MOVING
- NOTE
- WROTE
- PERCEIVED
- RECEIVE
- SPEECH
- CHARLES
- EAR
- AGREED
- ANIMALS
- CATCH
- RACHEL
- SIGN
- WATCHING
- OPPOSITE
- PERIOD
- YOURS
- UNITED
- DOG
- POSSESSED
- FINDING
- HIGHER
- SHOULDER
- RAIN
- HENRY
- CATHERINE
- ORDINARY
- QUIETLY
- ENTER
- MATTERS
- GRAND
- EMPTY
- MISTRESS
- CAUSED
- PAPERS
- TRAIL
- MEANING
- DRY
- DEGREE
- FALLING
- PATSY
- WELCOME
- FANCY
- CASTLE
- CREATURES
- SIXTEEN
- SUIT
- CREATURE
- SHE'S
- HADN'T
- BLOW
- COMPLETE
- RING
- JUSTICE
- SPREAD
- WEEKS
- RESOLVED
- FIXED
- BOTTOM
- ATTACK
- ELIZABETH
- TOBY
- QUESTIONS
- GENERALLY
- CURIOSITY
- BREAK
- TOUCHED
- SHOULDERS
- LOT
- MEMORY
- FLEW
- WHISPERED
- JUDGE
- SURELY
- ENGAGED
- AWARE
- MORAL
- FIELDS
- BALL
- FORMER
- THROWN
- TONGUE
- LISTENED
- TERROR
- KILL
- EXCITED
- AMERICA
- PASSION
- PRODUCED
- SPECIAL
- PASSAGE
- REQUIRED
- RISING
- CHARMING
- SPOKEN
- SHINING
- TASK
- PAPA
- SWORD
- IMAGINE
- ABSENCE
- NEEDED
- SPACE
- ADVANTAGE
- ORDERS
- BURST
- INSIDE
- DANGEROUS
- ORDERED
- NOISE
- DELIGHT
- RISE
- ICE
- CHAMBER
- ADVANCED
- HEALTH
- DOORS
- SHEEP
- WE'RE
- SIXTY
- SUPPOSED
- FAILED
- IMAGINATION
- PROUD
- EXCITEMENT
- MAID
- ASLEEP
- HONEST
- MASS
- PROVED
- WINE
- TRUST
- EXCELLENT
- CALLING
- ROCK
- FARTHER
- REMARKED
- PUTTING
- TRAIN
- LAUGHING
- NOTICE
- INTERESTING
- SELL
- WOUNDED
- REFUSED
- SHIPS
- SEARCH
- COAST
- SIDES
- FULLY
- CLOUDS
- LEAD
- FARMER
- STREAM
- SAKE
- INSTANCE
- MISTAKE
- BIRDS
- WAITED
- YOU'VE
- CLUB
- MONTH
- HABIT
- KING'S
- BORE
- FINGER
- SUFFICIENT
- GUARD
- STUDY
- DISAPPEARED
- MOVEMENT
- ASIDE
- AHEAD
- D'ARTAGNAN
- CARLYLE
- PARENTS
- DARE
- GENTLY
- LOVELY
- ROOF
- AFFAIR
- BIRD
- CALM
- UNKNOWN
- GATE
- BRAIN
- GENTLE
- MIDDLE
- UPPER
- DROVE
- SHAPE
- HEAT
- INDIVIDUAL
- BREAST
- ROOMS
- PHYSICAL
- NATION
- INFORMATION
- RELIEF
- FASHION
- IRON
- INFORMED
- PARIS
- LEADING
- SHADOW
- HONOR
- PRESENTED
- DIRECTLY
- SUFFERING
- GROW
- FOND
- LOUD
- OFFER
- PRIDE
- SUCCEEDED
- INTERESTED
- OCCURRED
- WISHES
- WORKING
- HEARTS
- VOICES
- SUGGESTED
- CHARGE
- EVENTS
- HEARING
- WEAK
- SETTLED
- WANTS
- SURFACE
- PAUSED
- FAITH
- NOBLE
- HOPED
- HURT
- SMOKE
- COTTAGE
- SPIRITS
- SPRANG
- CORPORAL
- HIDDEN
- APPROACHED
- CONTRARY
- STREETS
- AUTHORITY
- WEALTH
- CORONEL
- BUSY
- MARILLA
- PROPER
- DESIRED
- POWERFUL
- FIT
- RATE
- USUALLY
- PREVENT
- PLAYING
- LINES
- SERVE
- SONG
- MATERIAL
- HUGE
- NEARER
- CLEAN
- MILE
- STICK
- FLY
- AROSE
- CONSIDER
- NAMED
- CLOUD
- EIGHTY
- BUY
- YE
- REMARKABLE
- KNEES
- WISE
- CURIOUS
- CENTURY
- PICKED
- RELIGION
- CONSEIL
- PRIEST
- CONSCIOUS
- MEAL
- FORCED
- MIGHTY
- SEVENTEEN
- EXPRESSED
- DOZEN
- PROVE
- LOSS
- SUPPORT
- CEASED
- SKIN
- SYSTEM
- PRAY
- DISTANT
- RUTH
- SUPPER
- DEMANDED
- PROCEEDED
- EGGS
- PITY
- NICE
- SERVED
- INTENDED
- INSTANTLY
- DIFFERENCE
- TENDER
- ASKING
- WATERS
- SOUGHT
- INCREASED
- LANGUAGE
- ANIMAL
- VALUE
- VAST
- KNIT
- LAWS
- SELDOM
- OPPORTUNITY
- LIBERTY
- SLEPT
- ADMIT
- FAIRY
- PURE
- FOURTH
- COUNTENANCE
- ACCEPTED
- TEMPER
- SOONER
- SOLD
- BEGUN
- APPARENTLY
- BOUGHT
- ROME
- MODERN
- SHOUTED
- SPLENDID
- MOUSE
- DECK
- MENTAL
- ADVICE
- GOES
- HOTEL
- DREADFUL
- SEEK
- BITTER
- TREATED
- CARRYING
- CONTROL
- SEVENTY
- ADMIRATION
- FAT
- BLIND
- DRINK
- GRAHAM
- EM
- COLLEGE
- DALE
- WOUND
- WILLIAM
- YESTERDAY
- FLAT
- EVIDENCE
- WHENEVER
- DAILY
- REGULAR
- FORMS
- ABSOLUTELY
- ADD
- CONDUCT
- ADVANCE
- PRICE
- PLAN
- ANYONE
- COLOR
- POLICE
- WORKED
- EQUALLY
- DREAMS
- LEG
- HUNTING
- DRAGON
- COLONEL
- DICK
- CAPABLE
- KITCHEN
- POSSIBLY
- HAVEN'T
- SEATED
- ADMITTED
- NEVERTHELESS
- PAIR
- MEMBERS
- TERMS
- JOINED
- EXAMPLE
- CLEARLY
- PUSHED
- CABIN
- GREY
- COUPLE
- JAMES
- SLOW
- PRISONER
- VICTORY
- PROFESSOR
- WRITING
- VISIBLE
- FAIRLY
- DRIVE
- SHAME
- EMPLOYED
- FAMOUS
- TAKES
- EUROPE
- HOPES
- SIZE
- ADDRESSED
- IMMEDIATE
- PULLED
- LAUGHTER
- WEDDING
- PARTICULARLY
- PHYSICIAN
- POLITICAL
- VESSEL
- CAT
- ARTHUR
- KEEPING
- STEPPED
- TAUGHT
- EXPLAIN
- LIGHTS
- CASES
- SAVED
- MENTION
- DELIGHTED
- ROYAL
- COMMAND
- BARE
- POWERS
- DOUBLE
- AFFECTION
- AWFUL
- FRUIT
- THROAT
- HURRY
- MAJESTY
- MESSAGE
- MIDST
- PRESS
- MEADOW
- PLENTY
- WORTHY
- EXTRAORDINARY
- SLAVE
- INNOCENT
- PATIENCE
- BENT
- IMPORTANCE
- REMOVED
- SQUARE
- MERRY
- BURIED
- MENTIONED
- RECOGNIZED
- KINGDOM
- MAMMA
- ELSIE
- CONCLUDED
- POINTS
- MYSTERIOUS
- WORN
- GOODS
- HIGHEST
- GRIEF
- UNHAPPY
- PRISON
- ROW
- DESCRIBED
- HANDED
- POPULAR
- FORCES
- SATISFACTION
- CONDITIONS
- TWICE
- NOSE
- KEY
- LOOSE
- FAINT
- ORIGINAL
- THROUGHOUT
- BODIES
- DOGS
- HORROR
- OFFICERS
- PROGRESS
- RODE
- STONES
- EMMA
- FUN
- PLAINLY
- UGLY
- FORGIVE
- TRULY
- STRETCHED
- CONFIDENCE
- ACQUAINTANCE
- OTHERWISE
- READING
- STARTLED
- PECULIAR
- PIECES
- EAGER
- ENTRANCE
- VIRTUE
- HURRIED
- ATE
- LABOR
- MEMBER
- ROUGH
- MOTION
- DUKE
- VIRGINIA
- BLUFF
- CONSCIENCE
- ACTUALLY
- YARD
- NIGHTS
- RELATIONS
- STAIRS
- ALAS
- INQUIRED
- BABY
- FATE
- VIOLENT
- SAFETY
- SUM
- COST
- BAY
- FACTS
- CAR
- MINDS
- BRILLIANT
- GAINED
- PARDON
- GERMAN
- WASHINGTON
- EMPEROR
- FOOL
- HEIGHT
- LINCOLN
- PRISCILLA
- JESUS
- NORA
- CLOSELY
- ANYBODY
- ENJOYED
- HUNGRY
- WEAR
- WILLING
- INTELLIGENCE
- SHOWING
- EXCUSE
- PROVIDED
- TRADE
- ANGER
- HASTILY
- MEANWHILE
- DIRECT
- RELIGIOUS
- SECURE
- CONTENT
- RAPIDLY
- SOUNDS
- NATIONAL
- THROW
- RIDE
- PLANS
- STAGE
- MUD
- ENTIRE
- SORROW
- NATURALLY
- TRIM
- HARM
- ELEANOR
- GUN
- SHADE
- EVIDENT
- PRINCIPLE
- 'FALSE'
- TINY
- ROMAN
- FOREIGN
- MOREOVER
- DIGNITY
- DELAY
- FLED
- HERS
- CROW
- RUSHED
- AFFECTED
- ACCEPT
- REASONS
- BRAVE
- DARED
- ARMED
- FIGURES
- FLESH
- SOFTLY
- DANCE
- CHOICE
- ENTERING
- SLIGHT
- GLORY
- MARKED
- LIES
- UNTO
- STARS
- LAMP
- RISK
- ATTITUDE
- YOU'D
- PARTLY
- FAIL
- RAGE
- FORGOT
- SUFFERED
- FREEDOM
- LARGER
- PARLIAMENT
- FOUGHT
- EFFORTS
- RULE
- GIVES
- NAMES
- GLANCED
- NODDED
- ENDED
- SAND
- OAK
- EXPLANATION
- PATIENT
- JIM
- FRANCS
- DEVIL
- ROCKS
- INCOME
- HOLY
- CROSSED
- PROOF
- SUNSHINE
- STATION
- DROP
- SOMEBODY
- AWAKE
- ENJOY
- ACQUAINTED
- DULL
- POST
- CHOSEN
- INTERRUPTED
- COMPLETELY
- REALITY
- MARCH
- WON
- LIEUTENANT
- BEHOLD
- WONDERED
- DRIVEN
- EASE
- UTTERED
- SMOOTH
- FACED
- REALIZED
- WORKS
- GRADUALLY
- YOUNGER
- LOUISE
- HUT
- LAD
- JASON
- HOLLOW
- FOLKS
- SUNDAY
- WE'VE
- ARRIVAL
- BANKS
- OVERCOME
- WRETCHED
- SOMEWHERE
- FLIGHT
- SLEEPING
- FLYING
- SHARE
- CONSCIOUSNESS
- APPROACHING
- COMFORTABLE
- DUTCH
- CREATED
- FLEET
- COMPELLED
- UNABLE
- CORN
- GAZE
- MAD
- OBJECTS
- WINGS
- ACCOMPANIED
- BOBBY
- LISTENING
- THRONE
- ROLLED
- MILL
- M
- INTENTION
- SUBJECTS
- ADMINISTRATION
- INCLINED
- CONSIDERATION
- PIERRE
- TURTLE
- ODD
- AID
- ALARM
- BAG
- STYLE
- BOWED
- DESK
- DUST
- PRESSED
- FLOWER
- PAUSE
- DEVOTED
- ESTABLISHED
- BRIEF
- DESPAIR
- RECOVERED
- JOIN
- PRINCIPAL
- FELLOWS
- REPORT
- PRECIOUS
- QUEER
- BATH
- EATING
- LIGHTED
- MASTERS
- TAILS
- KNIGHT
- DON
- FREQUENTLY
- SWEPT
- MYSTERY
- FOOLISH
- EFFECTS
- EAGERLY
- BOLD
- ANNOUNCED
- SACRIFICE
- SPEND
- SUFFICIENTLY
- ENEMIES
- REGARDED
- SAILED
- MILITARY
- RESOLUTION
- COOK
- SHAKING
- MELANCHOLY
- DYING
- BEHELD
- FLUNG
- FIGHTING
- BIRTH
- STARED
- KINDLY
- THOUSANDS
- CHRISTIAN
- TEMPLE
- WIDOW
- BRANCHES
- LOVER
- SPANISH
- CIVIL
- ALICE
- COMMUNITY
- DIRECTED
- LET'S
- TRAMP
- BEN
- PHILIP
- POOL
- HELPED
- STORE
- ELEVEN
- EXTREME
- WISHING
- EXTREMELY
- ASHAMED
- KINDNESS
- CRIME
- WITNESS
- IMPRESSION
- BECOMING
- CREDIT
- SCATTERED
- STRUGGLE
- SIGNS
- GHOST
- PEASANT
- MANAGED
- SINGING
- EVERYONE
- RETURNING
- TEACHER
- MILLION
- GOVERNOR
- MAGNIFICENT
- PORTION
- SPIRITUAL
- CAP
- BEARING
- COUNTESS
- KATE
- SYLVIA
- HATED
- WHENCE
- BELONGED
- WICKED
- CRUEL
- SLIM
- OURS
- FAULT
- EDUCATION
- GLOOMY
- TREATMENT
- DANCING
- AGREEABLE
- FIRM
- SIMILAR
- ACTIVE
- UNIVERSAL
- CLEVER
- SEPARATED
- USEFUL
- ENDS
- STANDS
- NINETEEN
- CITIES
- FEATURES
- AMOUNT
- MONSIEUR
- NEWSPAPER
- HOLE
- SHONE
- OCEAN
- SANK
- MARGUERITE
- MARIUS
- ERIC
- MATTHEW
- THEY'RE
- OFFICIAL
- FOREHEAD
- MANNERS
- SOLEMN
- DOUBTLESS
- THEORY
- SUGGESTION
- INTERESTS
- DESTROYED
- NATIONS
- NECESSITY
- BREAKING
- INCREASE
- ASTONISHMENT
- AFTERWARD
- CAREFUL
- BAR
- METHOD
- HAST
- ESCAPED
- SOLDIER
- COMPOSED
- HANGING
- EXAMINED
- FAVOR
- WANDERED
- WAVES
- PATTY
- COSETTE
- ACCUSTOMED
- BRIDGE
- FALLS
- JEFF
- ATTEND
- ACCORDINGLY
- NATIVE
- TRIP
- NAY
- LOVES
- ASSURED
- CRYING
- TENT
- HOUSEHOLD
- SENTIMENT
- MURDER
- COUNCIL
- APPOINTED
- SAIL
- FISHING
- OBTAINED
- SKILL
- TOWNS
- REQUEST
- STOCK
- THRUST
- ASSISTANCE
- BEG
- EXHAUSTED
- CHOOSE
- SUFFER
- RESUMED
- MOUNTED
- RANK
- SOUTHERN
- SPECIES
- PARTED
- TROOPS
- SCIENCE
- CLIFF
- SURROUNDED
- CHAIN
- SHED
- VOYAGE
- BASKET
- SHOUTING
- RANGE
- JIMMIE
- UDO
- JUNE
- DRIVING
- UNUSUAL
- SLIGHTLY
- MAYBE
- ASTONISHED
- STUPID
- PICK
- BRINGING
- DEMAND
- VEIL
- YARDS
- IMAGINED
- MERCY
- FUNNY
- TYPE
- COVER
- CHEEKS
- STRIKE
- STORIES
- SIGHED
- CARED
- LITERATURE
- APART
- RARE
- COMMANDER
- ENERGY
- FRIENDLY
- ACCOMPLISHED
- WOODEN
- OWNER
- SOUNDED
- INVITED
- ACCIDENT
- DISCOVER
- DISTINGUISHED
- CONNECTION
- CHARM
- TREMBLING
- FAMILIES
- MILLIONS
- RUSH
- BARON
- ATMOSPHERE
- SENSIBLE
- EDITH
- NEEDS
- DAVID
- OLDER
- WASH
- NED
- ANNIE
- SEVERE
- PURPLE
- MARBLE
- WORST
- BRANCH
- LEANING
- MERCHANT
- WEIGHT
- MOTHER'S
- HASTE
- SUSPECTED
- PRISONERS
- ABROAD
- TRIAL
- CONSENT
- CONVINCED
- ARGUMENT
- WESTERN
- BADE
- EVENT
- LIBRARY
- ABSOLUTE
- DISCOVERY
- FAILURE
- SUPERIOR
- FIFTH
- BELOVED
- DESTRUCTION
- GAIN
- VAGUE
- DAWN
- LILY
- HASTENED
- MACHINE
- FOREVER
- WOMAN'S
- CHANGES
- LIKEWISE
- DISTRICT
- DEPTHS
- STOUT
- PICTURES
- HIDE
- SUCCESSFUL
- GOODNESS
- IMMENSE
- BOBO
- APARTMENT
- SHOES
- SANG
- RETIRED
- NEIGHBORS
- REGRET
- MINISTER
- PRACTICE
- CROWDED
- DESCENDED
- MISERABLE
- HATE
- OBSERVATION
- FAMILIAR
- MEASURES
- DISPOSED
- REFUSE
- DESCRIBE
- SHOCK
- MILK
- REPUTATION
- DEVELOPMENT
- FEARED
- LIGHTNING
- BROWNIE
- BEGGED
- STIFF
- LEVEL
- ADAM
- SETTING
- NEWSPAPERS
- HUNT
- CLASSES
- ROOTS
- MINGLED
- CONSEQUENCE
- APPROACH
- DIANA
- COLORED
- STAYED
- GRATITUDE
- BOW
- SPEED
- HOST
- PASSENGERS
- WONDERING
- PSYCHE
- NATASHA
- PERCEVAL
- CIRCLE
- GLIMPSE
- DEEPLY
- VILLEFORT
- PREVIOUS
- ADVENTURE
- LEST
- USELESS
- KNIFE
- PULL
- CONTINUE
- CAUSES
- JULY
- COUNTRIES
- TITLE
- DELIVERED
- UNCONSCIOUS
- SOMEONE
- POUNDS
- SLIPPED
- MOTIVE
- LANDSCAPE
- DEPARTURE
- EXPRESS
- FINAL
- MOVEMENTS
- ARRANGED
- NERVOUS
- RUIN
- KISSED
- DRAW
- LEANED
- CONCERNED
- HUNGER
- ELDER
- PIPE
- TIS
- GANEM
- JENNY
- THENARDIER
- ANXIETY
- JAPANESE
- DESERTED
- DELIGHTFUL
- VIEWS
- MATCH
- SUSPICION
- GUILTY
- LEADER
- KISS
- CHIEFLY
- JUDGMENT
- WASTE
- EXERCISE
- HITHERTO
- EXTENT
- DELICATE
- PROPOSED
- THANKS
- SALT
- BUTTER
- RELATION
- SEES
- PROCEED
- DISTURBED
- BAND
- COW
- FISHES
- WIN
- UTTER
- SPARE
- CLAIM
- PEN
- CHEEK
- INSTRUMENT
- BEATING
- AGES
- EASTERN
- ATTENDED
- PAINTED
- ENTHUSIASM
- FERKO
- EARL
- HELEN
- GUNS
- COMMITTEE
- EARLIER
- HE'D
- TODAY
- LACK
- STEADILY
- PAINFUL
- BONES
- ENORMOUS
- CONFUSION
- MAGISTRATE
- PLAGUE
- BLAME
- SACRED
- TREAT
- APPLIED
- COOL
- PIANO
- STRIKING
- DISTINCT
- ATTACKED
- PORT
- BITTERLY
- MIDNIGHT
- POSSESS
- RAPID
- PRODUCE
- SAVAGE
- WET
- SMALLER
- APPEARS
- AUDIENCE
- JOB
- HEADED
- EXPERIENCES
- CROWN
- FAITHFUL
- EXPEDITION
- REGION
- DEGREES
- MISERY
- FED
- LEAPED
- PEEP
- OFFICER
- HUNDREDS
- NAUTILUS
- MABEL
- HYACINTH
- ORCHARD
- BUSHES
- CHEERFUL
- EARNEST
- GRANDFATHER
- SOMEHOW
- UNFORTUNATE
- FLASH
- VENTURED
- DANGLARS
- RESTED
- OBTAIN
- CONTEMPT
- EXTENDED
- GENIUS
- HESITATED
- SWIFT
- AMONGST
- AUGUST
- WHOLLY
- NUMBERS
- ARTICLE
- NOON
- FILL
- GODS
- VARIETY
- WEARY
- KINDS
- JUMPED
- COMMITTED
- BACHELOR
- BOTTLE
- SOLE
- DESERT
- GOD'S
- HIGHLY
- INTRODUCED
- CITIZENS
- POVERTY
- EQUALITY
- BEINGS
- RAYS
- JOLLY
- QUALITIES
- TALE
- LIMBS
- AMBITION
- CREW
- KNOCKED
- JOE
- BELONG
- CONFESS
- BRIDE
- BOOTS
- NINETY
- CAPITAL
- LIGHTLY
- PROPORTION
- GAZED
- AFFORD
- DESCRIPTION
- TREMBLED
- FITTED
- BYE
- RANG
- DISAPPOINTED
- CONSTANTLY
- CONTAINED
- THREATENED
- SEAS
- VALUABLE
- HERO
- INSISTED
- WANDERING
- LOVING
- VISION
- EXAMINATION
- THOROUGHLY
- RID
- FORTUNATE
- SHORTLY
- NEST
- HORRIBLE
- POURED
- OCCASIONALLY
- FEMALE
- MISTAKEN
- PURPOSES
- ANYWHERE
- CHEESE
- PERCEIVE
- HATH
- ACTUAL
- NOTES
- BURNED
- PROBLEM
- HABITS
- CHRIST
- HIDING
- BECOMES
- CONCLUSION
- INTELLECTUAL
- MIRROR
- VANISHED
- DAUGHTERS
- PRESERVED
- TRIBE
- GROUPS
- NORTHERN
- NOTWITHSTANDING
- NEAREST
- CHILDHOOD
- DISTRESS
- EMPIRE
- CONNECTED
- SNAKE
- SHAKE
- GREGG
- PARISH
- TILNEY
- PORTHOS
- REPRESENTATIVE
- FORT
- GOOSE
- FLORINA
- FRIENDSHIP
- BEARD
- AIN'T
- UNION
- CONTINUALLY
- DISCUSSION
- SHARPLY
- SURROUNDING
- REWARD
- PURSUED
- VISITOR
- SHADOWS
- LEARNING
- FEVER
- INTENTIONS
- GENEROUS
- INTELLIGENT
- HOLLAND
- HATRED
- VESSELS
- FIRED
- AVOID
- SUPREME
- DATE
- FAVOUR
- USING
- STUFF
- INFINITE
- PAGE
- HUMANITY
- T
- EYED
- ADDRESS
- HOUSEKEEPER
- LONELY
- NUMEROUS
- INN
- MURMURED
- INVITATION
- UNDERSTANDING
- ESTATE
- GATHER
- MUTTERED
- MONSTER
- AGREE
- PROFOUND
- STAR
- GATES
- FOX
- CUP
- RE
- HAPPENS
- YONDER
- KINGS
- WARRIORS
- DEPARTED
- FREELY
- SOAP
- MEAT
- TRAVELLING
- DRUNK
- CAROLINE
- AGONY
- CRAFT
- CORDIAL
- QUOTH
- MERCER
- UNIVERSITY
- FRANCIS
- COMMONS
- POYSER
- CRAWLEY
- SLENDER
- CANADIAN
- FEARS
- GRAVELY
- SOIL
- ROADS
- INSTINCT
- FLUSHED
- GAY
- WENDY
- RAISE
- NEGRO
- CONVICTION
- TRAVEL
- TROUBLED
- DEPEND
- OCCASIONS
- INCREASING
- INDIGNATION
- POWDER
- DIFFICULTIES
- SING
- LOCKED
- ALOUD
- CANDLE
- IMPULSE
- PEARLS
- STRAW
- FIERCE
- QUARTERS
- STEADY
- RESTORED
- OBEYED
- UNEXPECTED
- MEDICINE
- DRESSING
- PRECISELY
- TRACKS
- CLIMBED
- THIRTEEN
- KNEE
- CONCERNING
- CREEK
- LATELY
- PEASANTS
- OBSERVE
- ORIGIN
- COMMANDED
- BUILD
- FETNAH
- MADAM
- WHILST
- SHOUT
- FOURTEEN
- THOMPSON
- UTMOST
- RICHMOND
- CONDUCTED
- DEVELOPED
- DESPERATE
- TIED
- ANYHOW
- UTTERLY
- REMARK
- FIRMLY
- ASPECT
- LOSING
- TRIUMPH
- INSTRUCTIONS
- MISSED
- INTENSE
- MOTIONLESS
- MERIT
- HOSPITAL
- REFLECTED
- RECORD
- MORTAL
- PUBLISHED
- RUINED
- ATTEMPTED
- ESSENTIAL
- SLIGHTEST
- OPPOSITION
- SEASON
- SCORE
- ASSURE
- KEEPS
- CONSTITUTION
- DREAD
- PRIVILEGE
- PRAISE
- MAGIC
- CAPACITY
- SATURDAY
- LOCAL
- INHABITANTS
- CALLS
- PER
- RENDERED
- THROWING
- FATAL
- WEPT
- FEAST
- COFFEE
- IGNORANT
- VISITED
- BADLY
- GIANT
- FRAME
- VIOLENCE
- PRUDENCE
- STERN
- FANCIED
- REMAINS
- BURNING
- LANDED
- SONS
- HID
- CIVILIZATION
- HANDKERCHIEF
- PONY
- HIT
- PLANCHET
- MARCHED
- SHEPHERD
- LEIF
- LUKASHKA
- SAZEN
- PENCROFT
- LANE
- FEARFUL
- IDEAL
- SUPPORTED
- REFLECTION
- SURGEON
- ACTED
- CIRCUMSTANCE
- TORN
- PIRATE
- CONTACT
- IMAGE
- HE'LL
- FEELS
- DIVIDED
- COLLECTION
- DAMP
- ABRUPTLY
- INCLUDING
- ACQUIRED
- BREATHING
- SENSES
- WRAPPED
- NOTED
- LEATHER
- CHEST
- SERVICES
- BURDEN
- DAY'S
- CONCERN
- PUNISHMENT
- DETAILS
- GRATEFUL
- REMOVE
- EXTERNAL
- WHEAT
- LONGED
- ENGINEER
- MEANTIME
- MULTITUDE
- UNC
- CONFUSED
- OPINIONS
- REVOLUTION
- PINE
- SENTENCE
- SLAVERY
- ET
- TRIBES
- DIAMOND
- WARNING
- MOUNT
- RONICKY
- CENTRE
- TRAP
- ROMANS
- ELZEVIR
- BEAVER
- BARRICADE
- ROLLIN
- JOYCE
- OLENIN
- QUARLES
- BROOK
- BLOOM
- STRANGERS
- ENJOYMENT
- AREN'T
- CHRISTMAS
- DISPOSITION
- SENSATION
- PLATFORM
- CONCEALED
- PRONOUNCED
- RESTING
- DUTIES
- ACTIVITY
- RUE
- RAISING
- REQUIRE
- TOPS
- SHEET
- RALPH
- DISAPPOINTMENT
- OLIVER
- CRIES
- ACKNOWLEDGE
- RETREAT
- DIVINE
- ARTICLES
- EXCHANGE
- FISHER
- STARING
- SNAPPED
- LABOUR
- POT
- READILY
- REAR
- LAWYER
- ARRIVE
- RELIEVED
- BOSTON
- CENTS
- CUSTOM
- GRANT
- RESIST
- MASTER'S
- EXPERIENCED
- REPRESENTED
- RAILROAD
- SEEKING
- PRACTICAL
- GARMENTS
- HEAVILY
- ADVANCING
- PROCESS
- CREPT
- ASSUMED
- SILENTLY
- ROLL
- SWORDS
- RESPECTABLE
- SMITH
- ANGEL
- SUMMIT
- ROC
- EATEN
- PEARL
- SILK
- DIM
- TEACH
- SHOWS
- ABSORBED
- HEARTED
- LONGING
- CAREER
- INDUSTRY
- PRACTICALLY
- FLAG
- WITHDREW
- AROUSED
- PROFESSIONAL
- ISSUE
- LEAF
- EMOTION
- POINTING
- MESSENGER
- HEAP
- CHOSE
- READER
- WHEREVER
- PLUNGED
- SHELLS
- OWING
- PRESENTS
- SEATS
- POSITIVE
- SUCCESSION
- CONSIDERING
- FENCE
- CLOSER
- INDIFFERENCE
- PERFORM
- FILLING
- RESULTS
- RELATED
- ADDITION
- SATISFY
- RIDING
- GLORIOUS
- GUESTS
- TREASURE
- BEARS
- FASTENED
- VENTURE
- RECOGNIZE
- LESSON
- IMPATIENCE
- ROLLING
- FORESTS
- SOULS
- ACCUSED
- ENGAGEMENT
- VENGEANCE
- REGIMENT
- BARBARA
- JENKS
- TROUTINA
- STEEP
- CLEARED
- TWISTED
- STARTING
- DREAMING
- EXPECTATION
- ANDREA
- SCARED
- OWNED
- VOLUME
- EXCEPTION
- DARLING
- WAKE
- DOUBTFUL
- PRETENDED
- GALLANT
- PERMITTED
- VOTE
- FUR
- OTHER'S
- SIGH
- SINGULAR
- QUALITY
- GIFT
- GLOOM
- HAPPILY
- PERSUADED
- GUESSED
- ABILITY
- PACE
- HENCE
- BALANCE
- NEIGHBORHOOD
- SQUIRE
- DRIVER
- ENDURE
- MARKET
- PERMIT
- BENEFIT
- CONSEQUENTLY
- VICTIM
- THITHER
- MISCHIEF
- NECESSARILY
- BASE
- BARBICANE
- BEASTS
- LANDING
- REMAINING
- DRAGGED
- AMID
- WAVED
- BELLE
- CONCEPTION
- NAKED
- LOFTY
- ASSEMBLED
- SUPPLY
- BROW
- SOLID
- THINKS
- ABRAHAM
- DECLARE
- SILLY
- SECURED
- MODE
- CURATE
- RUSSIAN
- CHINA
- HERBERT
- JUSTINIAN
- LEOPOLD
- CONWAY
- THOMAS
- NEAT
- STUCK
- DENY
- SAFELY
- SECRETLY
- HANDLE
- RESPONDED
- SECRETARY
- INDEPENDENT
- PREVIOUSLY
- MISFORTUNE
- MISFORTUNES
- MANKIND
- LA
- RENEWED
- GRACEFUL
- ESTABLISHMENT
- CHEER
- CONSTANT
- ENDLESS
- RECALLED
- APRIL
- INDEPENDENCE
- CREATION
- STRONGER
- CAPTURED
- WINDS
- SUSPECT
- SHELTER
- HUMBLE
- PREPARE
- PARTIES
- SOLITARY
- DINE
- APPARENT
- STAFF
- HEELS
- SOVEREIGN
- JOKE
- OARS
- ARRANGE
- HOLES
- SADDLE
- BARK
- COVERING
- POSSIBILITY
- QUARREL
- GETS
- GROWTH
- FURNITURE
- ALARMED
- FOLLOWS
- CENT
- NUTS
- SAM
- BIBLE
- FOG
- JACK
- LOUDLY
- THEATRE
- ANYWAY
- OVERHEAD
- LOG
- SWUNG
- AGENTS
- POLITE
- PLAINS
- MOONLIGHT
- PRINCIPLES
- ISLANDS
- VIRTUES
- CALMLY
- CAKES
- SPEEDILY
- AGITATION
- WING
- RIDGE
- ELDEST
- MUSICAL
- MAIDEN
- SUNK
- ISABELLA
- ARTIST
- TIMBER
- BINGLEY
- CHARACTERS
- AUTHORITIES
- FANNY
- THUMB
- HISTORIANS
- BERYL
- ALI
- GWYNPLAINE
- GRAMMONT
- BERNARD
- PUZZLED
- APPLE
- TIGHT
- SAILOR
- NURSE
- INTIMATE
- REPEAT
- CRIMINAL
- COUNTED
- DEAREST
- LUCKY
- PROFESSION
- ORANGE
- LIST
- ADVANTAGES
- METAL
- THUNDER
- DECISION
- FLOWING
- VIVID
- APPEAL
- STOPPING
- REACHING
- HUMOUR
- ADMIRED
- CURRENT
- TEAR
- RECEIVING
- ENTERPRISE
- MATE
- BEACH
- FURNISHED
- TRUNK
- DECIDE
- CLOTHING
- FROZEN
- BEAST
- DEFINITE
- STATEMENT
- OBVIOUS
- PRAYERS
- RUBBED
- PRAIRIE
- WHOEVER
- HA
- GARDENS
- GLASSES
- EXISTS
- RABBIT
- ATTACHED
- ROUSED
- PARK
- MICHEL
- GATHERING
- SIXTH
- DEADLY
- OUTER
- REASONABLE
- YO
- MEMORIES
- SCENES
- COLOURED
- CHAIRS
- TOUCHING
- BETH
- SIGNOR
- MERRICK
- AWOKE
- LODGE
- CUNNING
- ENCOUNTER
- CHASE
- LOADED
- SCARLET
- TREMENDOUS
- CAPE
- TOWER
- SUFFERINGS
- WREN
- SEPARATE
- WORSHIP
- FRANZ
- PAUL
- SHOOT
- NATURED
- PURSUIT
- INNER
- IGNORANCE
- TROOP
- MA'AM
- GUARDS
- IRELAND
- REPORTER
- ICELAND
- JULIA
- JULIUS
- CROPPER
- POLLY
- ESTHER
- JULIET
- HOOPDRIVER
- MONTGOMERY
- COLLAR
- CONTENTED
- SUNLIGHT
- ADOPTED
- MEADOWS
- PREVENTED
- REVEALED
- REPORTED
- STRONGLY
- BRINGS
- HIDEOUS
- PREFER
- SLAVES
- IRISH
- SHOULDN'T
- DENIED
- EMOTIONS
- RECKON
- ABSURD
- JANUARY
- BRITISH
- JEALOUS
- SERIES
- EIGHTH
- KNOCK
- DECEIVED
- SENDING
- FREDERICK
- POETRY
- FEED
- FAVOURITE
- PAYING
- STEEL
- CONTENTS
- PLATE
- SEX
- GROUNDS
- REJOINED
- FEEBLE
- LOUDER
- GUIDE
- JEWELS
- WORRY
- AMAZEMENT
- LIVELY
- UNPLEASANT
- DOLLAR
- SECURITY
- URGED
- MOOD
- WAGON
- CONTAINING
- PROVISIONS
- DIRECTIONS
- ROBE
- GUEST
- SHORES
- MODEST
- BREEZE
- FOLLY
- DOORWAY
- INDIVIDUALS
- ALIKE
- HARE
- HEAVENS
- CIRCULAR
- UNEASY
- SUGGEST
- GRAIN
- CATCHING
- INSTANCES
- EXCEEDINGLY
- PACKED
- DRIED
- FATHERS
- YOUNGEST
- PROPERLY
- BOXES
- LAP
- DUSK
- DINING
- WEEPING
- FLAME
- BLESS
- PLANTS
- SHELL
- ROSES
- FETCH
- COUNSEL
- WILLIAMS
- MARIPOSA
- GROVE
- BO
- LAUNCELOT
- CABINET
- DAMON
- FIDDLER
- WILMINGTON
- SOURCE
- STAYING
- EXISTED
- SECONDS
- TROUBLES
- INDICATED
- PURELY
- UNCOMFORTABLE
- CARELESSLY
- FASHIONED
- WISDOM
- POSITIVELY
- RECENT
- BLEW
- ISSUED
- ERROR
- INTERIOR
- CURIOUSLY
- PRIZE
- MISSING
- GROWS
- DRANK
- INTELLECT
- FORMERLY
- LAWN
- GRANTED
- BELIEF
- PROTECTION
- PROSPECT
- RIGHTS
- DESTROY
- VEINS
- CLOSING
- PURSE
- SWIM
- TABLES
- HEARTILY
- DESIRES
- GESTURE
- BILLS
- CLAY
- DREAMED
- GENUINE
- WARNED
- SLIP
- HARMONY
- REMEDY
- DISEASE
- MC
- CLOTH
- OIL
- SETTLE
- INQUIRE
- POCKETS
- POPULATION
- SENATOR
- CULTURE
- TEAM
- CHARITY
- SUBSTANCE
- PITCH
- CONCEAL
- RECOVER
- GLADLY
- ACTING
- MASSES
- ITALIAN
- CHANCES
- SHIRT
- CENTURIES
- STREAMS
- DISCOURSE
- IDLE
- EXECUTION
- IMPATIENT
- INSTRUMENTS
- PAINT
- BOSOM
- AUTUMN
- EXPENSE
- ACCOMPANY
- FAVORITE
- NONSENSE
- PUPILS
- GOWN
- TURNS
- FLOW
- SAILORS
- PROBABLE
- TOSSED
- IMPRESSED
- HOMES
- BUILDINGS
- PERFORMED
- BULLET
- TALES
- LORDS
- MAYOR
- FLEECE
- FROGS
- FAREWELL
- ANDREW
- LARK
- HARDING
- BARN
- CAKE
- PILE
- LION
- GLOWING
- EXACT
- ENJOYING
- DEBT
- PERSUADE
- SADNESS
- TELEGRAPH
- SEARCHING
- OBSERVING
- FINEST
- ITALY
- PRESERVE
- FIRING
- CENTRAL
- NOVEMBER
- STORES
- DEMANDS
- HOPING
- OFFICES
- HEIR
- OPERATION
- SIGNED
- CLERK
- FLOUR
- DOMESTIC
- RUDE
- THRONG
- PILLOW
- WHIP
- OBEY
- DIRTY
- SMILES
- NEIGHBOURHOOD
- SADLY
- IMPRESSIONS
- MOTHERS
- DROWNED
- WHISPER
- INVISIBLE
- HAY
- TRUSTED
- DISTINCTION
- LETTING
- FATIGUE
- PUSHING
- TEMPORARY
- BRUSH
- INTERVIEW
- AWAKENED
- SUMMONED
- TIP
- HEADQUARTERS
- CHICAGO
- COAL
- WASHED
- FRIGHTFUL
- PERMISSION
- LOAD
- DESIGN
- CAMPAIGN
- NEGLECTED
- LESSONS
- FASTER
- EXPOSED
- GLOW
- REIGN
- RESCUE
- HYPNOTIC
- STUDIED
- STRANGELY
- BACKS
- WHIRLWIND
- FURY
- GLOBE
- EXIST
- SUNSET
- JEWS
- SORTS
- RENDER
- ACTS
- HORN
- EXECUTIVE
- CONFESSION
- TOTAL
- BORNE
- RUSSIA
- MIST
- ERE
- TORE
- PRAYER
- BOATS
- RUSHING
- POET
- VENUS
- PRIME
- SPORT
- CANVAS
- WILSON
- FLOCK
- CONGRESS
- BULL
- JIMMY
- JASPER
- BAB
- GREGGORY
- LECOQ
- AMEER
- CARLINI
- MANAGE
- FLOOD
- HORIZON
- HARDER
- DECIDEDLY
- DWELLING
- CRUSHED
- ASSOCIATION
- OATH
- WEAKNESS
- JANE'S
- PIRATES
- TELLS
- RETORTED
- COMPLIMENT
- DECLARATION
- GIRL'S
- BEAUTIFULLY
- HANG
- FOLDED
- ESTATES
- STIRRED
- REDUCED
- MARTIN
- CHANNEL
- MAJORITY
- DEFEND
- SEVENTH
- MOTIVES
- KEEN
- WALKS
- AWE
- NORMAL
- LUNCH
- WIFE'S
- EAGERNESS
- INVOLVED
- RENT
- THANKED
- ELSEWHERE
- PERMANENT
- COLUMN
- FINDS
- DAYLIGHT
- BELONGING
- BUSH
- EXHIBITED
- WARMTH
- RESERVE
- PREPARATIONS
- IMPOSED
- PSYCHIC
- CAROL
- SELLING
- LIT
- ABUNDANCE
- ACKNOWLEDGED
- SERIOUSLY
- BACKGROUND
- SUGAR
- INCH
- STIR
- UNIVERSE
- METHODS
- STEAM
- COMPARATIVELY
- NAILS
- WILLINGLY
- OPPOSED
- PRINCES
- ALTERED
- DISPLAYED
- WAVE
- STATED
- EARNESTLY
- ACTIONS
- ELEMENTS
- PERIL
- CATTLE
- COMMISSION
- DEPTH
- OBEDIENCE
- DIAMONDS
- FRO
- SKINS
- DEEDS
- TOIL
- FLOATED
- SOLITUDE
- HASN'T
- POD
- SMOKING
- THENCE
- REFUGE
- THINE
- STEAMER
- CALIFORNIA
- MINK
- HELL
- MORLAND
- SOFA
- JERUSALEM
- EMILY
- BENNET
- GAZING
- CHINESE
- ADAMS
- TIE
- MONICA
- CETERA
- RULES
- CLIFFS
- SNAP
- HALTED
- CARLING
- MARTIAN
- WEAPONS
- ISRAEL
- WRITER
- CATERPILLAR
- TAYLOR
- BRENDA
- CHOKICHI
- GNOME
- CHAUVELIN
- SEED
- SMART
- PEOPLE'S
- THEIRS
- WITNESSED
- CAUTION
- SHAPED
- REASONING
- ARREST
- RECOLLECTION
- WEARING
- FAINTLY
- MARGARET
- APPLICATION
- ENCOURAGED
- HOLDS
- BARRIER
- SHE'D
- LIMITED
- MOSS
- AMUSEMENT
- REGARDING
- FANCIES
- APT
- GRANITE
- BOHEMIA
- PROTECT
- ANGRILY
- WHEREAS
- COMPARED
- VIGOROUS
- CLAIMED
- DELIVER
- BEATEN
- ROOT
- HEROIC
- PLEASURES
- WAVING
- BEDROOM
- CHECK
- ASSIST
- AMUSED
- ROAR
- REPROACH
- INDIFFERENT
- PERPETUAL
- ENABLED
- DEEPER
- INCIDENT
- GAMES
- LOTS
- PINK
- PATIENTLY
- BEGINS
- TRAINING
- HEALTHY
- CORRECT
- BARS
- TRACE
- CORONER
- PLANNED
- GLANCING
- OBJECTION
- ANSWERS
- CUTTING
- HIND
- CALF
- SCALE
- UNIFORM
- CAPTURE
- INQUIRY
- CENTER
- GOSSIP
- CORPSE
- FUNERAL
- OWE
- SCIENTIFIC
- B
- DISGUISE
- CROOK
- FLASHED
- COMMENCED
- SENSATIONS
- HESITATE
- TRICK
- GRIN
- TONES
- SAILING
- TREMBLE
- PREPARING
- GLEAM
- LE
- ALLIES
- PRINT
- PORCH
- COMPOSITION
- SATISFACTORY
- CONCEIVE
- REPOSE
- TIDE
- RESIDENCE
- SEIZE
- PROMPTLY
- COMRADES
- DOONE
- SHAKEN
- YOURSELVES
- GRANDMOTHER
- ANXIOUSLY
- LEISURE
- BOUGHS
- CLOCK
- COUNTY
- MILTON
- HEROES
- MACHINERY
- ENGLISHMAN
- MARS
- HALE
- HOPKINS
- PARKER
- ROBARTS
- COTTON
- RARELY
- EXPECTING
- WE'D
- TRAINED
- BEDS
- PREFERRED
- CARPET
- QUESTIONED
- TUMULT
- ANGUISH
- CLASPED
- OFFENCE
- DANCED
- REMINDED
- CARELESS
- DARING
- LIFT
- FLORENCE
- SAN
- FORTUNATELY
- GIFTS
- RECOGNISED
- COLLECT
- SHEER
- INFANT
- HOPELESS
- PHILOSOPHY
- FLAMES
- COARSE
- DEED
- KARA
- PASSES
- VALET
- DESCEND
- COMPLETED
- AGED
- BREATHED
- ADDRESSING
- HUSBAND'S
- LUNGS
- SUCCEED
- RESISTANCE
- INCLINATION
- GROOM
- COUSINS
- LAZY
- SCARCE
- RISEN
- CROWDS
- VIOLENTLY
- STRUGGLED
- HOLIDAY
- FURIOUS
- DESIRABLE
- REALIZE
- SIGHTED
- ROMANTIC
- RESPONSE
- SYMPTOMS
- FARMERS
- UNCONSCIOUSLY
- ADVISED
- REMOTE
- EMERGED
- SUBMIT
- CLAD
- GERMANY
- RAY
- RECENTLY
- PRINTED
- FAME
- CONFINED
- JOHNNY
- GAS
- EMBRACE
- SUPPLIED
- RYNCH
- LEAN
- ORGANS
- FAVORABLE
- ELEGANT
- GUIDED
- INFORM
- SINISTER
- PASSIONS
- MEDICAL
- NAMELY
- HESITATION
- PAGES
- SWORE
- BREATHE
- CAVE
- NATIVES
- CONSISTED
- MANIFEST
- EMBARRASSMENT
- HEAPS
- HURRYING
- STRING
- LOCK
- ETERNAL
- DETAIL
- ABSENT
- HOARSE
- SPECTATORS
- DISTINGUISH
- FROST
- SNOWY
- THEY'VE
- BACKWARD
- FIERY
- ILLNESS
- PRIESTS
- BALLOON
- QUIXOTE
- JAWS
- MISSION
- REFERENCE
- SHAW
- BARREL
- TERM
- BIBBS
- THEO
- FALK
- CRISTEL
- GENZABURO
- RAWDON
- LYNDE
- SLOPE
- GABLES
- SHY
- ENCOUNTERED
- EARTHLY
- BRED
- MAINTAIN
- APARTMENTS
- DAUGHTER'S
- APPLY
- RINGING
- COMMANDS
- ARRESTED
- ADVENTURES
- AMAZED
- GASPED
- STOOPED
- COUNTER
- JUDGED
- MINDED
- PROTEST
- DISAGREEABLE
- FAITHFULLY
- RESPONSIBILITY
- PEACEFUL
- PHRASE
- DESERVE
- CONSENTED
- OCTOBER
- PRESSURE
- RESPECTS
- LASTED
- INEVITABLE
- RESPONSIBLE
- BID
- YIELD
- EXCLUSION
- MAINTAINED
- SAUCE
- FORMIDABLE
- OLDEST
- WEAPON
- QUEST
- PARLOUR
- AFRICA
- DRAWER
- PANIC
- PLEASING
- DAMAGE
- WIT
- UNDERTAKE
- ENTERTAINMENT
- WINDING
- DWELT
- CEREMONY
- NET
- SUITS
- PRODUCT
- TENDENCY
- CEASE
- AVOIDED
- IMPROVEMENT
- BONE
- STOMACH
- ARRANGEMENT
- SEARCHED
- INQUIRIES
- FIX
- TRACES
- GRASP
- SPEAKER
- FACING
- CONVENIENT
- PRAYED
- TENDERNESS
- SUSPENDED
- LEARNT
- RESERVED
- SHOPS
- RULED
- UNCERTAIN
- SINK
- MARKS
- RELATIVES
- SENSITIVE
- SPAIN
- SINCERE
- DIGNIFIED
- SIGNIFICANT
- VEHICLE
- AVERAGE
- FIRES
- SUPPLIES
- ARRANGEMENTS
- TRIFLE
- REPEATING
- ADDING
- PHENOMENA
- AIM
- LIMITS
- LIP
- BOY'S
- MURMUR
- PILLARS
- BRIGHTLY
- SWIFTLY
- JOYOUS
- JEALOUSY
- WARRIOR
- CONTRAST
- EXTRA
- AWFULLY
- DEFEAT
- ENTHUSIASTIC
- INCHES
- DROPPING
- REDCOAT
- NERVES
- BITE
- CRACK
- SERGEANT
- DOCTRINE
- C
- MIXTURE
- INTERVALS
- FEATHERS
- BUFFALO
- FOLK
- OFFERING
- COMRADE
- BELLS
- STOLE
- SIGNAL
- SWINGING
- AUTHOR
- DISMISSED
- THORPE
- RELATE
- WILDERNESS
- TREASURES
- PROPHET
- FELIX
- COMPREHEND
- DARCY
- ASSUME
- FRANCES
- WEEP
- JACKET
- HERD
- ACCENT
- OPERATOR
- KNIGHTS
- LANTERN
- SIN
- METERS
- GREENLAND
- THRESHOLD
- TWAS
- GLACIER
- MACHINES
- KWAIRYO
- ASSISTANT
- BULLS
- REX
- ELK
- SHERIFF
- SPILETT
- CRAGGS
- STRONGEST
- WONT
- WIRE
- BRAND
- CHIN
- UNFORTUNATELY
- CONFESSED
- MUTUAL
- CARD
- FIRMNESS
- BLUSH
- CORNERS
- BABIES
- HELPLESS
- FRANKLY
- SURROUNDINGS
- HARSH
- INTERFERE
- RESTLESS
- BENCH
- PROPOSAL
- ORGAN
- AGITATED
- SUBLIME
- GREETED
- FEBRUARY
- PROCEEDING
- VAN
- ANGLE
- FAIRER
- PASSAGES
- PARCEL
- WASTED
- CORRIDOR
- ARTIFICIAL
- THOUGHTFULLY
- DEPARTMENT
- SPECTACLE
- AGENT
- BEHALF
- STAMPED
- OCCUPATION
- ELEMENT
- ROMANCE
- TEST
- PIG
- DEER
- FROG
- COMPLEXION
- LINEN
- RADIANCE
- CONTEST
- PARTNER
- LIABLE
- CALCULATED
- LATIN
- BALLS
- ADMIRABLE
- FOOTSTEPS
- REGULARLY
- INCLUDED
- UPWARD
- DISLIKE
- TEACHING
- COLLECTED
- SWALLOWED
- WONDERS
- FINISH
- GENIE
- EXPRESSIONS
- DESTINY
- RICHES
- CIGAR
- AMIABLE
- TRIBUTE
- BONDS
- FORMING
- HOSTILE
- BELT
- WARS
- QUIT
- FREQUENT
- IMPULSES
- INFLUENCES
- DISCUSSED
- CONSEQUENCES
- THEREBY
- BELIEVING
- SCHEME
- COMPLEX
- OUTWARD
- CLOAK
- TERRIFIC
- AMBITIOUS
- VANITY
- IMPROVED
- STROKE
- WHITHER
- LOCKS
- STRICTLY
- CHILD'S
- FRIDAY
- CHARGED
- MONDAY
- SHINE
- SONGS
- ENDURED
- EMBRACED
- BOWING
- POLE
- CART
- POPULACE
- VISITORS
- HERE'S
- CIRCUS
- SISTER'S
- STOVE
- SWOLLEN
- JAPAN
- ABOARD
- LADDER
- MILD
- BOILING
- ATTEMPTS
- AFFECT
- MURDERED
- SNAKES
- LACE
- APPETITE
- GENERATIONS
- GALLERY
- JOSEPH
- HURSTWOOD
- DANDY
- WHEREUPON
- ENTERTAINED
- PULLING
- MOSCOW
- POLITICS
- TOOLS
- MONSTROUS
- WOUNDS
- DOTH
- ANTS
- NICHOLAS
- DORA
- ACADEMY
- AIRSHIP
- CYRUS
- SEXUAL
- JOSIANA
- AVONLEA
- BARELY
- SITUATED
- PARLOR
- RIGID
- HUMOR
- HIRED
- BURNS
- STOLEN
- HORRID
- GLOVES
- REGRETTED
- SEEMING
- BETRAYED
- MOURNING
- SWEAR
- FEVERISH
- MURDERER
- LIKES
- INVENTION
- RECOMMEND
- PROTESTED
- TUNE
- DESTINED
- REMEMBERING
- NINTH
- OVERWHELMED
- CONSIDERABLY
- TENTH
- INDUCED
- INSIST
- ASSENT
- BUNCH
- DELICIOUS
- UNNECESSARY
- GROAN
- VERSES
- COWARD
- RECOGNITION
- ADJOINING
- ENCOURAGEMENT
- RIDICULOUS
- INTEND
- GREEK
- ATTRACTED
- OBVIOUSLY
- VOLUMES
- GRASPED
- NEIGHBOURS
- CARDS
- ADMIRE
- EXCHANGED
- ROWS
- REMARKS
- STRINGS
- LADEN
- DETERMINATION
- OCCUR
- LIVER
- WHALE
- BLOCK
- COMPLICATED
- DISTINCTLY
- UPRIGHT
- OPENLY
- PROMINENT
- GUARDED
- UPSTAIRS
- P
- VICTIMS
- PURCHASE
- CHERISHED
- COMPASSION
- MORALITY
- MERCHANTS
- WARMLY
- WELCOMED
- AMUSING
- FLOWED
- AVENUE
- ORGANIZATION
- LEAGUES
- UNEASINESS
- SNAPPING
- ROARING
- SMELL
- RIVERS
- ROUNDED
- EXAMINE
- AMERICANS
- COUNTING
- PLANTED
- REPORTS
- GRAVITY
- CITIZEN
- PANTING
- STRETCHING
- PROMISES
- ARMIES
- OBTAINING
- SUGGESTIVE
- SUGGESTIONS
- CRITICISM
- STRIVING
- WINNING
- STUDENTS
- GIGANTIC
- SILVERY
- BENDING
- FORGETTING
- HAIRED
- EXQUISITE
- EXCESS
- TORRENT
- POLICY
- NIECES
- THOUGHTFUL
- STABLE
- FLOATING
- HIGHNESS
- PROVIDENCE
- HASTY
- CANADA
- ROCKY
- SEEMINGLY
- MASSIVE
- RUBBING
- MIRANDA
- BRONZE
- UNDERNEATH
- PACK
- BURN
- ONLOOKER
- HORSEBACK
- KEEPER
- EUROPEAN
- CHAINS
- HAIL
- PLAYS
- STORMS
- DASHED
- MINES
- DRAG
- DARTED
- STICKS
- SIMON
- SLOPES
- DESCENT
- LILIES
- TEACHERS
- LAYING
- DETECTIVE
- LADY'S
- TRACK
- PRECEDING
- JEW
- BEWILDERED
- BUNDLE
- ALBERT
- BRIEFLY
- HYPNOSIS
- NOVEL
- BOLDLY
- CHARACTERISTIC
- PRIMITIVE
- ABANDON
- H
- MUSCLES
- PROVIDE
- NAPOLEON
- LAIN
- BORODINO
- SUPPOSING
- DURHAM
- DEMOCRACY
- HEROD
- BATES
- PEER
- STEPHEN
- ANTHONY
- PYE
- CHARLEY
- KOYO
- CONSTANCE
- CONNISTON
- BARGAIN
- PRESSING
- VISITS
- PRECISE
- DOCTOR'S
- ORPHAN
- DREADED
- SHE'LL
- FADED
- SPARED
- PHANTOM
- BLESSING
- CONDEMNED
- TWIN
- GAILY
- PRETEND
- HULLO
- QUICKER
- MOSTLY
- TRAGEDY
- OPPRESSED
- WANTING
- DECENT
- NEIGHBOUR
- INFERIOR
- EXISTING
- STROLLED
- PUNISH
- COMMERCE
- PROVINCES
- TROMP
- TERRIBLY
- MONK
- FIERCELY
- CONSULTED
- THREATENING
- STRAIN
- STIRRING
- MELTED
- INWARD
- DWELL
- RUNS
- ATTRACTION
- ESTEEM
- REPLACED
- ANSWERING
- YIELDED
- LIFTING
- CONFIRMED
- ELBOW
- SORE
- WHO'S
- SNEER
- STAINED
- STRUGGLING
- SWEEP
- COLUMBIA
- BANNER
- DOCTORS
- FINER
- NEEDN'T
- SWALLOW
- SUITED
- BIDDING
- PROBLEMS
- RESTORATION
- PROFIT
- WIVES
- PRODUCING
- ASSISTED
- INJURED
- HARVEST
- BEHAVIOUR
- OBSCURE
- JAIL
- SUITABLE
- ROOFS
- FORBIDDEN
- SALVATION
- WITS
- GHOSTS
- DOWNWARD
- DUG
- W
- AFFECTIONS
- RESTORE
- CONTAIN
- PIERCED
- EXCITE
- ENDEAVOURED
- SIRE
- TOBACCO
- GENERATION
- INSTITUTIONS
- SOUP
- SCHOOLS
- COURTEOUS
- WHEELS
- GRACIOUS
- ASSERTED
- DIFFERENTLY
- COLORS
- LUXURY
- RECEPTION
- MONTE
- CONSOLATION
- PAVEMENT
- ROTTEN
- HAILED
- ARDAN
- TOMB
- TRAVELING
- FOLLOWERS
- DRIFTED
- HEATHERSTONE
- FORTUNES
- HUMPHREY
- ATTENDANT
- SURRENDER
- LOVERS
- PARTICULARS
- CONFLICT
- DANGERS
- CLIMBING
- CRUELTY
- INJUSTICE
- BLANK
- INCAPABLE
- CONTINUAL
- AWKWARD
- TIMID
- TRADITION
- SWIMMING
- SWAM
- CONSTANTINOPLE
- TURKEY
- APPLES
- ACRES
- CAESAR
- PRACTISED
- CREEP
- PIPES
- SLAIN
- MEETINGS
- SEAL
- IRRESISTIBLE
- CROP
- ACCORD
- KILLING
- SYNDIC
- RESEMBLANCE
- DEAF
- BLOSSOMS
- DRINKING
- EDUCATED
- DETERMINE
- REVENGE
- MASK
- TWILIGHT
- AMIDST
- BLOWN
- DRAKE
- CHARLOTTE
- UNDOUBTEDLY
- LOGS
- OWL
- EXERTION
- DERIVED
- CIGARETTE
- LEADS
- ENABLE
- THIRST
- PERFORMANCE
- INTERVAL
- CONFIDENT
- DAT
- PROCURED
- APOLOGY
- ADMISSION
- ATLANTIC
- PERSONALLY
- FOUL
- THREAD
- MUSKETEERS
- DISTURBANCE
- RUINS
- HUNTER
- MOTOR
- PULSE
- V
- ROUTE
- EARLIEST
- BLOT
- GRANDCOURT
- GLEAMING
- COACHMAN
- ONWARD
- REVIEW
- WAGES
- CUPID
- GREATNESS
- BRIG
- FRERE
- WIRELESS
- MERRIWIG
- WHISTLER
- FERRIS
- CUTHBERT
- KNITTING
- MARE
- NOTION
- MAIL
- THEY'LL
- HALLS
- GLANCES
- PERFECTION
- CONTRACT
- WRETCH
- HONORABLE
- RECALL
- REMEMBRANCE
- SUSPICIOUS
- APPRECIATION
- VEIN
- DISCUSSING
- REGARDS
- SMALLEST
- REVIVED
- BASED
- ADMIRAL
- DESPITE
- SUBMITTED
- LARGELY
- BLOWS
- ENEMY'S
- YOUTHFUL
- COMPLAINED
- DEFENCE
- TEMPTED
- RADIANT
- DISTURB
- COLDLY
- SLEEVE
- SERVING
- EXAMINING
- PATRIOTISM
- FOLDS
- PASSIONATE
- OFFERS
- NIECE
- VEXED
- LEAP
- CROSSING
- POUND
- DRESSES
- PUSH
- TAP
- UNIQUE
- CONTINUING
- REQUIRES
- HAUNTED
- ECHOED
- REFLECTIONS
- MANAGER
- ACCOMPLISH
- STUMP
- MINISTERS
- POLISHED
- PERCEIVING
- COMMUNICATE
- BANQUET
- FACTORY
- STUDIO
- CHUCKLED
- DIGGING
- TUNNEL
- INSIGNIFICANT
- ALTER
- CRISTO
- ENTERS
- PROPOSITION
- MAGPIE
- MARCHING
- NICHOLL
- OCCUPY
- MATERIALS
- BET
- NEEDLE
- PERIODS
- RELATIVE
- WORLDS
- INTENT
- RECOLLECT
- STANDARD
- ACCEPTING
- HYPNOTISM
- HYPNOTIZED
- MYSTERIES
- DISPLAY
- CREATE
- POISON
- STUDIES
- NON
- NEGATIVE
- UNEXPECTEDLY
- GLITTERING
- ANALYSIS
- DISMAY
- ZEAL
- PROPRIETOR
- STOCKINGS
- CRACKED
- ENVELOPED
- GRANDEUR
- PLENTIFUL
- SUSTAINED
- MAGUA
- EXTREMITY
- PACIFIC
- ERECT
- CRIMSON
- HARBOR
- PORTER
- PROCEEDINGS
- DISGRACE
- CLOSET
- ROBIN
- RESEMBLED
- EIGHTEENTH
- TALENT
- SHOOTING
- DEVOTION
- SINS
- CANOE
- CABLE
- TRAVELLED
- TEMPTATION
- PIT
- CORRAL
- JEST
- TRIGGER
- BASIN
- QUEEN'S
- MARRYING
- SEPTEMBER
- PATTERN
- ERRAND
- QUANTITY
- CREAM
- ALLOWING
- SPARKLED
- BOAST
- EQUIPPED
- ELECTION
- ARTS
- MOUTHS
- WHARTON
- INTERRUPTION
- HORSEMEN
- INDIA
- REACTION
- DRUNKEN
- DROUET
- CAUTIOUSLY
- UNREASONABLE
- WOLF
- SCREAM
- ENDEAVORED
- BEATS
- CHAP
- SOURCES
- GULF
- LIONS
- FISHERMAN
- SALOON
- SLEDGE
- MARTIANS
- CHEERING
- PISTOL
- RAIL
- MANAGEMENT
- COPY
- WRITES
- GUINEAS
- SWELL
- SANCHO
- TARS
- TUESDAY
- SCOUT
- AGNES
- RIFLE
- DANTES
- MORTON
- BARRY
- PINES
- BORG
- NATHAN
- CARSON
- DEASEY
- LYRA
- CUCUMETTO
- ABUNDANT
- KNOT
- SAVING
- SPENCER
- EASIER
- RICHARD
- DOUBTS
- DEARLY
- PLUMS
- NOD
- ATTENDING
- AWAITED
- VICE
- ROUGHLY
- FEROCIOUS
- ABANDONED
- GRATIFIED
- TWENTIETH
- PAINS
- RESOLVE
- BEHAVED
- FRIEND'S
- DELICACY
- BEECH
- ANTICIPATED
- HUSH
- REPUBLIC
- ORDERLY
- AFFORDED
- RESENTMENT
- UNDERTAKING
- THIRTIETH
- DEPENDED
- NAVY
- SCOTLAND
- PROTECTED
- ANCESTORS
- OWED
- DEBATE
- LIQUID
- POUR
- STRAINED
- INTRODUCTION
- CARRIES
- ASSOCIATED
- SIGHTS
- APPREHENSION
- VULGAR
- GROTESQUE
- PRIVILEGES
- REVERENCE
- DISMAL
- CHIMNEY
- GRIM
- SPECIMENS
- EMINENT
- MIRTH
- REFLECT
- TRANSFERRED
- WANDER
- WAIST
- ENVY
- COWS
- INTIMACY
- PERSONALITY
- BASIS
- SELFISH
- SPOIL
- FOUNDATION
- PEAKS
- SPOTS
- VEXATION
- CLOTHED
- BARBER
- MALE
- HONEY
- BRIDLE
- DELIBERATELY
- PATCH
- WEARINESS
- THICKET
- OHIO
- TOTALLY
- PILES
- RELIEVE
- WAKING
- CURE
- SURPRISING
- FOUNTAIN
- CELEBRATED
- INJURY
- RETIRE
- MIRACLE
- FIST
- COMMERCIAL
- GOPHER
- LANDS
- PATHS
- SUFFRAGE
- CLIMB
- COMPARISON
- PENCIL
- UNWILLING
- PROCESSION
- INSULT
- TRAVELERS
- STRETCH
- CLUNG
- RETREATED
- HARNESS
- SCENT
- COUNTLESS
- BELONGS
- PERPLEXITY
- GENEROSITY
- CHARMS
- READERS
- ARGUMENTS
- TESTIMONY
- EXPERIMENTS
- VITAL
- OCCASIONAL
- CLINGING
- BROWN'S
- RESISTED
- KNOCKING
- CASTING
- SWEEPING
- SUBDUED
- SUBTLE
- APPLAUSE
- MARVELOUS
- ESTABLISH
- BLOWING
- BRUTAL
- SPARKLING
- CONFOUNDED
- RACES
- OFFENDED
- BITS
- EGYPT
- MICE
- SAVAGES
- MOOSE
- AREA
- BOTHER
- CAPITALIST
- MISSISSIPPI
- JAR
- NEWLY
- PERISH
- ANGELS
- PICKING
- HAWK
- HONESTLY
- USES
- HEED
- REGIONS
- SHOTS
- HOMEWARD
- PILOT
- BORROWED
- TASTED
- FURNISH
- EXHAUSTION
- KEYS
- ALLEN
- WEALTHY
- FORTNIGHT
- MEMORABLE
- MEN'S
- S
- ORLEANS
- RESEMBLING
- DECAY
- BLAZE
- UNUSUALLY
- PACES
- ROGER
- PICTURESQUE
- CHECKED
- HUNTED
- THEREUPON
- EXTENSIVE
- BROTHER'S
- PREVAILED
- ARISE
- COMMONLY
- COMMENT
- SOBER
- STATIONED
- THEREAFTER
- WALLACE
- FRAGMENTS
- ACCOUNTS
- PLACING
- LEADERS
- STRUCTURE
- SUBSEQUENT
- MYLES
- SUBSTITUTE
- RAFT
- FORMATION
- DEFEATED
- NEIGHBORING
- PUDDING
- AMPLE
- APPOINTMENT
- LOCATED
- SICKNESS
- TIGER
- SHALT
- JUDITH
- HULL
- RIVAL
- UPROAR
- WI
- EVERLASTING
- BUTTERFLY
- PARRY
- SONYA
- SPEAR
- TOBY'S
- CONVICTS
- MACKINSON
- GERMANS
- LEGAL
- CHEE
- OGLETHORPE
- PHRONSIE
- GIMBLET
- CAVELL
- PASTRINI
- BADGER
- TURTLES
- TRAVERSED
- THEREOF
- FLUSH
- J
- FOUNDED
- ASYLUM
- STRICKEN
- ALEXANDER
- MISTS
- DEN
- EXTENDING
- OBSERVER
- BARONESS
- PRODUCES
- CAVALCANTI
- GUILT
- INVOLUNTARILY
- WHISTLE
- MOURNFUL
- PURSUE
- CRIMES
- HANDFUL
- GRIP
- CLEANING
- BERRIES
- HEROINE
- ASSERTION
- ENCOURAGE
- VELVET
- LIKING
- FOLIAGE
- OBSTINATE
- ADVISE
- SUMMON
- LORDSHIP
- BIND
- RIPE
- BOARDS
- PROVINCE
- DECEMBER
- PORTIONS
- OFFICIALS
- RECESS
- MOMENT'S
- MARVELLOUS
- OYSTERS
- FELICITY
- VARIED
- IMAGES
- VIOLET
- STANDPOINT
- COVE
- JUNIOR
- IMPATIENTLY
- EH
- TRIUMPHANT
- SUSPICIONS
- REMARKABLY
- EMBARRASSED
- JUDGING
- HOSPITALITY
- MIXED
- INCIDENTS
- HINT
- REMIND
- HOUARN
- HASTEN
- TEMPEST
- PAWS
- SHELF
- MOMENTARY
- SLIPPING
- HELPING
- COMBINATION
- STRIP
- MAP
- TROUSERS
- SARAH
- BRASS
- COUCH
- INEVITABLY
- DEPOSITED
- JURY
- CLEARING
- PERSISTED
- WHERE'S
- GREETING
- TELEPHONE
- SMOKED
- LIMIT
- SLEEVES
- STARTLING
- RESOURCES
- REVOLT
- SPEAKS
- PHYSICIANS
- CURED
- MEDICINES
- COMPLIMENTS
- BISCUITS
- PROCURE
- AFFECTING
- LIBERAL
- DEPART
- RECOMMENDED
- DESERVES
- HARRY
- EFFICIENT
- ELECTRIC
- COOKING
- COLUMNS
- EVENINGS
- IMAGINARY
- COURTESY
- MILLIONAIRE
- G
- MINING
- CLAWS
- EXECUTED
- ASCERTAIN
- PREPARATION
- EXPENSIVE
- PROJECTILE
- ACHIEVEMENT
- CONCEIVED
- INTENTLY
- PUPIL
- TENTS
- OUTLINE
- BRINK
- SUPPRESSED
- ADVERTISEMENT
- PSYCHOLOGICAL
- DOCTRINES
- TWINKLING
- STEAL
- HEN
- EXAMPLES
- HESITATING
- BARBAROUS
- FERRALTI
- DECEIVE
- OBJECTED
- ELIZA
- REPRESENTATIVES
- OBSERVATIONS
- ORIGINALLY
- CIVILIZED
- CONCLUDE
- SALE
- ATTENTIVE
- DEPENDENT
- BESTOWED
- VILLAGES
- RETURNS
- STOOL
- PRAYING
- RUBY
- HEAVENLY
- LUMBER
- PITCHED
- PARADISE
- CHANGING
- NOSES
- REPAIR
- UNWORTHY
- TOMORROW
- PUBLICLY
- SOBBED
- CARTER
- LANDLORD
- EX
- GLACIERS
- CHALK
- FAMINE
- RISES
- PROPRIETY
- ALONGSIDE
- CHOKED
- INGENIOUS
- REVELATION
- REPRESENT
- CARVED
- FEATURE
- ASSOCIATIONS
- CERTAINTY
- DRAGON'S
- SIEGE
- CRICKET
- COMMUNICATION
- TERRIFIED
- MONKEY
- BATHING
- CRAZY
- RULERS
- TUMBLED
- ROBBED
- GWENDOLEN
- PORTRAIT
- TEMPERANCE
- MONKEYS
- ERECTED
- COMBAT
- RANKS
- HAUGHTY
- CHAMPION
- MOB
- GROSS
- BANNERS
- FAILING
- RAVEN
- MAGICIAN
- WOLVES
- ROBBERY
- JEWEL
- FORE
- PIN
- RECORDS
- ROPE
- KIN
- SOB
- SEPARATION
- MOHAMMED
- CHURCHES
- ULTIMATELY
- SPECIALLY
- HISTORIAN
- BACKWARDS
- LUSH
- SECTION
- DENSHER
- HONORS
- MOTIONED
- BIGGEST
- ICY
- LEVISON
- LEAPING
- KEATS
- AWAITING
- TARKAS
- SKELETON
- OAR
- MANUSCRIPT
- PITI
- GAMBLING
- ARISEN
- RUSSIANS
- REDOUBT
- COLLINS
- STEAMERS
- WEIGHED
- PAINTING
- GERARD
- SOCIALIST
- THEODORA
- ZVERKOV
- JEWISH
- ETHEL
- LUFTON
- KEMP
- KAVIN
- HARDQUANONNE
- WINGFOLD
- O'SHAUGHNESSY
- TEMPLETON
- AUGUSTINE
- CONCERNS
- HANGED
- DUMB
- PUTS
- PERSONAGE
- LACKING
- GROANED
- PECULIARLY
- WORLDLY
- MODEL
- ASCENDING
- ROBBER
- DESOLATE
- MANSION
- COMPLAINTS
- MINOR
- TALKS
- HOOK
- WIG
- NURSERY
- FLIES
- ASKS
- STRICT
- DEFINED
- THRILL
- UNDERTAKEN
- COMMUNICATED
- UNCLE'S
- SEVERELY
- DEEMED
- OPPORTUNITIES
- TERRITORY
- CONSIDERATIONS
- COMFORTED
- SWEETEST
- ENCLOSED
- BROODING
- ASSEMBLY
- ATTACKS
- PREY
- CROMWELL
- GALE
- STORMY
- FAVOURABLE
- CONQUEST
- DISCOURAGED
- CO
- BETRAY
- EGG
- PARTIAL
- SPED
- INTERCOURSE
- BROWS
- WHEREIN
- CONTRIVED
- INVITE
- PITIFUL
- JUSTIFIED
- VIEWED
- SHIVERED
- TRAVELLERS
- LATEST
- STAMMERED
- CROOKED
- PLEADED
- EMPLOY
- HATEFUL
- INFERNAL
- NIGHT'S
- RAGGED
- TRAVELLER
- FLOAT
- REFRESHED
- CATHEDRAL
- COTTAGES
- THATCHED
- SPENDING
- LODGING
- BLUSHING
- CRADLE
- JUMP
- SPELL
- PROUDLY
- AMUSE
- HEDGE
- APRON
- DECLINED
- SCREAMING
- DEVELOP
- UNITY
- INTENSITY
- HOTELS
- VICINITY
- BATHED
- PLEASANTLY
- TRIFLING
- APPROPRIATE
- THICKLY
- CARES
- LADS
- DRUG
- HEEL
- DAINTY
- DISPATCHED
- REMAINDER
- MULE
- ENRAGED
- JOYFULLY
- ENGAGE
- MONARCH
- RESPECTFUL
- FACTORIES
- ASHES
- BLOCKS
- LAMPS
- ACQUAINTANCES
- DIVISION
- WAVERING
- SQUIRREL
- CEILING
- EXPERIMENT
- INDESCRIBABLE
- FORMAL
- EMPTIED
- INVARIABLY
- DISGUST
- CRANE
- CAGE
- APPARATUS
- INCREDIBLE
- ADVERTISING
- IRREGULAR
- BLUNT
- VINE
- GOAL
- SALUTED
- DEPENDS
- REPAY
- CIRCLES
- HARVARD
- DISCIPLINE
- PSYCHOLOGY
- STICKING
- NAUGHTY
- CONTINUOUS
- WONDERFULLY
- STAGGERED
- REALM
- THEORIES
- COMMANDING
- TERRACE
- NOBLEMAN
- NOBILITY
- JESSE
- WINESBURG
- HISTORICAL
- EXTINGUISHED
- HEARTY
- ESTIMATE
- SHARED
- NOSTRILS
- CONVINCE
- STATUE
- ENTITLED
- WARMED
- AY
- BABE
- MUSTN'T
- INTRODUCE
- ROSY
- REFINED
- R
- FAILS
- BREATHLESS
- CHICKEN
- CONCERT
- RAGS
- DISORDER
- FLUTTERING
- BLEEDING
- FLUTTERED
- BEGGAR
- WRATH
- RESPECTFULLY
- COMBINED
- FULFIL
- DESPISE
- NOWADAYS
- TYPES
- NINETEENTH
- DEMOCRATIC
- RIDER
- FUEGIANS
- STRAIT
- ADMIRING
- CANOES
- HURLED
- SPEECHES
- COMPARE
- LOWEST
- BRUTE
- SHELTERED
- MARTHA
- TIDINGS
- MAST
- CANNON
- DRAMA
- ARMOUR
- BIGGER
- HURRIEDLY
- WAISTCOAT
- BACKED
- CONTINENT
- ARROW
- DESPERATELY
- ATTAINED
- FELLER
- ONTO
- JUMPING
- WRIT
- CHANCED
- ANTI
- N'T
- SPRINGING
- HISSING
- SERENE
- ENGINE
- CROWNED
- DINAH
- EELS
- RASPBERRY
- DEVICE
- BOUNDS
- INDICATE
- HARVEY
- HOWL
- FLASK
- BATTLES
- PURCHASED
- CLUBS
- JOHN'S
- SETTLING
- TRACED
- ENERGETIC
- FEARING
- OBJECTIVE
- ARTILLERY
- MESS
- CASTLES
- GRATIFY
- HOBBS
- ELECTED
- LIFELESS
- LAWRENCE
- MAJESTIC
- CARTHAGE
- ANTIQUITY
- BEER
- SUPERINTENDENT
- DRIFTING
- HITHER
- EXILE
- STRINGHAM
- BEND
- GRADUATE
- FORTRESS
- SHOE
- BLESSED
- WORKERS
- ATTRACTIVE
- BRISTOL
- COSSACKS
- STEPPING
- VOTES
- VOTED
- TROUT
- DATA
- DURATION
- PICKETS
- WORKHOUSE
- DUDLEY
- WHITTAKER
- NORHALA
- CATHOLIC
- LAURA
- BARTON
- ARMAND
- MUNGER
- WESTON
- RECTANGLE
- NEWBERRY
- LEGISLATURE
- DRAMATIC
- MEDEA
- BRAZEN
- ROBY
- BARTHOLEMY
- REHNHJELM
- FALANDER
- SELLEN
- JEAN
- VON
- GLOODY
- NORWAY
- ALLAH
- TEAPOT
- RUGGLES
- WIGAN
- CLAVIER
- CITOYEN
- LOKI
- CHANNING
- SPOILED
- NODDING
- PLATES
- EXCLAMATION
- LOBSTER
- WARN
- COMFORTABLY
- GRASPING
- CHEERFULLY
- PLUM
- MISSIONARY
- DEBRAY
- BOND
- WITHDRAW
- REJECTED
- EXCITING
- CLEARER
- FASHIONABLE
- CONTRACTED
- PURSUING
- EXPRESSING
- REFER
- CODE
- FAULTS
- JOYFUL
- HATS
- TWINS
- SHOCKED
- DOUBLED
- FAIRIES
- ARCH
- SHIVER
- PETER'S
- OBSTACLE
- IMMENSELY
- SCORN
- DREARY
- SYMPATHETIC
- DIFFER
- FRIGHTEN
- DENSE
- READINESS
- ENVOYS
- NEIGHBOURING
- WALTER
- ALLIANCE
- STEWART
- SQUADRON
- INTERFERENCE
- SOLUTION
- WELFARE
- SIXTEENTH
- EFFECTED
- ADVERSARY
- PROSPERITY
- UNEQUAL
- PERPLEXED
- PROFESSED
- OPPONENT
- INDIGNANTLY
- ACHIEVED
- OBSTACLES
- BOILED
- OYSTER
- BOIL
- INSTRUCTION
- MOTIONS
- PEEPING
- STAKE
- EMPLOYMENT
- CASH
- ROARED
- CELLAR
- POLICEMAN
- WRIST
- GRINNED
- CRITICAL
- GRIMLY
- WALKER
- ALE
- PATCHES
- ANNOYED
- HINDER
- WINES
- BOWL
- TASTES
- DISPLEASURE
- CHAOS
- FACTOR
- DASH
- BEHAVE
- FARE
- CONVENTION
- SHADY
- CEMETERY
- ILLUSION
- HAPPIER
- CRUSH
- SHRANK
- STUDYING
- RECKONING
- CATASTROPHE
- PROMPT
- EFFECTIVE
- BOTTLES
- COMPOUND
- WIPED
- BETWIXT
- INHABITED
- PROMISING
- SON'S
- ENCHANTED
- MACE
- COURTIERS
- PURITY
- VIGOR
- SORROWFUL
- STRETCHES
- FURIOUSLY
- MAUD
- DISCIPLES
- CHUCK
- WHISKERS
- VEGETABLES
- SORROWS
- DUCHESS
- INVOLUNTARY
- CALAMITY
- RESTRAIN
- AWAKENING
- WORRIED
- STUPIDITY
- BOOT
- WOOL
- CARS
- L
- ALERT
- GESTURES
- MID
- GRAVEL
- STEWARD
- IMITATION
- ROB
- EXTEND
- POSSIBILITIES
- URGE
- BITING
- BRAINS
- GOTTEN
- SUNNY
- SCENERY
- YIELDING
- ANIMATED
- SHOUTS
- SHRILL
- FITS
- UNLUCKY
- INSPIRED
- DEEPEST
- VOID
- DROWSY
- SOBBING
- SHRIEK
- DISTRACTED
- HOSTS
- ACCOUNTED
- SIMULTANEOUSLY
- REIGNED
- SIMPSONS
- CRISIS
- RIGHTLY
- MODESTLY
- OPERATIONS
- MAPLE
- GOVERNED
- PACKING
- POLITELY
- EXHIBITION
- DREADFULLY
- BUTTON
- AL
- RESPECTED
- SYRIA
- CAUSING
- POURING
- ABBE
- EPOCH
- LEGITIMATE
- WOE
- FOOLS
- SPECTATOR
- WIDELY
- BORDER
- SOUTHWARD
- SHIFTED
- DIVE
- SLAUGHTER
- ENSUED
- MUTE
- CAPTAIN'S
- HUMMING
- TEDDY
- DAN
- CELL
- SCRAPPER
- WORKER
- WORM
- CHARACTERISTICS
- FERTILE
- RESULTED
- MUSKRATS
- BLAZING
- EDITION
- TORTURE
- CARRIAGES
- TRICKS
- URGENT
- CRYSTAL
- FOXES
- COPPER
- DOWNSTAIRS
- DEVELOPING
- SINKING
- TRAVELED
- SLIPPERY
- ABYSS
- INDULGED
- BUCCANEERS
- HAZARD
- MUFFLED
- FASCINATED
- DOUBTED
- CLAIMS
- LAUNCHED
- HAMLET
- CRAYFISH
- THORNTON
- DEW
- MARIANNE
- DISGUSTED
- ZADIG
- ATTENDANTS
- REQUESTED
- GENTEEL
- AXE
- ADAPTED
- MONTONI
- HOOD
- ASH
- FLOCKS
- FERNANDO
- FALSEHOOD
- ATTACHMENT
- LOAF
- DOOMED
- HOUNDS
- UTTERING
- NARRATIVE
- REJOICING
- INSTINCTIVELY
- ROPES
- ACTIVITIES
- ARTISTIC
- CUSTOMARY
- EMPHASIS
- VANDALS
- EMPEROR'S
- NEMO
- TIGHTLY
- SLEDGES
- CHOCOLATE
- PARSONAGE
- PERISHED
- FORWARDS
- LEGGED
- WHEEL
- LARRY
- MATCHES
- JOHNSON
- OXFORD
- PREMISES
- IVORY
- PARSON
- RECKONED
- MADNESS
- MILLER
- PRESERVATION
- MAGISTRATES
- STRAYED
- CHEERS
- TREASON
- MESOPOTAMIA
- THEREIN
- FRIGATE
- BEGGING
- ARCHIBALD
- ORNAMENTS
- HORNS
- ARROWS
- TRAFFIC
- LODGED
- REBELLION
- FLANK
- GIANTS
- VENERABLE
- SIMPLETON
- SANDY
- PICKET
- LOGIC
- ARMOR
- CHIU
- VENTNOR
- SAVONAROLA
- LORENZO
- SOLEMNLY
- EURALIA
- ER
- DENIS
- KENNETH
- FORBES
- LEVIN
- SIMONOV
- GRAPES
- BAXTER
- GAVROCHE
- REGINALD
- TEBBS
- BEECHES
- CHAPEL
- KIHACHI
- MARTINEAU
- VAMPA
- CHOPIN
- ELLISON
- AMABEL
- TAD
- CROXLEY
- SECRETS
- PRIVATELY
- PECK
- CHERRY
- VINES
- WEREN'T
- TONIGHT
- FEMININE
- WISER
- STOOPING
- HOMELY
- MEDIUM
- INNOCENCE
- AFFLICTED
- LABYRINTH
- CORRUPTION
- LENT
- PEEPED
- AFFECTIONATE
- PARALLEL
- RASCAL
- ENDEAVOR
- ATTORNEY
- FASCINATING
- NOTICING
- SOBS
- ECSTASY
- APPRECIATED
- TOUCHETT
- SELECTED
- GUESSING
- HENRIETTA
- HEALING
- SPREADING
- TURF
- FACULTY
- APPRECIATE
- PERPETUALLY
- RECONCILED
- ATTRACT
- CULTIVATE
- ADDITIONAL
- CONFERENCE
- COMMANDERS
- VICTOR
- DISCONTENT
- ESCORT
- SUCCESSFULLY
- REPRESENTING
- INDUCE
- PROTECTOR
- RULER
- SHATTERED
- ANNUAL
- INTERNAL
- SUMMONS
- ASSIGNED
- CORRESPONDENCE
- PROMPTED
- PEPPER
- INNUMERABLE
- OPENS
- HARDNESS
- ATTAIN
- IMMORTAL
- PHILOSOPHER
- INSPIRATION
- HORRORS
- FROWNED
- TIPPED
- WHIM
- GLARING
- GENIAL
- DEFENDED
- ABUSE
- CLIMATE
- HANDLING
- APPROVED
- CONFIDENTLY
- INASMUCH
- PROLONGED
- COLOURS
- DWARF
- SHAPES
- NEATLY
- MOUNTING
- ALTAR
- VOW
- COURSES
- SUBMISSION
- ACCEPTABLE
- FUNCTION
- FRANKNESS
- BRAVELY
- INVENTED
- COMPLAINT
- CHILL
- MUSCULAR
- BREAKS
- SWAMP
- DITCH
- DESCRIBING
- RELEASE
- STAIRCASE
- JERKED
- RHYTHM
- COLOUR
- LAWYERS
- HARMLESS
- WALLET
- DEBTS
- ALMS
- STREAMING
- FORBEAR
- FAINTED
- RIBS
- CHAIRMAN
- AMATEUR
- MILLS
- MONOTONOUS
- PEERING
- IDEALS
- POTATOES
- HOLIDAYS
- FOLDING
- NERVOUSLY
- CLARA
- ACCESS
- PARTITION
- SPHERE
- PLANET
- EXCEPTIONAL
- LONELINESS
- CRAWLED
- VEGETATION
- DRIFT
- PANEL
- EQUIPMENT
- WITHDRAWN
- CATS
- SOUNDING
- RELEASED
- SPANIARDS
- WEARIED
- PROCLAIMED
- BEAUTIES
- ATTENTIONS
- TOAST
- REFERRED
- REWARDED
- ELDERLY
- ABNORMAL
- PERVERSE
- SMOOTHLY
- MISTAKES
- BEFOREHAND
- WITNESSES
- BODILY
- ENERGIES
- POSSUM
- SCARE
- RECOGNISE
- SCRAMBLED
- MAGNIFICENCE
- PARTIALLY
- LOVELINESS
- IMPELLED
- NOISY
- SEASONS
- INSOLENT
- SIMPLICITY
- DU
- TEARING
- HAPPENING
- BOYHOOD
- FLAMING
- HABITABLE
- INSUFFICIENT
- NOWHERE
- POLES
- TEMPERATURE
- LAPSE
- MISTOOK
- ALOFT
- ELEVATION
- PARTING
- DISAPPEAR
- EVILS
- DARKENED
- UTTERANCE
- DIES
- ABODE
- DELAWARES
- LANGUAGES
- SUBJECTED
- MUSING
- WRINKLED
- IMPOSING
- HUM
- SPLENDOR
- MAC
- CURLED
- EARN
- MUSED
- LITERARY
- SWEETNESS
- PERCHED
- EYEBROWS
- EXAGGERATED
- THURSDAY
- UNLOCKED
- BAGGAGE
- RAILING
- GANEM'S
- DAMASCUS
- USAGE
- DECLARING
- WROUGHT
- CRUELLY
- GRACEFULLY
- BUDS
- TUT
- INSECTS
- SCAMPERED
- CARDINAL
- HARDEST
- HOPPED
- GRAPE
- STEALING
- ACCUSE
- PEOPLES
- TRANQUIL
- RANDOM
- APPEARANCES
- TOLERABLY
- ECHO
- HALT
- EYELIDS
- EXCEPTING
- SULLEN
- UPWARDS
- BLINDLY
- CHANNELS
- WIGWAM
- DETAINED
- CONSTITUTE
- VACANT
- BUD
- ATTEMPTING
- SUNG
- ATTACKING
- WHISTLING
- STATELY
- SEEDS
- RESULTANT
- HATCH
- PA
- GAUNT
- PHOTOGRAPH
- TOOTH
- BANISHED
- UPSET
- EAGLE
- ABBEY
- PUBLICATION
- PETITION
- DETECTED
- REFRAIN
- TERRORS
- PROMOTE
- GARDENER
- PLANTATION
- SAMSON
- SKULL
- CUTTER
- AUDIBLE
- COATS
- BREADTH
- PREACH
- BLADE
- SHIELD
- TARLING
- LINED
- RIDERS
- CARING
- BABYLON
- SUBSTANTIAL
- JONES
- REMOVAL
- LUCAS
- TORCH
- CONTINUES
- CUB
- GEORGIA
- ANNETTE
- HEIGHTENED
- FEDERAL
- OWNERS
- WEDNESDAY
- CHATTERING
- BOAR
- OXEN
- BREECHES
- ENTREATIES
- REJOICED
- KNELT
- TREVILLE
- CHILDISH
- STEALTHILY
- CONVEY
- RESOLUTIONS
- FLINT
- MECHANICAL
- SWING
- OUTFIT
- LEWIS
- PRODUCTION
- YOKE
- DAMNED
- GRAMMAR
- SPY
- GENSERIC
- SENATE
- IMPERIAL
- UNDERWATER
- NAUTILUS'S
- PROCEEDS
- VIRGIN
- ESSENCE
- CHEAP
- GRATIFICATION
- SKI
- TROUBLESOME
- ONESELF
- MEASURED
- CULTIVATION
- VENZA
- CURLS
- MARQUIS
- DERONDA
- SUMMER'S
- CAB
- GLARE
- CREVICE
- CANYON
- FRENCHMEN
- LAMB
- STUDENT
- BLINDED
- TRANQUILLITY
- KINGDOMS
- SUPPOSITION
- KNEELING
- EXPEDIENT
- PENNSYLVANIA
- CHAMBERS
- INSOLENCE
- SELECT
- ARTERY
- ROSTOV
- MARY'S
- PROJECT
- RESIGNATION
- SPEEDY
- DECKS
- PRODUCTS
- DISTRIBUTION
- TANGLED
- COMMISSIONER
- LAMENTED
- FULFILLED
- MANHOOD
- VILLONA
- DOYLE
- BRIGHAM
- FUEL
- INVESTIGATION
- MAIDENS
- MAXWELL
- PACKET
- GUB
- FIRS
- CHANCELLOR
- SHASTA
- PHILIP'S
- FUNDEVOGEL
- JEFF'S
- INSURRECTION
- CRANES
- COULSON
- CARAVAN
- POSTMAN
- LOCH
- INVENTOR
- HENSHAW
- VERONICA
- DIETRICH
- SHALMANESER
- ASSYRIAN
- BECHAMEL
- SOUSSIO
- MINKS
- HEADLONG
- AWED
- RACHEL'S
- BEES
- GRASSY
- WILLOWS
- DIRT
- DISHES
- PRESERVES
- BRISKLY
- SETS
- TICKET
- SHABBY
- BRUSHED
- EXCUSED
- EXECUTIONER
- ASSURANCE
- OCCUPYING
- ELOQUENCE
- POLITENESS
- WOVEN
- INQUIRING
- HUDDLED
- STERNLY
- BUTLER
- FALTERED
- DISLIKED
- ORNAMENTED
- ARBITRARY
- FOOTING
- INVALID
- WARRANT
- VISIONS
- SHILLING
- WARBURTON
- CORRESPONDING
- PROPOSALS
- REPARATION
- AMSTERDAM
- ECONOMY
- GENERALS
- JOINT
- PUNISHED
- PATRIOT
- INSPIRING
- ALLY
- TWELFTH
- FANTASTIC
- TREATY
- FEAT
- SECRECY
- SECURING
- REMONSTRANCE
- ACCEPTANCE
- GUARANTEE
- ATTRIBUTES
- COMPOSE
- MOAN
- TOPIC
- DISTANCES
- RICHER
- CREED
- DISCUSS
- DRAWERS
- COPIES
- ECCENTRIC
- CLUMSY
- CULTIVATED
- TOUGH
- PRAISES
- SOMBRE
- REINS
- UNLIKE
- CONFIDED
- INDICATION
- DIVIDE
- FLOORS
- HANGS
- REEDS
- TOES
- AWHILE
- INABILITY
- IMPRESS
- LOUNGE
- PHYSICALLY
- REFRESHMENT
- COMIC
- ARTISTS
- POETIC
- MATURITY
- ADJUSTMENT
- IMPOSSIBILITY
- COURTS
- EVE
- NORTHWARD
- BLANKETS
- GRAHAM'S
- CONVENIENCE
- CHALLENGE
- RAW
- YEAR'S
- INTERPOSED
- PENSIVE
- TWIGS
- ACCUSATION
- IMPRISONMENT
- EDGES
- RHEUMATISM
- JELLY
- TIPS
- D
- SHEETS
- MERITS
- PLANT
- LUSTRE
- ALIGHTED
- SIGHS
- F
- N
- GRIEVED
- ABOMINABLE
- FESTIVAL
- MALICE
- ALMIGHTY
- PERSIAN
- PENETRATE
- SWEAT
- DESERVED
- VIRTUOUS
- UNJUST
- PENSION
- COMMIT
- CREEPING
- SITE
- BLAUSSER
- LL
- SEMI
- MASSACHUSETTS
- WISELY
- LAVA
- NATURE'S
- GRUMBLED
- DIG
- DESIGNED
- TRIALS
- RECEIPT
- PERSPIRATION
- RECEIVER
- PREFERENCE
- CORRUPT
- IMPRISONED
- LIGHTER
- COMPASS
- EXPENSES
- ANKLE
- ECHOES
- QUIETED
- CROUCHED
- TUBE
- WHIRLING
- PENETRATING
- NOBLES
- CEREMONIES
- PROPORTIONS
- ARDENT
- MESSAGES
- CORDIALLY
- LOYALTY
- PSYCHOTHERAPEUTIC
- DEPRIVE
- CRITICS
- STRUGGLES
- TYPICAL
- SUPPRESS
- PROBABILITY
- REFORM
- OL
- LANGUID
- INTENSELY
- QUIVERING
- RICHLY
- GARMENT
- INDISTINCT
- RESOLUTE
- HABITUAL
- CONJECTURE
- GREEDY
- APPROVAL
- INTOLERABLE
- LEND
- OMINOUS
- DANCERS
- CLUTCHED
- NIGH
- SHUTTING
- PLUNDER
- TENDERLY
- CURVE
- SCREEN
- TEMPERED
- INDEFINITE
- CRUST
- HINTED
- FRUITS
- HUMBLY
- HURON
- CAPTIVE
- BLANKET
- PRIVACY
- DELAWARE
- DEVOURED
- INHERITED
- MARGIN
- PATENT
- CORRECTED
- OAKS
- SLIPPERS
- ASCRIBED
- ROCKING
- WASHING
- PROFITS
- CUSTOMERS
- TUCKED
- MORTALS
- TOM'S
- IMPROVE
- LADD
- SHIRTS
- GLOWED
- CONVEYED
- GLEE
- LID
- SATIN
- BELIEVERS
- COMPLAIN
- CORDS
- INSENSIBLE
- ALLIED
- COLOSSAL
- NUT
- PRETENSIONS
- CORPSES
- SPIED
- ERRORS
- FURNACE
- SHAVE
- DEVILS
- WEB
- ACCORDANCE
- DISCOVERING
- WORLD'S
- ACCOMPANYING
- TENSION
- E
- RAPIDITY
- FEEDING
- JEMMY
- WHITENESS
- SCRAP
- BURY
- WARFARE
- BATTERY
- SWAYED
- RAPTURE
- HEART'S
- LOVELIEST
- CRESTY
- WEE
- SHRIEKED
- KICKED
- TOMMY
- LONGBILL
- SPOTTED
- FRANCISCO
- SALMON
- ASHORE
- CONSTRUCTED
- SIGNIFICANCE
- ASIA
- SOLOMON
- DISPLEASED
- SAFER
- CROWNS
- CREST
- HOSS
- SHAPELESS
- ASCENT
- FIEND
- BENEVOLENT
- NORTHANGER
- MODERATE
- VOLUNTARY
- CONTRADICT
- DIRECTING
- MELODY
- SHAWL
- FRIGHT
- BRUTALITY
- DESPAIRING
- ARAB
- DESCENDING
- CARGO
- MOANING
- STARE
- SOOTHING
- RESENTED
- KEENLY
- JACKSON
- WHISPERING
- HENCEFORTH
- DARKER
- ILLUSTRIOUS
- COMBATANTS
- TAX
- TROOPER
- DESTROYING
- REBEL
- DOST
- INFECTION
- GROVES
- STARVATION
- COMMUNITIES
- JEFFERSON
- SHREWD
- HIGHWAY
- PRETENCE
- TREACHERY
- DAGGER
- STROVE
- CORPS
- EXCELLENCY
- BUCKINGHAM
- SEALED
- PLANK
- MECHANICALLY
- RUSK
- WILDLY
- SHADED
- LOWERED
- PHENOMENON
- WHIRL
- RAILWAY
- POSITIONS
- MIRRORS
- BAGS
- INVASION
- WAGONS
- POPE
- FLEE
- TIGERS
- TANKS
- MOCKERY
- SACK
- CIRCULATION
- SPECTACLES
- CONTRIBUTED
- STEVENTON
- APPROPRIATED
- CONVICT
- SHOVED
- CURSED
- HARDSHIPS
- JORDAN
- FUGITIVES
- REFRESHING
- REPUBLICAN
- APPROBATION
- DESPISED
- SAINTS
- DISASTER
- SERPENT
- IRWINE
- THEY'D
- DIVORCE
- RHYMES
- PRINTING
- EDITOR
- LUNCHEON
- CAVITY
- DECREE
- SITUATIONS
- BANDS
- RUBBISH
- SPIDER
- AVAIL
- CONTAINS
- APE
- BLOODY
- DEJAH
- THORIS
- HARROW
- WINCHESTER
- SERMON
- DAM
- NOME
- JOURNAL
- CUBA
- BUSHY
- MALEAGANS
- WILT
- SLAY
- HORACE
- MING
- DARKENING
- WADED
- SWITCH
- FARLEY
- HO
- COMSTOCK
- INLAND
- FINN
- CHURCHILL
- OVEREND
- CAREY
- ORPHEUS
- ENGAGEMENTS
- SANCH
- BULLFROG
- LINA
- COLONY
- JUNIORS
- DISTORTED
- ABOLITION
- USHANT
- TWENTYMAN
- REMSEN
- HOMO
- LYRE
- GIDEON
- ASHUR
- PITT
- BOOKSTALL
- GROCER
- YORKE
- PATRICIUS
- GETTYSBURG
- DIZZY
- TWITCHED
- THERE'LL
- ADJUSTED
- AMAZING
- RISKS
- AGONIES
- FIR
- COPE
- MOODS
- EXPRESSIVE
- ELBOWS
- OVERBOARD
- PRETTIEST
- TIMIDLY
- BANKER
- TACIT
- RECOLLECTED
- BASKETS
- PROSPEROUS
- ASSASSIN
- PROCUREUR
- PRECAUTION
- SINCERELY
- CRIMINALS
- BENEDETTO
- HEAVEN'S
- CURLY
- MICHAEL
- IMITATE
- TRAGIC
- STACKPOLE
- TACT
- SERENELY
- FLICKERING
- ALBANY
- TREAD
- PATIENTS
- HUMILIATION
- HUMILITY
- DEFINITELY
- INSIGHT
- VETERAN
- BINDING
- FISHERIES
- BOTTOMS
- PENSIONARY
- GOVERNMENTS
- INDIES
- BLAKE
- ADMIRALTY
- PORTLAND
- STATEMENTS
- INSISTING
- DOCUMENT
- EXPECTATIONS
- DEALING
- CRUMBS
- PINT
- MULTITUDES
- INSTINCTS
- CLOVER
- RELATIONSHIP
- TYRANT
- FRED
- GROWLED
- BRICK
- HILLSIDE
- MUTTERING
- SNEERED
- FOUNTAINS
- NEGLECT
- IRRITATED
- QUICKENED
- PHAETON
- WHEELED
- SPECIMEN
- LOWERING
- TOUCHES
- BELLAH
- IMPLORED
- NECKS
- BEGGARS
- DAZZLED
- BECKONED
- DAIRY
- FEATHER
- JEGU
- BEAMING
- GAINING
- PICTURED
- SOLICITUDE
- EXERT
- FUNDAMENTAL
- HANDLED
- GAIT
- RECEIVES
- BAT
- DOINGS
- LITERALLY
- DISAPPEARANCE
- FUNDS
- COURTYARD
- WEARILY
- AUTOMOBILE
- FOREIGNER
- CHART
- EVENTUALLY
- VISITING
- UNNATURAL
- LEISURELY
- RETAINED
- CLERGYMAN
- GLEAMED
- GOVERN
- COUGH
- HANDY
- POPULARITY
- LAMENT
- QUITTING
- CONVERSING
- JUSTLY
- AVERSION
- PRETENDING
- DEFERENCE
- OPPOSE
- GREECE
- PRESCRIBED
- VIZIER
- KENNICOTT
- NAIL
- LAKES
- LUXURIES
- DICTATED
- LEAFY
- RECITAL
- CURTAINS
- DRIPPING
- THANKFUL
- DAWNED
- CHATEAU
- AFFLICTION
- SCRAPS
- BAH
- ACUTE
- OUTLINED
- GHASTLY
- MENACE
- DELIGHTS
- TACKLE
- STORED
- SUFFICED
- ADVANCES
- CLASPING
- HEARS
- RUGGED
- SILKS
- CONCEPTIONS
- ENGINES
- ELECTRICITY
- CONTRIBUTE
- CONCEALMENT
- SENTRY
- SQUEEZED
- SPLIT
- SURVEY
- CLUMP
- SPARKS
- ANCHOR
- TOSS
- RUSTY
- JOINING
- PLEDGE
- ENFORCED
- RESIGNED
- ROYALTY
- QUITTED
- MYSTIC
- FLUID
- OBJECTIONS
- INCLUDE
- CRAWLING
- REASONABLY
- AWAKEN
- PILGRIMAGE
- DAZZLING
- ARCHWAY
- FUTILE
- HARDENED
- EXPOSING
- SPECULATION
- OPPONENTS
- EXALTATION
- FAIN
- INGENUITY
- PRODIGIOUS
- NICELY
- MISUNDERSTOOD
- GUARDIAN
- APPEARING
- OVERHEARD
- DISSATISFIED
- BEDSIDE
- EARNESTNESS
- ATTRIBUTED
- HYPOTHESIS
- TERRESTRIAL
- PROVES
- GRAVITATION
- COOLLY
- SLEEPY
- BISCUIT
- PROVOKED
- PRESERVING
- ENTERTAIN
- SEVERITY
- FOE
- TERMED
- SUBSIDED
- RIVERBORO
- SIMPSON
- SHAN'T
- UNBOUNDED
- CONSCIENTIOUS
- CUSTOMER
- PRICES
- ARGUE
- IT'LL
- BEGINNINGS
- SLATE
- RASH
- INSTRUCTED
- FURS
- ADAM'S
- DISGRACEFUL
- FLUTTER
- ERRANT
- STREWN
- EMBRACING
- INTERRUPTING
- REVIVE
- AFRESH
- DESIRING
- ROSEBREAST
- PROJECTED
- BONY
- PORTRAITS
- FRENCHMAN
- BALANCED
- MARINE
- PLUCK
- CONCURRENCE
- TRUNKS
- SHALLOW
- NATURES
- DENIAL
- VOLUNTARILY
- UTILITY
- LUDICROUS
- TEMPT
- ALTERNATELY
- VALLEYS
- DISASTROUS
- VEILED
- RULING
- ROBBERS
- TALENTS
- O'ER
- ELMS
- DISCERN
- SEAMED
- SEVENTEENTH
- SCANTY
- VARYING
- WRECK
- BEE
- PRAISED
- FERNS
- SEPARATING
- STUFFED
- CHASED
- SLAP
- MOVES
- COOLNESS
- SWELLING
- ISOLATED
- STIFFLY
- SPLENDOUR
- TEND
- SNATCHED
- HOPELESSLY
- TUMBLE
- RAINBOW
- RUSTLING
- SHIFT
- CATHERINE'S
- DESIROUS
- ALACRITY
- ADVANTAGEOUS
- PRONOUNCE
- DECLINE
- ACTORS
- INCESSANTLY
- ANTONIA
- BATTERED
- STRIPED
- SHADES
- FASCINATION
- MOUSTACHE
- STREAMED
- POPULOUS
- BOWS
- SLID
- PALM
- BOYISH
- GLAMOUR
- CARTRIDGE
- COINS
- STRODE
- DINNERS
- PAT
- INDIGNANT
- FEARFULLY
- ELEGANCE
- ODDS
- CONQUER
- PROPOSE
- LANCE
- BRACELETS
- COWARDLY
- EXCESSIVE
- NEPHEW
- SUPERIORITY
- AZURE
- INHERITANCE
- SCHEMES
- BRETHREN
- REVERSE
- MATCHED
- ARISTOCRACY
- RENTS
- RECORDED
- MA'AMSELLE
- RESTAURANT
- POLITICIANS
- OPERA
- FINANCIAL
- FUNCTIONS
- SPORTING
- COMMENTED
- DECORATED
- MORN
- SMOTE
- VILE
- PLEASES
- OATHS
- ESCAPING
- BENEFACTOR
- DROPS
- GUARDSMEN
- INTOXICATION
- PISTOLS
- INTERPRETED
- CONFIDENTIAL
- FIDDLE
- REGAINED
- GASPING
- LIKENESS
- COACH
- DAME
- LEE
- BARLEY
- QUAINT
- CRASH
- DISCOVERIES
- BEHAVIOR
- JEROME
- ASSOCIATE
- EFFICIENCY
- FILM
- PENNY
- GILBERT
- IMPROVEMENTS
- EXCEEDING
- EMPRESS
- TEMPLES
- LARGEST
- WRITERS
- COFFIN
- FISHERMEN
- ARRIVING
- SHARKS
- WORTHLESS
- BELLY
- QUANTITIES
- WILLARD
- INTERNATIONAL
- SECTIONS
- INTERFERED
- ADDER
- BOUNDARY
- ARCHITECTURE
- REQUISITE
- RIVALRY
- IGNORED
- ANITA
- STRIPPED
- INTERPRETATION
- FLORA
- SHAW'S
- GLADNESS
- COTTAGERS
- MURDERERS
- RATTLE
- SNUFF
- INK
- DISPOSE
- DUCK
- AMMUNITION
- IDENTICAL
- MUSEUM
- SPUR
- FORTS
- ZONE
- STUNNED
- BLACKNESS
- ARTHUR'S
- WORRYING
- EMANCIPATION
- MISCHIEVOUS
- HEIGHTS
- HUNTERS
- ARTERIES
- MUSO
- LAWTON
- DUNWOODIE
- INTERVENTION
- GAYETY
- GRACES
- TERESA
- OX
- SENTIMENTS
- APPARITION
- EXCEED
- SHEVARDINO
- SINGER
- SHOWER
- MINNOW
- DISCHARGE
- DUCKS
- PICKETING
- PHILADELPHIA
- CELLS
- SUPERB
- COLONISTS
- SLEW
- TEXT
- STEPMOTHER
- MOUNTAINEER
- LATENT
- DILWORTHY
- BUFFALOES
- REGIME
- SYBIL
- FLIGHTS
- MORLEY
- ANNA
- BOURGEOIS
- BELVANE
- GOWER
- THROCKMORTON
- HORNBY
- SPRUCE
- SCIENTIST
- CONSUMPTION
- LOS
- ETERNITY
- HOVEL
- CUBAN
- BOULEVARD
- WEYMOUTH
- BOB
- ADELA
- KINRAID
- FAUCHELEVENT
- JESSICA
- CHUMS
- GARTER
- MENSTRUATION
- MENSTRUAL
- MENSTRUATING
- PHYSIC
- EAMES
- O'TOOLE
- MORGESON
- JOST
- GERTRUDE
- CLANCHARLIE
- WITHAN
- AGATHA
- BACH
- HAYNES
- FOX'S
- PEERS
- ADDERS
- AUGUSTA
- BROWNRIGG
- INTRICATE
- CARMODY
- STRAY
- PAINFULLY
- MELLOW
- ANGLES
- TELEGRAM
- SANDS
- COMFORTING
- OFFEND
- ACCOMPLISHMENT
- SHRINK
- REFLECTING
- SNUG
- PREPARATORY
- BRISK
- WHISTLED
- INFANCY
- BLISS
- CHEERED
- IMAGINATIONS
- MOMENTARILY
- IMPLY
- MERCILESS
- DISGRACED
- INDULGENCE
- INFECTED
- FOOTMAN
- PRECAUTIONS
- ASCENDED
- STRIKES
- DRAGOON
- STAIR
- FROCK
- THRILLING
- BEARDED
- SWEETHEART
- WOKE
- BAFFLED
- GLITTER
- SOWN
- PASSIVE
- DRUM
- HARBOUR
- DEALT
- CASUAL
- FOREMOST
- OFFENSIVE
- GRIEVANCES
- MISUNDERSTANDING
- NIGHTFALL
- UNIVERSALLY
- JUNCTION
- EXPERT
- MEDITERRANEAN
- CHAPTERS
- TEMPERAMENT
- BOLT
- CLERKS
- PERCH
- SIMMER
- PINCH
- BAKED
- SALAD
- SENIORS
- BEAMS
- DISFIGURED
- NOURISHMENT
- MEETS
- ABJECT
- RENDERING
- ENVELOPE
- JUSTIFY
- SHRUGGED
- KARA'S
- SURVEYED
- SECURELY
- OVERCOAT
- DOUBTFULLY
- DELIBERATION
- OCCUPANT
- DEL
- DAMN
- PROFOUNDLY
- PINS
- INFLICTED
- TOLERABLE
- HABITUALLY
- BORED
- GLIMPSES
- PIGS
- HEARTH
- CABBAGE
- BARBAIK
- HORSE'S
- TENDS
- FLEETING
- IMPLIED
- DISADVANTAGES
- RESTS
- PETTY
- MENTALLY
- BREED
- WIDER
- PENETRATED
- RESPONSIBILITIES
- CONCENTRATED
- ENDURING
- BUCK
- ADMINISTERED
- SENIOR
- BICYCLE
- BIRTHDAY
- WHEREBY
- BOWER
- HEREAFTER
- ULTIMATE
- SLUMBER
- ASCERTAINED
- CONVERSED
- RETAIN
- PEERED
- CONSPICUOUS
- BATHE
- ROBINSON
- DARESAY
- CONQUERED
- WEAVING
- DAZED
- SCRATCHED
- COLONIAL
- RAG
- FOREFINGER
- TOKEN
- RUB
- SAMPLE
- EXCELLENCE
- STOPS
- RETIRING
- TOILET
- RECKLESS
- RELATING
- CARESSES
- APPREHENDED
- SURPASSED
- SOLELY
- CONVERSE
- SACRIFICED
- SIGNIFY
- PLUNGE
- STALL
- MAJESTY'S
- VILLAIN
- BOLDNESS
- WIPE
- WINDY
- NURSING
- REARED
- ELEPHANT
- GOAT
- REFINEMENT
- INTEGRITY
- GINGERBREAD
- CAUTIOUS
- INSTITUTION
- ESTEEMED
- JERSEY
- NEGROES
- SETTLEMENT
- SLANTING
- RIDICULE
- UNLIMITED
- STUDDED
- BANG
- GRAB
- DECIDING
- IMMORTALITY
- DOME
- STARTS
- CHICKENS
- RELATIVELY
- GRADUATED
- IDENTITY
- TRUTHS
- MENACING
- LUMINOUS
- PLAYERS
- OBLIGE
- EXPLAINING
- AUNTS
- COSTUME
- ILLUSTRATION
- WHISKEY
- CURES
- GAINS
- FREED
- WARMER
- VAGUELY
- REALITIES
- PERCEPTION
- RIDDLE
- PANE
- PRICKED
- HAZE
- MERRIMENT
- ARDOR
- ETERNALLY
- FINELY
- FRESHNESS
- PATHETIC
- CLEARNESS
- PLAUSIBLE
- CONSISTENT
- RIGHTEOUSNESS
- BUSILY
- STROLL
- HARDY
- STROKED
- POSSESSIONS
- SHUDDER
- STROKING
- BRILLIANCY
- UNANIMOUSLY
- MANTLE
- WANDERINGS
- RADICAL
- PRINCESS'S
- ROWED
- PANGLOSS
- PRUDENT
- CAPTIVITY
- PLOT
- HOVERED
- ENDEAVORING
- BREEDING
- CHIEFS
- ANNOUNCE
- FORTIFIED
- WHITES
- KNIVES
- SCENTED
- FLATTERING
- FORESEE
- POWERFULLY
- LABORS
- CONSULTING
- PURSUITS
- CEDAR
- THANKSGIVING
- MEALS
- BARNS
- SUSAN
- TROTTED
- FITTING
- ASTONISHING
- BOUT
- DRIVES
- MORTIFICATION
- TURNIPS
- FLASHES
- YALE
- GASP
- INVADED
- JOURNEYS
- DEVOTE
- TYRANNY
- THIRDS
- ATTENTIVELY
- RECOVERING
- PROVISION
- WINKED
- MEEKLY
- CRAWL
- CIRCUIT
- TAVERN
- BLAMED
- SWITZERLAND
- EXPOSITION
- MECHANISM
- MORALLY
- ENDOWED
- MORALS
- HAPPIEST
- EXPRESSLY
- REPULSIVE
- COLDNESS
- MODES
- HONESTY
- GILDED
- ANCHORED
- CONSISTS
- WESTWARD
- NOTIONS
- DISTRESSED
- SINGULARLY
- FREIGHT
- RUSHES
- SLICES
- BELIEVES
- ONWARDS
- SENSELESS
- ASSAULT
- SPACES
- JO
- BULLY
- ROAMING
- OFTENER
- BARRIERS
- ALASKA
- DIAMETER
- BEAK
- GALLERIES
- OVERTOOK
- MUSKRAT
- WINGED
- TROT
- JAW
- THREATS
- AIRS
- POMP
- ANIMATION
- WARD
- IDLENESS
- ENDEAVOUR
- PITIED
- ALLEN'S
- BLUSHED
- EXPOSE
- COMMUNION
- DESIGNATED
- SUNKEN
- WHIPPED
- FISTS
- APPALLING
- AFT
- RIM
- JUNGLE
- STARBOARD
- DELIBERATE
- ILLUSIONS
- EXALTED
- ANOTHER'S
- EEL
- GAPING
- INTERRUPT
- SCRIPTURES
- CARLISLE
- DISGUISED
- KISSING
- POETS
- AUDACITY
- LODGINGS
- STROKES
- RECOURSE
- CONCEALING
- AMENDMENT
- KATY
- CANST
- FLOURISH
- HABITATION
- SINGLY
- LOSSES
- POSSESSOR
- BUTCHER
- OVERCAME
- OCCURRENCE
- CONTEMPLATION
- SIMILARLY
- INSECT
- TANNER
- GRIPPED
- KNAVE
- SAUCY
- THYSELF
- UNAWARE
- SIGHING
- TRAILING
- ENCOURAGING
- FIREPLACE
- RESUME
- LOUIS
- GLITTERED
- CONQUEROR
- STAG
- DISPUTE
- NOBLEST
- WONDROUS
- SHADOWY
- BRANDY
- VAULT
- DEJECTION
- VIGILANCE
- COCKE
- VIGOROUSLY
- ALFRED
- CONTEMPLATE
- SECONDLY
- HUSKY
- MASTS
- WHARF
- LORD'S
- URGING
- COLUMBUS
- PACED
- STAGES
- NERVE
- FOUNDATIONS
- BELISARIUS
- CHRISTIANITY
- ENCLOSURE
- DIVERS
- EXTRACT
- SCALES
- WOMEN'S
- BUYING
- ACHED
- STRIFE
- IMPROPER
- HANS
- PAIRS
- INSPECTION
- MAKERS
- SPECK
- VIVIDLY
- TOUR
- SLIDING
- IRONY
- FARTHEST
- CARLING'S
- OB
- DEAN
- CONTEMPORARIES
- APPROVE
- RATIONAL
- RESCUED
- DISTURBING
- ENDING
- PROSPECTS
- MURDEROUS
- BONNET
- SUNBEAMS
- EXCUSES
- OMITTED
- YOSEMITE
- TENAYA
- WESTMINSTER
- BUSIED
- BRIDE'S
- REFORMATION
- SUBURBS
- CELEBRATE
- CAMPS
- COIN
- HANDING
- RANCH
- PLAZA
- FOSSIL
- SHIP'S
- HEADING
- RACING
- BACON
- REPAIRS
- TOADS
- REMORSE
- HANDWRITING
- BUTCHER'S
- ABRUPT
- SUMMITS
- IMPENETRABLE
- PLUNGING
- SHEPHERDESS
- DIMLY
- YELLED
- MODESTY
- HUNGARIAN
- ROOSEVELT
- DESOLATION
- CONVERTED
- YELL
- RITES
- JAGGED
- ARUJI
- SITGREAVES
- SENOR
- HENS
- MUTINEERS
- MUSKET
- CAMBRIDGE
- POORER
- SHAVING
- BLAST
- KITTY
- APPROACHES
- PECULIARITIES
- PREJUDICE
- MANLY
- OPERATE
- PLASMOID
- OFFENSE
- VACATION
- GARRISON
- CAPRICE
- KAY
- MARVEL
- MAPS
- RUTH'S
- TONGUES
- SEGOUIN
- COLLECTOR
- DOCUMENTS
- SHAMEFUL
- PREACHER
- ACTOR
- WADMAN
- AVAILABLE
- PHINEAS
- RAYMOND
- CHURCHWARDEN
- TRENCHES
- RAWLE
- ALEXEY
- ALEXANDROVITCH
- COLCHIS
- BEAM
- PATSY'S
- HUMOROUS
- DADDY
- CLAIR
- CONGREGATION
- GANGWAY
- FUSS
- GODMOTHER
- BARRICADES
- INSURGENTS
- MOLIERE
- ROYLAKE
- THOR
- SYLVIE
- MASKERS
- CHELTENHAM
- TILBURY
- AYRTON
- NATSIR
- BANDITS
- COONSKIN
- LYNDE'S
- NOTABLE
- KNITTED
- TURNIP
- BUGGY
- IRISHMAN
- UNUSED
- SCOTIA
- EXPLANATIONS
- MOUTHFUL
- AMOROUS
- SCANDAL
- EUGENIE
- INSURE
- ACHING
- MUSE
- SHUDDERED
- PIERCING
- WICKEDNESS
- PASSIONATELY
- NESTS
- TOYS
- PAN
- OBLIGATION
- BOARDING
- FAITHLESS
- GOODWOOD
- RIBBONS
- HARMONIOUS
- MAINLY
- ENTERTAINING
- DYED
- INQUISITIVE
- FASTEN
- LONGEST
- ODIOUS
- DIET
- MATTERED
- SAILS
- MAZE
- OBLIGATIONS
- LIVID
- PRONE
- FLATTERED
- LESSER
- CONSTERNATION
- DEMANDING
- SALUTE
- OUTBREAK
- DOVER
- HUMILIATING
- FORCING
- SUCCESSES
- CONDUCTING
- NUMBERED
- AMBIGUOUS
- AFFIRM
- SESSION
- PRINCESSES
- DULY
- ASSENTED
- CHOPPED
- JUICE
- JOYS
- MATURE
- BETRAYING
- LOVE'S
- LIFE'S
- TRIVIAL
- AMBER
- RELISH
- CONSUMED
- REMNANTS
- INSERTED
- PRESUME
- PRETEXT
- BLEAK
- DILIGENCE
- SALARY
- APPEALING
- BUREAU
- LATCH
- FRAMEWORK
- ACCIDENTALLY
- RELUCTANTLY
- ADVISABLE
- DISAPPEARING
- ANNIVERSARY
- GALLOWS
- DANGLING
- GREEKS
- CONFERRED
- SCORCHED
- PEAR
- SURVIVE
- REMNANT
- EDIFICE
- HONOURS
- LANES
- NEEDLES
- TENDED
- HOSPITABLE
- DELAYED
- INDICATING
- RINGS
- BESOUGHT
- OBSTINACY
- ENVIED
- SPOILT
- LO
- COALS
- LASTING
- CENTERED
- WILLINGNESS
- SATISFYING
- STITCH
- EXPOSURE
- CUTS
- POSSESSING
- SMELLS
- BULK
- SYSTEMATIC
- TRACT
- EXPLORED
- MIRACLES
- VANISHING
- ENMITY
- DILEMMA
- SHARPER
- ALARMING
- UNSCRUPULOUS
- CONTROLLED
- FETCHED
- LESSENED
- DRAWS
- PEBBLE
- BANKERS
- BOWELS
- DISEASES
- TOE
- DOSE
- NOISES
- TISSUE
- ANNOYANCE
- PROMOTED
- STARVE
- DIDST
- SULTAN
- SCHEHERAZADE
- RELY
- BEFALLEN
- CREATOR
- CONFIDE
- REVEAL
- TRAITOR
- DOMINIONS
- REPENT
- CONTRADICTION
- FRONTIER
- MOUND
- PAW
- ALIEN
- RICHEST
- EXPANSE
- DES
- POSTS
- WOODED
- BASS
- FAVORABLY
- NECESSITIES
- LOGICAL
- ROUTINE
- SPACIOUS
- CONVERSATIONS
- BOASTED
- STOCKS
- DEPRIVED
- PIOUS
- RELIGIONS
- GENEROUSLY
- SLEEPS
- PAVED
- FESTIVITIES
- PHILOSOPHERS
- CREDITED
- CONVENT
- EDGED
- SHRIEKS
- TRANSFORMED
- SUICIDE
- MATRON
- DIALOGUE
- ROSALIND
- TEASING
- COMPETITION
- OCCURS
- SHAFT
- ARGUING
- FROZE
- BRIGHTER
- LURKING
- DOTTED
- PRINCE'S
- ENLISTED
- THANKING
- FIFTEENTH
- FAVORED
- ARNWOOD
- PRINCELY
- DISCRETION
- ELOQUENT
- OPIUM
- ELDERS
- CREATING
- PSYCHOLOGIST
- FACTORS
- SUPPRESSING
- DISPOSAL
- WHITISH
- POPPED
- LIPPERTY
- RAT
- MUSCLE
- TRANSPARENT
- ORNAMENT
- BALCONY
- CONTROVERSY
- CURRENTS
- RESOURCE
- VENT
- RESTRAINT
- FROWNING
- ACCENTS
- COMMONPLACE
- PARALYZED
- IMPORT
- POSTED
- BENTLEY
- FARMHOUSE
- STABLES
- JOVE
- SOLVE
- CEASING
- CLING
- DIVING
- PARROT
- AFLOAT
- FEEBLY
- FRANTIC
- HORRIBLY
- PIASTRES
- PRE
- STABBED
- UNNOTICED
- WATCHFUL
- INWARDLY
- NEIGHBOR
- GIRDLE
- HEREDITARY
- PETTICOATS
- WIGWAMS
- SHIFTING
- OFFERINGS
- SPIES
- SIGNIFIED
- EXCEEDED
- SPORTS
- PRECEDED
- ISSUING
- ALTERATION
- TURKEYS
- SUNRISE
- PARENT
- BUFF
- GORGEOUS
- PETTICOAT
- TRIMMED
- ALADDIN
- THEME
- SMASH
- SITS
- PICKS
- GRUDGE
- SPLASH
- LOOSENED
- RECREATION
- SWARMED
- IRENE
- TENNIS
- CHORUS
- JOKES
- TRUDGED
- SENSIBLY
- DISTRIBUTED
- GRIEVOUS
- ENGAGING
- HASTENING
- PROCLAMATION
- REPAIRED
- VIEWING
- DISDAIN
- CLASSIC
- SCAMP
- CLAW
- ANCIENTS
- ARISES
- MINGLE
- BITTERNESS
- PURITAN
- STOREROOM
- CARNIVAL
- IMPERFECT
- ACQUISITION
- SHAKESPEARE
- SUPER
- BRAVERY
- ROASTED
- STALK
- STALKED
- WATER'S
- DEMONSTRATION
- UNCOMMON
- NOTORIOUS
- ROY
- DETERMINING
- DEGRADED
- ADORNED
- TINGE
- EXCURSION
- COMPACT
- TREACHEROUS
- SUCCEEDING
- FAVOURED
- DIMENSIONS
- SPRAY
- DEVOUR
- RAGED
- CHOOSING
- CONVENTIONAL
- INCESSANT
- HAROLD
- SCORNED
- WHIRLED
- HARP
- LEAPT
- AIRY
- TRIUMPHANTLY
- SIDEWAYS
- CHUCKLING
- PET
- DEVICES
- THORNY
- MATES
- WORMS
- REDTAIL
- DARTING
- HOOKS
- PEWEE
- MUDDY
- TEETER
- SETTLERS
- SCREECHER
- JOHNNIE
- LICK
- EGOTISM
- FRAGRANCE
- EMBODIED
- GODDESS
- STRAIGHTWAY
- NASH
- TENDERFOOT
- MIDDAY
- FAMILIARITY
- AUTOMATIC
- SIDEWALK
- CRUSHING
- CONGRATULATE
- ASS
- WIRES
- ENTRUSTED
- CONFINEMENT
- VAULTED
- EAGLES
- ADVERTISED
- PROOFS
- MONUMENT
- SKETCH
- FULLERTON
- IRRITATION
- BRIGHTENED
- PERSUASION
- GENERAL'S
- CONTINUANCE
- PERFORMANCES
- HENRY'S
- COSTLY
- LILY'S
- ACHIEVEMENTS
- OBLIGING
- EQUILIBRIUM
- SLEIGH
- KANSAS
- BOOM
- CHOKING
- RECESSES
- FEARLESS
- GLIDED
- TRUTHFUL
- CONFOUND
- HERBS
- MIX
- LIQUOR
- CHILLED
- MILBURGH
- DOCK
- MINIATURE
- FORGIVENESS
- UNIT
- WAIL
- THIEVES
- CHIVALRY
- MOONLIT
- DROOPING
- FLATTERY
- DREAMILY
- SKIRTS
- MAGICAL
- FLOURISHING
- CONCLUSIONS
- CONTRIVANCE
- SPRUNG
- FONDNESS
- DEPENDENCE
- GALLANTRY
- FORTIFY
- VIGOUR
- DISAPPOINTMENTS
- COMPLY
- PEYTON
- PROFUSION
- BIRCH
- BALMY
- FORSAKE
- COMMUNICATING
- LIMB
- DEGRADATION
- OCCUPATIONS
- REMEDIES
- SUPPRESSION
- OBSCURITY
- DIMINISHED
- RESORT
- AGRICULTURE
- VALANCOURT
- DREAMT
- ATTENDANCE
- IMPRUDENT
- ASSUMING
- INJURE
- BLAND
- THIEF
- LUSCINDA
- TRESSES
- DAMSEL
- SINCERITY
- SWOON
- AIDED
- DESIGNS
- HOWLING
- PEACEFULLY
- CAULDRON
- SCREAMED
- EMINENCE
- MUSKETEER
- GUARDSMAN
- COMPANIES
- HOSTESS
- GOVERNESS
- MELTING
- FINISHING
- SUPPORTING
- MANIFESTATIONS
- DESCRIPTIONS
- EARL'S
- BARRELS
- STILLED
- CEDRIC
- LUMP
- SPANIARD
- DICK'S
- LATITUDE
- CAPTAINS
- POSTURE
- PLUS
- DESTINATION
- SPECIALIST
- ALIGHT
- SPAN
- MARKHAM
- SOLEMNITY
- ATTILA
- VICTORIES
- COMPEL
- TRADER
- SUMS
- MECCA
- CLUSTER
- CEYLON
- WIDTH
- FEARSOME
- SNAILS
- VALVES
- YEARLY
- TIES
- MUNICIPAL
- ARDENTLY
- UNDO
- GEAR
- UNDERGO
- MANOR
- CASSANDRA
- MONTHLY
- SCRAMBLING
- INDICATES
- AISLE
- EXECUTE
- RELAXED
- DALE'S
- COMPOSURE
- UNEASILY
- CUR
- SALON
- WARLIKE
- PERFUME
- STATEROOM
- COMMOTION
- MURMURING
- INERT
- MATRIMONY
- NOVELS
- PERCEPTIBLE
- KICK
- MIDWAY
- ARISING
- PLACID
- ADVENT
- BRIGHTNESS
- SKIES
- HUE
- SHAFTS
- SHROUDED
- DART
- FROSTY
- CHOIR
- CANDY
- PARISHES
- ANDREW'S
- CARTS
- APPREHENSIONS
- VICES
- OVERWHELMING
- MURPHY
- TWIST
- AUSPICIOUS
- SHIELDS
- HEAPED
- STRATA
- NARWHALE
- FARRAGUT
- OVERFLOWING
- DECISIVE
- DENYING
- BALD
- SARCASTIC
- CHIMNEYS
- PROTESTANT
- LYNNE
- DILL
- FISSURE
- MORNING'S
- MYRIADS
- PRINTER
- SWEETLY
- CLAPPING
- SWAYING
- SALLOW
- SHRIEKING
- NOVELTY
- PLUTO
- K
- CHESHIRE
- TIN
- REVEREND
- ASSOCIATES
- JUDICIOUS
- SPECIFIC
- LIVERY
- DISPERSED
- FORBID
- HISTORIES
- PIGEON
- PILLAR
- SCIENCES
- TOWERING
- BUTTONS
- LEAGUE
- JARS
- JEDDAK
- COMAS
- BLOCKED
- LOAN
- SLICE
- CRUISE
- BLACKENED
- RESPECTING
- MEMOIR
- TITLES
- TUTOR
- SCHOOLFELLOWS
- RAZOR
- STUPOR
- INFLAMMATION
- REMEMBERS
- CONSTRUCTION
- CABINS
- PETERSBURG
- NAPOLEON'S
- PLAYFUL
- ACCENTED
- KISSES
- HURRICANE
- MUTILATED
- ASSYRIA
- LOCALITY
- DECEASED
- MANTELISH
- PAL
- OKLAHOMA
- FURTHERMORE
- BUCCANEER
- ASSERT
- DOUGLAS
- SWEEPS
- ACQUIRE
- RUFFIANS
- GOBLET
- DINSMORE
- DAD
- TOW
- DUBLIN
- FLASHING
- MASKED
- VICKERS
- SCOUNDREL
- SIMIAN
- POLITICIAN
- ACTUATED
- WOODHOUSE
- HIGHBURY
- CORNY
- PUTNEY
- HOSKINS
- ANTENNAE
- METER
- PEAK
- POKING
- BLOUNT
- TRUMPETS
- PHILLIPS
- PREJUDICES
- ANNE'S
- JOSHUA
- PLAYMATES
- PULPIT
- PUGWASH
- BEARERS
- MINISTRY
- SURVEYING
- BRAG
- MARSH
- L'OLONNOIS
- LICKED
- PROPOSITIONS
- STURDY
- CHILLY
- CLUCK
- STICKEEN
- TOLLER
- COSETTE'S
- MIRIAM
- CONSTITUTED
- EBONY
- LOWESTOFT
- HARMON
- SOU
- SURREY
- BAILEY
- BINNY
- YORITOMO
- ZEPPELIN
- PUBLICAN
- MACMURDO
- SEYFFERT
- WHITLOCK
- SIDNEY
- STRUBLE
- MON
- TED
- DIPPED
- SEWING
- UNSEEN
- BRIDAL
- HUMMED
- MYRIAD
- MEEK
- RETREATING
- BIDDEN
- EVERYDAY
- NOVA
- BURNT
- CRISP
- ROBERT
- EJACULATED
- JOGGED
- NOTING
- ORPHANS
- ORDEAL
- PLUMP
- WITCH
- PROCESSES
- CONTEMPLATING
- OCCURRENCES
- LOYAL
- SHUTTERS
- INSULTING
- CALMNESS
- IMPOSTOR
- READS
- DEPRESSED
- REPULSED
- PLAINTIVE
- UNTRUE
- UMBRELLA
- EMBARKED
- EASIEST
- LIBERTIES
- CORRESPONDENT
- BREACH
- MIDDLING
- STROLLING
- AUTHORS
- GENTLEMAN'S
- EXCEPTIONS
- KINDLED
- CONTEMPTIBLE
- IMPERFECTLY
- PRELIMINARY
- MERLE
- ENLIGHTENED
- PANG
- COMMISSIONERS
- BOUNDARIES
- ADHERENTS
- AGREEMENT
- MAINTENANCE
- SOVEREIGNTY
- AYSCUE
- FLEETS
- PROTESTS
- WITT
- COLONIES
- CONVOY
- NORTHERLY
- SUFFERER
- INTRIGUE
- SWORN
- UNAVAILING
- INFORMING
- ALTERNATIVE
- PATRIOTIC
- DIP
- VINEGAR
- CORNS
- ENTRY
- INFINITELY
- ANEW
- CLOWN
- X
- RACK
- BALANCING
- FAVOURS
- BEASTLY
- CHEQUE
- CHARITABLE
- INVESTIGATIONS
- SCREWED
- FROWN
- PILLOWS
- MATERIALLY
- HAIRS
- BOOMING
- WARRANTED
- MASTERED
- PARCHMENT
- OUTLOOK
- GRATIFYING
- REGRETS
- MIDSUMMER
- REGISTERED
- ILLUSTRATED
- ROWING
- ACCOMPLISHMENTS
- VIGIL
- ABOUNDED
- CORAL
- ENTREAT
- HATCHED
- OVERJOYED
- CAVES
- REGARDLESS
- OVERNIGHT
- BESET
- ISSUES
- LIFETIME
- ESSENTIALLY
- SELFISHNESS
- SKIRMISH
- HEADSTRONG
- WHINING
- TABOO
- RELIEVING
- MARKING
- DISSATISFACTION
- INSISTS
- DISHONEST
- STEER
- SAVAREEN
- UNACCOUNTABLE
- SHORTEST
- ADJACENT
- DIGGERS
- SILAS
- DIVERTED
- EXPLORE
- BURIAL
- CONGENIAL
- INFLUENCED
- MISUNDERSTAND
- REDDISH
- CIRCLING
- BECKONING
- AUTOMATICALLY
- ENTANGLED
- CANDLES
- POEMS
- PAIL
- DISCOMFORT
- NEEDLESS
- WAXED
- DATES
- GROANS
- DEMONSTRATIONS
- EXPIRED
- FORTITUDE
- RESISTING
- CORD
- HEAL
- ACRE
- CUPS
- THREATEN
- ACHIEVE
- FAIREST
- INSTALLED
- MODELS
- RENOWNED
- ENDURANCE
- FLITTED
- EXPERTS
- SCORES
- EXCITEDLY
- FARMING
- SYSTEMS
- BRIAR
- SIGNATURE
- CONSOLED
- IMMEASURABLE
- BANKER'S
- GILT
- FUND
- SIGNATURES
- FREEZING
- INCREDULITY
- MA
- RESTRAINED
- TOWERS
- PINCHING
- COOLING
- STEAMBOAT
- BEATRICE
- WILDER
- HURTS
- SPARK
- INVESTIGATE
- IMAGINABLE
- FABRIC
- FEMALES
- TOLERATE
- SLIDE
- PERFECTED
- STATIONARY
- ELABORATE
- PRINCIPALLY
- CURVED
- PITS
- TERRIFYING
- RUSTLE
- THICKER
- BEWILDERMENT
- CONDE
- CALCULATE
- COUNTRYMEN
- CHALONER
- GRENVILLE
- HANDKERCHIEFS
- INDEBTED
- TIMIDITY
- TALLER
- RESIGN
- MOURNFULLY
- ARGUED
- HYPNOTIZATION
- HYPNOTIZER
- PAMPHLETS
- INSANE
- SUPERFICIAL
- SERVES
- STATUS
- HYPNOTIZE
- REMOVING
- SUCCESSIVE
- LABORATORY
- QUOTE
- DAYTIME
- ETHICAL
- STRENGTHEN
- OVERTHROW
- PERSISTENT
- SUPERSTITION
- THOROUGH
- ABSURDITY
- VARIETIES
- WINK
- SUBSEQUENTLY
- DROWNING
- GIDDY
- MATTRESS
- PILED
- GESTICULATING
- INCOMPLETE
- JOYOUSLY
- ENGROSSED
- FRENZY
- IMPRESSIVE
- ORDINARILY
- INDULGE
- UNCERTAINTY
- VICIOUS
- ELEVATED
- MULTIPLY
- CUSTOMS
- WEARS
- LINKS
- SUBJECTIVE
- STRESS
- ADOPT
- HESITATINGLY
- INLAID
- CLAPPED
- NETWORK
- INEXPLICABLE
- ORGANIZED
- EXTINCT
- SPECULATIONS
- AERIAL
- ZERO
- EARTH'S
- WITHERED
- TRANSPORTED
- RESENT
- DROWN
- PEEVISH
- UNDERGONE
- SENDS
- ENRICHED
- REAPPEARED
- MEDITATION
- LINGERED
- STAGGER
- TRUSTING
- FORLORN
- DEFECTS
- COMFORTS
- PLUNDERED
- SELECTION
- TRANSLATED
- APATHY
- MESSENGERS
- EXCLAMATIONS
- RENOWN
- CONSULTATION
- ELASTIC
- ANKLES
- PRESUMED
- BURSTING
- CHORD
- MAPLES
- SASH
- GATEWAY
- TOSSING
- SUPERHUMAN
- VENICE
- MONUMENTS
- HARRIET
- APPEALED
- CHIPS
- MILKING
- PANTED
- WICK
- HANDSOMELY
- APIECE
- GLIMMER
- CONNY
- RATTLED
- GYMNASIUM
- GREET
- BREATHLESSLY
- MASCULINE
- LUGGAGE
- EDNA
- STRAINING
- SHEW
- MULES
- SINGERS
- ROBIN'S
- POTATO
- TWIG
- PAVING
- SPLENDIDLY
- FARTHING
- BRUSSELS
- PROWESS
- CROSSES
- WRONGS
- COINCIDENCE
- EUROPEANS
- PRIVILEGED
- NOTICES
- CLOTHE
- DOMAIN
- SECONDARY
- CLOAKS
- ATTITUDES
- MOCK
- LASTLY
- SHORTER
- FLOODS
- RAVINE
- INTERVENING
- RESEMBLES
- FAMISHED
- NEUTRAL
- ERRONEOUS
- CANNONS
- SUNDAYS
- BISHOP
- DEMONSTRATED
- ABIDING
- CONCESSIONS
- HEROISM
- DISCREET
- BOOTY
- PITEOUS
- ENACTED
- PITILESS
- WRECKED
- DOLLY
- NIBBLING
- RESOLUTELY
- ASSURING
- ADAPT
- GRINDING
- TRAPS
- SUPERNATURAL
- SPRINKLED
- CHAT
- COMBINATIONS
- ROBES
- LUXURIANT
- APOLLO
- IVY
- D'YOU
- FOREMAN
- TAME
- STRAP
- GALLOP
- MINER
- SPRAWLING
- LIAR
- GRINNING
- BIN
- CONTEMPTUOUS
- ENCAMPED
- ROAST
- SPOON
- UNDERGROUND
- TORMENT
- LAGREE
- REASSURED
- STRICTEST
- LUCKILY
- SILL
- REJOIN
- CIRCLED
- LOVER'S
- CHEERFULNESS
- MORLAND'S
- UNAFFECTED
- TETE
- HEIRESS
- UNFRIENDLY
- OPPOSING
- STILLNESS
- FRIVOLOUS
- WORSHIPPED
- DIFFERING
- D'ARNAULT
- OMAHA
- DRINKS
- OBEDIENT
- KEYBOARD
- LENA
- TAPPED
- HAIRY
- OUTRAGE
- HYSTERICAL
- THICKETS
- OVERFLOWED
- GRAVES
- DERISION
- CLOUDED
- CANE
- FELLOWSHIP
- GREED
- DISCOURAGEMENT
- FULLER
- PAINTER
- HYMN
- YEARNING
- BUCKET
- EXTRACTED
- ODETTE
- UPTURNED
- UNHAPPINESS
- COMPREHENSIVE
- HOARSELY
- GWENDOLEN'S
- ROUSE
- BEAD
- TORTURES
- THIRTEENTH
- WEIRD
- MESSIAH
- EXCHANGING
- TAXES
- MYTH
- NECKLACE
- SARA
- PIE
- GAP
- CLOCKS
- AFFECTATION
- DISCRIMINATION
- THEFT
- INVITING
- CURTAIN
- COMPETITORS
- REDOUBLED
- VILLAINS
- ESTIMATION
- LONGBOURN
- AYE
- AFFECTIONATELY
- MANIFESTED
- PESTILENCE
- REFUSING
- SHINES
- NURSED
- DELUGE
- EMIGRANTS
- EARTHQUAKE
- MENACED
- EMPLOYING
- ROLLS
- DWELLINGS
- VEILS
- PERSEVERANCE
- COMPROMISING
- CANS
- VERILY
- UNARMED
- RAP
- CUDGEL
- SPUN
- SENORA
- LOWLY
- BOUNDED
- DAYBREAK
- ASSAILED
- SIERRA
- DISMALLY
- BEAMED
- INCLINE
- ARAMIS
- APPRENTICE
- HEEDED
- SALUTATION
- COMPLAISANCE
- LULL
- UNEARTHLY
- ACCOSTED
- TENDING
- SEW
- INCONVENIENCE
- MAKER
- PREDECESSOR
- INCREDULOUS
- SEIZING
- DESCRIBES
- FIXING
- INCIDENTALLY
- FACULTIES
- CHILDREN'S
- PROPHETS
- RECITED
- FOWL
- PRIVY
- RODS
- CLUTCH
- DIVERSION
- GAYLY
- GANG
- BENJAMIN
- FRUITLESS
- ILLUMINATED
- STATISTICS
- ORGANISM
- REGRESSION
- CONTROLS
- ACCURACY
- RABBLE
- NIAGARA
- SUSTAIN
- PREVAILING
- TELLER
- TRADING
- CONQUERING
- INSULTS
- PREACHING
- REENTERED
- LAVISHED
- MANNED
- AGGRESSIVE
- MEASURING
- SERIOUSNESS
- RIPPING
- UNCLEAN
- CARPENTER
- PLANTING
- PREVENTS
- VALUED
- PLANKS
- STOWED
- SEPARATELY
- BINDINGS
- EXCLUSIVELY
- MONSTERS
- RECOMMENDATION
- ALTITUDE
- VIOLETS
- PATRON
- COMBINE
- CLERGY
- PECULIARITY
- QUALIFIED
- WISTFUL
- CLENCHED
- SEALS
- DISCLOSED
- ORE
- PLUCKED
- RANKIN
- THEATER
- TECHNICAL
- NIMBLE
- SMOTHERED
- RESPECTIVE
- CROUCHING
- ADVANCEMENT
- FORK
- MUSICIANS
- KICKING
- SCHOLAR
- STINGS
- OUTLINES
- REPETITION
- LOWDER
- IMPART
- VISIBLY
- BRAVEST
- GULLS
- HEDGES
- HOPEFUL
- REFRESH
- DEFIANT
- RESERVATION
- COMPETITOR
- SURVIVED
- CUPBOARD
- VANKA
- SOUR
- WEEKLY
- JUSTICES
- OVERTAKE
- SOOTHINGLY
- MOTHERLY
- OFFICIALLY
- GRANDMA
- PADDLE
- LOCKE
- VAINLY
- MILITIA
- ASSYRIANS
- ARCHERS
- DIVERSE
- SIZED
- ADMIRINGLY
- INTENDING
- HIPS
- THREAT
- DECEIVING
- ANNOUNCEMENT
- DECKED
- TRAY
- WISEST
- WISTING
- CREVASSES
- CAMPED
- UNDULATING
- SORROWFULLY
- OSTROG
- LINCOLN'S
- FOAM
- ASANO
- COMMENTS
- RIVALS
- REAP
- BATHS
- ODE
- FITNESS
- ERIE
- EASTWARD
- CONFRONTED
- STAIN
- SUFFICE
- WAX
- FOOTPRINTS
- BRISTLING
- SINGLETON
- COMPREHENDED
- PETITIONS
- AMISS
- DRUMS
- FEROCITY
- LIMP
- EXPLODED
- CHIEFTAIN
- HISPANIOLA
- MORGAN
- GUNN
- SPIT
- SACKS
- INTELLIGIBLE
- THEOLOGIANS
- ARTERIAL
- DISPERSE
- EXPAND
- EXTREMITIES
- WEAKER
- ECCLESIASTICAL
- FRATERNITY
- SOARING
- BRIGANDS
- DRON
- KARP
- DELIVERANCE
- DEVISED
- FORCIBLY
- GUARDING
- IDENTIFIED
- INDEFINITELY
- PORK
- WINNER
- FAYLE
- MARYLAND
- LEGALLY
- DEFIANCE
- OVERTURNED
- RELIED
- SNARES
- HONOURABLE
- ESQUEMELING
- EARNED
- KNIGHTHOOD
- SPEARS
- HOARY
- PURSUERS
- ARMORED
- BULLETS
- PALACES
- FLAGS
- DETACHED
- SHERIDAN
- REFEREE
- TEMPTING
- CONVINCING
- RATIONS
- PROPORTIONED
- MONTERO
- POLISH
- INCOMES
- HEROD'S
- ANTONY
- CORDIALS
- COVERS
- FORSAKEN
- BONDAGE
- PILOT'S
- FOAMING
- LABORERS
- TESTS
- APES
- EVOLUTION
- PENALTY
- TREASURY
- DAUBENY
- KNIGHTLEY
- WHOLESOME
- SMITH'S
- REPROACHED
- LICENSE
- ALDERMEN
- HINGES
- SEXES
- CARBON
- OXYGEN
- GUIDANCE
- COMPELLING
- BARODIA
- ELIZABETH'S
- MUFF
- THORNDYKE
- HOP
- ELMHURST
- WRIGHT
- STABILITY
- ARRAY
- CRAGS
- STUPENDOUS
- CARDBOARD
- ABUSES
- GORGE
- SURGERY
- DESKS
- ADMIRERS
- AFAR
- PROFESSORS
- CARTRIDGES
- LANTERNS
- PRESTIGE
- FERFITCHKIN
- DANIEL
- COMPANIONSHIP
- BYRNE
- TYRKER
- ISAAC
- FOSTERS
- VALJEAN
- MEDITATED
- HOGGLESTOCK
- UTENSILS
- PHI
- SIGMA
- TAU
- MARIAN
- ARDOUR
- BAKERS
- BABYLONIA
- INVERASHIEL
- FOOTBALL
- WARE
- CROSBIE
- SOUTHARD
- IDOL
- COD
- JURISPRUDENCE
- MICKY
- GEORGIE
- ADVERTISER
- JUDAH
- MISER
- ADVERTISERS
- COLLINGWOOD
- JACKAL
- WANDS
- PHOTOPLAY
- ZENA
- GEMMEN
- ECONOMIC
- POLDIE
- CHILDS
- LUIGI
- TILDA
- ANDY
- CASIMIR
- BERENGARIA
- OEDIPUS
- LEGISLATIVE
- BROOKS
- BORDERED
- TANGLE
- CURVES
- ADOPTING
- ARABS
- WELLS
- REVIVING
- ORCHARDS
- SUNDRY
- HEREABOUTS
- HELPLESSLY
- DUNNO
- IMAGINING
- PROWLING
- SLOWER
- ADVISER
- PECUNIARY
- CATALOGUE
- FIDELITY
- RECOLLECTIONS
- AWAIT
- EXAGGERATE
- DEATHS
- TRAMPLING
- DISHONOR
- FLUSHING
- LAUGHS
- JENKINS
- HEARTLESS
- APOLOGETICALLY
- BETHOUGHT
- GARDENCOURT
- LIGHTEST
- ISABEL'S
- RECOMMENDING
- CONTRIBUTION
- ENQUIRED
- MOULD
- SYMPATHIES
- TRANSFER
- BOUGH
- AUNT'S
- AMBITIONS
- REVIVAL
- ENGLISHMEN
- STATESMAN
- REFUSAL
- AVOWED
- NAVIGATION
- PROHIBITION
- AFFRONT
- DEPUTY
- ATHWART
- AVERSE
- SUCCESSOR
- CONFERENCES
- COMPENSATION
- REJECT
- UNPRECEDENTED
- CLEVERNESS
- ILLEGAL
- PROVING
- WITHHOLD
- LEMON
- PARSLEY
- ONION
- TABLESPOONFUL
- SEASONED
- BENEVOLENCE
- ENCHANTMENT
- LINGERING
- AUGHT
- SURPASSING
- ACCIDENTAL
- ENCHANTMENTS
- PUREST
- HEATS
- PERCEPTIONS
- VERSE
- GATHERCOLE
- POTENT
- DOWNCAST
- SUGGESTS
- PLANE
- DETAIN
- RUG
- SMILINGLY
- SOCKET
- GLOSSY
- DISCERNMENT
- SYMPATHETICALLY
- CREEPS
- SOILED
- GUNPOWDER
- SHINY
- TIERRA
- FUEGO
- SLAMMED
- OBLONG
- ALLUSIONS
- DEFENDING
- SENTIMENTAL
- EXTRAVAGANCE
- TREATING
- EXERCISED
- INDIFFERENTLY
- HEATED
- HUMBUG
- INDISPENSABLE
- ABILITIES
- UNGRATEFUL
- RURAL
- PARKS
- POSTPONE
- FEASTS
- FLARE
- EARNING
- GALLOPED
- CROAKED
- PARTNERS
- MANE
- WONDERINGLY
- COURTSHIP
- COVETED
- HAZY
- TRAITS
- ATTRACTS
- COMPROMISE
- ROLE
- WEIGH
- DELICATELY
- SPOUSE
- CRAVING
- RESENTFUL
- COUPLES
- GRUMBLING
- PREVENTING
- HUSBANDS
- UNCHANGED
- BERTH
- MOSQUITO
- ALDERMAN
- BASEBALL
- SKIRT
- BONNETS
- EASTER
- PERRY
- RIDDEN
- SCOTT
- SUBSCRIBED
- SHIVERING
- UNDERBRUSH
- CROWDING
- LESSEN
- RAWLINS
- BOBBY'S
- CEDARS
- LOOMED
- STUMBLED
- DISPLAYING
- REALIZATION
- MANTEL
- TRANSIENT
- ECONOMICAL
- RESTORATIVE
- OPERATED
- GREASE
- ODOR
- REPAST
- LAMENTATIONS
- PREVAIL
- INTERRUPTIONS
- DISTRESSING
- PITEOUSLY
- STEAD
- FORTHWITH
- RELATES
- COMMITTING
- AVENGE
- ACKNOWLEDGMENT
- GRECIAN
- APPLICATIONS
- POSTERITY
- COMMENDATION
- AVARICIOUS
- DECAYING
- BEWARE
- HUGH
- VIDA
- HOBBLED
- CHAMP
- ASSISTANTS
- WATCHMAN
- DISCHARGED
- BLOUSE
- ESTABLISHING
- INSULTED
- HINTS
- HUDSON
- PITCHER
- BROADWAY
- LOCATION
- WHOLESALE
- LINKED
- LACKED
- STREAK
- CASUALLY
- PROVINCIAL
- SPRAWLED
- SOOT
- MANUFACTURERS
- TASKS
- BROWNISH
- DOORWAYS
- CORNERED
- USHERED
- RENAUD
- EFFICACY
- FOCUS
- REMINDS
- GRACIOUSLY
- DISPUTED
- WAGER
- FATHOM
- GLISTENING
- UNWELCOME
- RATTLING
- POLICEMEN
- THRUSTING
- EXTERIOR
- LIGHTING
- PADDED
- CUSHIONS
- CROPS
- TELEGRAPHED
- PAYS
- LUSTER
- HIP
- PRACTICES
- SOLVED
- PREVALENT
- FINS
- MUZZLE
- GREENISH
- PRINTS
- BUCKLE
- STRAND
- RETINUE
- YON
- GOWNS
- RESIDED
- TREATS
- COOPER
- MATURED
- EXPENDED
- DISINTERESTED
- MARRIAGES
- PERSONAGES
- PSYCHOTHERAPY
- HYPNOTIST
- MAGNETIC
- AGENCY
- OUTCOME
- ORGANIC
- PURPOSIVE
- CAUSAL
- TENDENCIES
- VICTORIOUS
- MORPHINE
- COURAGEOUS
- GRAINS
- UNFAIR
- MAXIMS
- UNTOUCHED
- SKILLFUL
- INJURIES
- IGNORING
- INVOLVES
- INTRUSION
- LABEL
- EATERS
- HOPS
- HELLO
- POKED
- EATS
- RUBBER
- CUNNINGLY
- THERMOMETER
- REALISED
- BLOTTED
- PROJECTING
- SIDED
- MERITED
- GENTLENESS
- SENTENCES
- EXACTING
- IMMINENT
- SCRUPLE
- STAGGERING
- LAMENTABLE
- DISREGARDED
- PROVOCATION
- DRILY
- ORIGINATED
- WANING
- PEERAGE
- FARMS
- ACQUIRING
- WEAKENED
- LEDGE
- GRANDSON
- CAVITIES
- ASTONISH
- COMPANION'S
- DELIVERING
- FOOTPATH
- SCOLDING
- ASCEND
- STAMP
- ROVING
- CANOPY
- RUDENESS
- FARED
- DISASTERS
- MISERIES
- DETESTABLE
- CONSULT
- STRANGLED
- MISERABLY
- PREFERABLE
- EDEN
- INCURRED
- VOLITION
- UNCONTROLLABLE
- BURDENS
- ASSEMBLE
- CONVERT
- ENCAMPMENT
- PAUSING
- AUSTERE
- ALLUSION
- REGAIN
- TIRE
- PROFITABLE
- ASSEMBLAGE
- VALIANT
- INFLAMED
- TRIPS
- STATING
- MILLTOWN
- SUBMERGED
- BEWILDERING
- WHIMSICAL
- QUERIED
- ALLUDING
- ARABIAN
- TRAVELER
- FLANNEL
- GOIN
- PHILANTHROPY
- FLOWN
- LOYALLY
- SQUEEZE
- HEARSE
- PEAS
- MYSTERIOUSLY
- OWNS
- LANGUIDLY
- QUOTED
- CHIRPED
- EUPHRATES
- BAGDAD
- DIMINISHING
- TRIBUNAL
- INVESTED
- PROSTRATED
- SANCTUARY
- AMENDS
- JEWELLERS
- SYNDIC'S
- OCCASIONED
- CONSORT
- CAMEL
- MANAGING
- DECEITFUL
- INDISPOSED
- PERSECUTED
- ACCIDENTS
- CHARMED
- SCOLD
- GROSBEAK
- EQUIVALENT
- ANGULAR
- MARAUDING
- BARRACKS
- PONDERED
- INTELLECTUALLY
- UNSELFISH
- PREACHED
- SPECIFIED
- BARBARITY
- FLOWS
- HOMER
- CORDIALITY
- DEFECTIVE
- DIVINITY
- BRUISED
- INAUDIBLE
- CHARCOAL
- BREASTS
- IMITATED
- FITZ
- STOUTLY
- SOAKED
- DECEIT
- STONY
- PROMONTORY
- ALLUDED
- TONS
- DISCORDANT
- COILED
- CARCASS
- GALES
- WILFUL
- FAILURES
- INDOORS
- DELUSION
- SHRUNK
- POOLS
- REVOLVER
- ACORNS
- BLENDED
- FLICKER
- FELLOW'S
- TADPOLES
- ELEVENTH
- SACRIFICING
- WHISKED
- PREFERMENT
- PINCHED
- PROPHECY
- TEMPTATIONS
- GOSSIPING
- PEE
- DOWNY
- DESERTS
- NORTHWEST
- BUNDLES
- OPENINGS
- SILENCED
- GNAWED
- THRILLS
- TANGIBLE
- VARIOUSLY
- PAUSES
- CURL
- PAGAN
- BARD
- HAM
- UN
- RECOGNIZING
- POKER
- SIPPED
- DURABLE
- PIEBALD
- BRUISES
- SPYING
- PROFILE
- CYNICISM
- BLUSHES
- BRAGGING
- PROTESTATIONS
- VOWED
- SMELLING
- SLUNG
- FLOWERY
- UNDERTOOK
- CONTEMPLATED
- ENTREATED
- CONJECTURED
- FABLE
- MENDING
- SOFTENED
- FINERY
- INDUSTRIOUS
- LANGUOR
- ELAPSED
- UNBROKEN
- EXCELLENCIES
- SANCTIONED
- PARENTAL
- REJOICE
- DECEPTION
- TILNEY'S
- AVARICE
- THORPE'S
- CONSTRAINED
- CONNECTIONS
- RELUCTANCE
- CLIMAX
- ANDERSON
- MAMMY
- UPLIFTED
- MELODIES
- TRAINS
- BARREN
- DISABLED
- YAWNING
- ENTERPRISES
- SEAMEN
- HUTS
- RIBBON
- STEM
- STEADFAST
- BENCHES
- BLAZED
- SHAVED
- DRUMMED
- PRECISION
- GLIDING
- FRAGMENT
- PLANES
- SQUARELY
- DRAUGHT
- ANCHOVIES
- BLADES
- WILLED
- CATCHES
- FLAP
- OCUMPAUGH
- CORRECTLY
- HAGGARD
- GOLF
- GRUNT
- DONKEY
- THIRSTY
- RIOT
- BRIDEGROOM
- COOKIES
- EMERALD
- FAIRYLAND
- YARN
- ELINOR
- AFFECTS
- POSSESSES
- PUMP
- ENLIGHTEN
- ARGYLE
- BILLIARD
- THRO
- CHAMPIONS
- LISTS
- DOUBTING
- HELMETS
- CONFORMITY
- UNDAUNTED
- SORELY
- POORLY
- SENTINELS
- DISMOUNTED
- ANNIHILATED
- LASHING
- TOTTERING
- STREAKED
- ALPS
- FRAIL
- DARES
- SCRATCH
- WAILING
- PERCHANCE
- THORN
- FILLS
- WINDSOR
- STARVING
- RUMOUR
- EXERTIONS
- MORTALITY
- CAVALIERS
- DUNGEON
- IMPOSE
- DINED
- DIXIE
- PERFORMER
- GIFTED
- ARRAYED
- DEWY
- NE'ER
- GARB
- NOBLY
- CHRISTIANS
- WANTON
- LAWFUL
- CURSES
- SCOLDED
- COOLER
- MERRILY
- SOUNDLY
- COCK
- FOREFATHERS
- DUEL
- BERNAJOUX
- SHARPENED
- QUARRELS
- ASSAULTED
- TENANT
- DEUCE
- HEAVED
- GUINEA
- PERNICIOUS
- INSTINCTIVE
- SLY
- SUGGESTING
- DOLL
- BROOM
- CHASTE
- ROARS
- AKIN
- VIENNA
- EXTENDS
- DEAFENING
- CRACKLED
- DIN
- NOSED
- LITTER
- VENTURING
- LOOKOUT
- BRACING
- PERSON'S
- GRANDPAPA
- WILDEST
- NURSES
- ACTRESS
- FLAW
- U
- AINT
- GEOGRAPHY
- HYSTERICS
- HARNESSED
- SUSPENSION
- CHEERY
- MULTIPLIED
- MASTERY
- DEALINGS
- COMPREHENSION
- DILAPIDATED
- MELT
- HERMIT
- COLLECTING
- ARABIA
- TRADERS
- DREAMER
- BEARINGS
- PENINSULA
- ARONNAX
- NETS
- DEPENDING
- ISOLATION
- ANNUALLY
- UNBEARABLE
- SANE
- RIGHTEOUS
- INVOLVE
- WHIT
- CHRISTIANIA
- RUNNERS
- FEWER
- WEAKEN
- SLEDGING
- REINDEER
- ESKIMO
- MITS
- STAMPING
- ALLOWS
- DEPOT
- SUSCEPTIBLE
- BORDERS
- NOOK
- AUSTEN
- PASTURE
- CHAPLAIN
- GENTRY
- OBLIVION
- INTERMINABLE
- TAILOR
- CARELESSNESS
- GRADUAL
- EJECTED
- NUNS
- GUIDING
- AIMED
- SLEEK
- ELUDED
- UNOBSERVED
- CAFE
- COURTEOUSLY
- DEVIL'S
- PACKAGES
- OVAL
- STARRY
- HAHN
- RANCE
- ANNOY
- GAG
- PROMOTION
- LEADERSHIP
- INVITATIONS
- WAITERS
- INCONVENIENCES
- BESEECHING
- CALICO
- OWES
- RESULTING
- HISTORIC
- SYMBOLS
- SHOWERED
- DOZENS
- RAINS
- SHELVES
- HIRE
- HARDSHIP
- SHILLINGS
- MERCIFUL
- MILLY
- INTERPRET
- STRINGHAM'S
- INDIRECTLY
- PROMINENCE
- CAPRICES
- NUMBERLESS
- ZIGZAG
- WHEELING
- JOSEPH'S
- STRAGGLING
- RASCALS
- STRONGHOLD
- CAPTIVES
- CONSISTING
- PALL
- COOKS
- FLOGGED
- CONVULSIVELY
- JAMES'S
- DISTEMPER
- DISAPPOINT
- MONARCHY
- ALLEGED
- DIS
- SHAGGY
- DESTITUTE
- REGIMENTS
- AUSTRIA
- JURISDICTION
- SCOTCH
- HOLOFERNES
- PROSTRATE
- INHABIT
- VELOCITY
- VIA
- ARDUOUS
- CLASSIFICATION
- ADMITTING
- TORRENTS
- NARROWS
- HATES
- POUNDING
- STUPIDLY
- FRINGES
- CONTEMPTUOUSLY
- PROFFERED
- BARTLE
- TWELVEMONTH
- EXQUISITELY
- UNDERTONE
- CONGRATULATING
- VANE
- OUTWARDLY
- MEND
- BOASTING
- HANSSEN
- CONTRIBUTIONS
- CRUDE
- BLEED
- PATTED
- TYRANTS
- INSTRUMENTAL
- PLATFORMS
- DUSTY
- DEFIANTLY
- MURMURS
- DOVE
- ERA
- LEGENDS
- HIERARCHY
- CHEESES
- RICHNESS
- IROQUOIS
- KINDRED
- ANTAGONIST
- KINDER
- MIRACULOUS
- VILLAGERS
- PRECIPICES
- BOUNDING
- FLITTING
- MEANEST
- MAXIM
- HONORED
- TREMULOUS
- CHAISE
- GUIDES
- PERCEIVES
- CAGES
- WHEREABOUTS
- DRAPERY
- RELIC
- CONCLUSIVE
- THARKS
- DAK
- KOSIS
- STATIONS
- SAB
- PLEA
- CRIPPLE
- PLATEAU
- ISLE
- SURF
- PRIMARY
- ACCURATELY
- PLANETS
- DENOMINATED
- VENOUS
- MODERATELY
- ADEQUATE
- TREATISE
- FANTASY
- PUBLISH
- PRACTISING
- SCHOLARS
- FEE
- MONKS
- SUBSCRIPTION
- WARDROBE
- UNCEASING
- TIMOKHIN
- AMBULANCE
- TI
- HALO
- ROSTOVS
- JEREMIAH
- CELEBRATION
- NAILED
- KUTUZOV
- COSSACK
- FORTHCOMING
- HEARERS
- BEDDING
- LAUGHINGLY
- SWEDISH
- NEARING
- SIZES
- GNARLED
- FULFILL
- ROBERTS
- ALTERNATING
- TANK
- VICKSBURG
- COPIED
- INDICATIONS
- ESPECIAL
- LEARNS
- HUMORIST
- CRECHE
- QUILLAN
- ELECTRICAL
- INDIANA
- NOLAN
- FOURTEENTH
- INHUMAN
- DISCLOSE
- APPRECIATIVE
- BESTOW
- PROGRESSIVE
- TRANSFIGURED
- CONSECRATED
- UNOCCUPIED
- ENCOUNTERING
- OWAIN
- ELSIE'S
- ADELAIDE
- CRUMBLING
- ATHLETE
- SPURRED
- PARCHED
- DECREED
- REASONED
- ETIQUETTE
- GIT
- RIVIERE
- STEERED
- INCONSISTENT
- WADMAN'S
- SAUSAGES
- MILBY
- ANTIGONUS
- SOSIUS
- EXCURSIONS
- LABORED
- MARGUERITE'S
- STUNG
- BALLAST
- MAURICE
- MUSKETS
- STAPLES
- D'YE
- VERITABLE
- DRIFTS
- PIONEER
- IMMIGRANTS
- FERRY
- GRADUATES
- MEXICAN
- LINK
- STRUTTED
- THEREWITH
- WHICHEVER
- LAUDONNIERE
- ESCORTED
- ASTOUNDED
- RANSOM
- TANKERVILLE
- BUNCE
- BAKER
- ELECTORS
- HARTFIELD
- CONNEXIONS
- EXTRAVAGANT
- SIBYL
- TREASURER
- CORNELIA
- CARLYLE'S
- QUAKING
- VARY
- ELEANOR'S
- COUNTIES
- CLUE
- GRIZZLED
- MARION
- MOWBRAY
- IMPUDENT
- HATTON
- TURBULENT
- MANETTE
- MATHEMATICS
- FLOODED
- ARGO
- JASON'S
- STRATEGY
- TEXAS
- NEBRASKA
- INCOMPREHENSIBLE
- GRASSHOPPER
- GODFATHER
- FISCHER
- PANTALOON
- CRYSTALS
- ARABY
- CONTEMPORARY
- SIGNORE
- MAJOR'S
- DISREGARD
- DEALER
- SMOOTHED
- MARVIN
- JUG
- CHESTER
- MOURNED
- CURRANT
- PYES
- COMPOSITIONS
- GATHERS
- SLOANE
- COPLEY
- SUBORDINATE
- PRESENTING
- CANYONS
- TINTED
- MOTORS
- SCRIPTURE
- SABBATH
- SENTINEL
- HAVANA
- BENEFITS
- WAKEN
- PRECARIOUS
- CHAPERONE
- KETTLE
- CHANDELIER
- STRUCTURES
- EQUIVOCAL
- TER
- FAINTEST
- TRUDOLYUBOV
- ROUBLES
- MONTH'S
- ARISTOCRATIC
- ANTIQUE
- RUSKIN
- HINGHAM
- OBSERVANCE
- STRUT
- FOWLS
- BYSTANDERS
- HEAVING
- DRAINED
- FIGHTER
- CAPRON
- MARKEN
- EMBROIDERED
- DISAPPROVE
- PHEASANTS
- MOSQUITOES
- JACKALS
- CHECHEN
- SKIFF
- IMPETUS
- CONSTITUTIONAL
- NIVER
- EF
- VERBAL
- CONFINE
- PLANTATIONS
- COUNSELS
- BASKETBALL
- FRICTION
- PLUMBER
- AMBIGUITY
- BRAGTON
- GORE
- EXIT
- MORGUE
- LABORER
- CONFEDERACY
- CONFEDERATE
- HEPSEY
- MATERIALISTS
- PATIO
- COLYUMIST
- LYRIC
- BLASI
- MEEKS
- PHIL
- ABIGAIL
- RIYOS
- GENZABURO'S
- SAZEN'S
- KIYOMORI
- ARGUS
- ARCHIVES
- STEYNE
- GERALD
- GUNTER
- ANGLO
- NIGHTINGALE
- SHOREDITCH
- WAND
- RATIBOR
- AMEN
- REVENUES
- PROPAGANDA
- DAEMON
- HERACLIUS
- POFFENBURGH
- MERCER'S
- COLLIE
- ODIN
- RITZNER
- JUNG
- HERMANN
- ABOLITIONISTS
- ORIOLE
- HAMISH
- WEBB
- RANDOLPH
- AXEL
- LIEDENBROCK
- FRINGED
- REPUTED
- DECORUM
- NEIGHBOR'S
- DINT
- NEGLECTING
- HOUSEWIFE
- SOWING
- PLACIDLY
- SCANT
- THERE'D
- BIRCHES
- CRAB
- UNHEARD
- UPSIDE
- PERFORCE
- UNCANNY
- SLOPED
- PASSENGER
- FRECKLED
- MOONSHINE
- BLOOMING
- MISTY
- SIDEWISE
- ASCENDANCY
- RELATIONSHIPS
- CLASSED
- CONTIGUOUS
- BOLTS
- MADEMOISELLE
- SATIRE
- CONCIERGE
- STUPEFIED
- FORMALITIES
- CRUCIFIX
- GALLEYS
- ACCOMPLICE
- HARSHNESS
- SINNED
- CONDEMN
- DESPATCH
- ADIEU
- PAINED
- HOMOGENEOUS
- DETEST
- FREEMEN
- DOMESTICS
- HILARITY
- IMPLACABLE
- INTIMATION
- FICKLE
- NOTIFIED
- CAPRICIOUS
- GENTLEWOMAN
- DARNED
- BLOSSOM
- CONSISTENCY
- PRESCRIPTION
- POWERLESS
- EMINENTLY
- SARCASM
- SCAR
- PARLIAMENTARY
- NEGOTIATIONS
- RESTRICTIONS
- FORBIDDING
- INJURIOUS
- PARTISANS
- CESSATION
- DIPLOMACY
- CONTINUATION
- OBSTINATELY
- DIRE
- DRAGGING
- DISPUTES
- MASSACRE
- DIGNITIES
- WITT'S
- HOPELESSNESS
- NAVAL
- PUBLICITY
- WHITEHALL
- RENEWAL
- DEXTERITY
- FRY
- SALTED
- COOKED
- BEEF
- CIVIC
- ADDS
- PERMANENCE
- THROBBING
- SUFFERS
- BUDDING
- DISPOSING
- THROB
- DEFY
- FOREBODING
- FORGETFULNESS
- BLANDLY
- PROVIDING
- SORDID
- ADMIRABLY
- RUFFLED
- KILLS
- RUM
- MEDITATIVE
- UNKEMPT
- INFIRMITY
- BANGED
- TWITCHING
- WREATHED
- ANTAGONISM
- CHALLENGED
- LIMPLY
- HONOURED
- CHUCKLE
- REPROACHFULLY
- TESTING
- GRATUITOUS
- CRITICISE
- ARROGANCE
- TACITLY
- GOTHIC
- GRUMBLE
- JUDICIAL
- AFTERNOONS
- FOREGROUND
- COMPLACENCY
- TERMINATING
- PERILS
- SKIMMING
- SWIFTER
- CONTRIVE
- CHARIOT
- BUSTLING
- POTS
- MASKS
- HIDES
- INDIVIDUALITY
- APPEALS
- PATHWAY
- UNAVOIDABLE
- DECISIONS
- BUILDS
- BENDS
- ENVIRONMENT
- NERVOUSNESS
- CONCENTRATION
- CONTENTMENT
- LEVELS
- ADULT
- NOCTURNAL
- BANDIT
- FENCES
- PRISONS
- SHOVING
- SKIRTED
- BROKER
- ORATION
- HARRINGTON
- IMPROBABLE
- MARSHY
- LANDLADY
- MINISTER'S
- TELEPHONED
- KATHERINE
- OBSCURED
- MARIA
- TURMOIL
- REVEALING
- SMITHTOWN
- BASTILLE
- REPRESENTS
- CARLOS
- OMEN
- STIMULATION
- SHROUD
- UNCOUTH
- FLEEING
- IRONICAL
- NOISILY
- NASTY
- BULGING
- PHASE
- ODDLY
- FORMULA
- MECHANICS
- DELAYS
- FAINTING
- GERM
- DRUGS
- VARNISHED
- DISARMED
- ENQUIRE
- TRANSPORT
- SHEWED
- UNJUSTLY
- DIVERT
- ALLEYS
- VETERANS
- RAYMIE
- SPECULATE
- SLABS
- DRENCHED
- REFERENCES
- CLAIMING
- CLUSTERS
- CIGARETTES
- CLARK
- AVENUES
- ELM
- ALLEY
- SCOTTISH
- LECTURES
- CANAL
- JEALOUSIES
- INCONTINENTLY
- PLANNING
- CINDER
- GREASY
- HATING
- WATERY
- VAGUENESS
- BLOATED
- DOORSTEP
- GROWLING
- REDDY
- COACHES
- TRANSACT
- ROBBING
- SCRUPLES
- TAXI
- MATINEE
- SCATTERING
- STALE
- CRUTCHES
- PALLID
- AMORY
- ASPECTS
- STINKING
- PATTERNS
- REITERATED
- PATHOS
- QUESTIONER
- WARMING
- COLDER
- HONEYMOON
- BLENDING
- ABSORBING
- HAULED
- HERMETICALLY
- CHRONOMETER
- SOLIDLY
- SATELLITE
- HEIRS
- MARVELS
- TRANSMITTED
- COSTS
- DASHES
- VEHICLES
- MANUFACTURE
- UNANSWERABLE
- TEMPORAL
- TESTED
- PERSECUTION
- DETOUR
- SLING
- PARTICLES
- MAGNET
- CLUMPS
- HALFWAY
- FIRMER
- AU
- REVOIR
- CHAMPAGNE
- ADVERSE
- ADVENTURERS
- ACCLAMATIONS
- PAYMENT
- RESPLENDENT
- IMPETUOUS
- OSWALD
- REPEATEDLY
- HYPNOTIZING
- REQUESTS
- TECHNIQUE
- CONVICTED
- CENSURE
- MEANINGLESS
- DISTURBANCES
- OVERSHADOWED
- SYSTEMATICALLY
- SUPPLEMENT
- DISORDERS
- EMOTIONAL
- LOSES
- PSYCHICAL
- REFORMERS
- EMPHASIZE
- TRAIT
- NATUREDLY
- DAWNING
- TRAVERSE
- WATERFALL
- PEBBLES
- TINT
- TRIPPED
- VISTA
- ARCHITECTURAL
- NEGLIGIBLE
- MALICIOUS
- ADROIT
- FASTIDIOUS
- CORRECTNESS
- FICTITIOUS
- GNAWING
- DESPOTIC
- IMPROMPTU
- FUSSY
- SUPREMACY
- UNANIMOUS
- INCONCEIVABLE
- INDULGING
- STUBBORN
- MALIGNITY
- SUPERFLUOUS
- UNFLINCHING
- LASH
- LEADEN
- DISTRUST
- MINUTELY
- PREGNANT
- GOODLY
- INTRODUCING
- DANCES
- LOBBY
- LIMITLESS
- DAVID'S
- SCHEMING
- MAGAZINES
- REPLACE
- PARALYSIS
- ACHE
- GLEAMS
- CONFIRM
- INEQUALITY
- COOLED
- AFFIRMATIVE
- OUTSKIRTS
- HEATH
- CONCEDED
- IMPUDENCE
- EXILES
- EXILED
- PRESENTATION
- QUARRELLED
- DERVISH
- DUSKY
- DUNCAN
- LODGES
- ACCORDED
- EXCLUDED
- REPRESSED
- RESUMING
- SICKENING
- EMULATE
- SQUAWS
- ADO
- HATCHETS
- SOFTEN
- STERNNESS
- SCOUTS
- DESPATCHED
- FEARLESSLY
- PROJECTS
- ADVENTUROUS
- PREMIUM
- SEESAW
- ATTIC
- REBUKED
- MANUFACTURED
- PUNCTUALLY
- SAMPLES
- MUSTACHE
- VEGETABLE
- BLACKSMITH
- DELIGHTEDLY
- GINGHAM
- COBB
- SOMETHIN
- RECITE
- SATISFACTORILY
- SWEARING
- PROFANE
- BLOODHOUNDS
- UNRULY
- LYDIA
- ADORED
- REVERENT
- ACCOMMODATE
- COATED
- PROPPED
- CONDUCTOR
- CHAUFFEUR
- TRIFLES
- LINING
- HORRIFIED
- RICE
- FANCYING
- BESEECH
- DISCONSOLATE
- REQUITE
- WRONGED
- AWAITS
- QUESTIONING
- COCKED
- CROSSLY
- BLOND
- PANES
- WATERLOO
- FLANDERS
- BELGIAN
- TATTERED
- TEMPERATE
- SCREW
- BURDENED
- PASSERS
- OPPRESSION
- PHYSICS
- DISGUISING
- PANTOMIME
- SERMONS
- INMOST
- SPIRITUALITY
- PREACHERS
- GROANING
- OPERATING
- MINGLING
- IMPLIES
- YEA
- SAXON
- PROXIMITY
- SHUDDERING
- APPLIES
- MOCKING
- OVERLOOK
- METAPHOR
- FALKLAND
- OVERHANGING
- VEHEMENTLY
- WRETCHES
- MAGELLAN
- DER
- COUNTENANCES
- COUGHED
- YIELDS
- WATERFALLS
- SOLITUDES
- WINTER'S
- INCONSIDERABLE
- EASTERLY
- ISLES
- CONSIST
- JERK
- MERCILESSLY
- BEHOLDING
- INVENT
- OLYMPIANS
- EXPLORING
- RESPONSIVE
- HAUNTS
- UNDENIABLE
- DISPLACED
- CIVILISATION
- PILGRIM
- DISCUSSIONS
- PRETENCES
- ODOUR
- BELATED
- TRUANT
- FROLIC
- VESTRY
- FOOTHOLD
- SNEAK
- AIDS
- MENDED
- CHAFED
- SCARRED
- LULLED
- CONGRATULATIONS
- FLYCATCHER
- OLIVE
- APOLOGIZED
- BUG
- DISTRICTS
- NEWER
- MOUNDS
- CHINOOK
- WASTING
- NETTING
- PADDY
- MODIFIED
- LENGTHENED
- OCCUPIES
- SHAKES
- SOMEBODY'S
- TIRELESS
- CRACKERS
- LAME
- BUNK
- APPARITIONS
- BROOKLYN
- ARC
- ORIENTAL
- RETORT
- TUSH
- EXPLOSION
- PERSUADING
- FAVORS
- CLAMPED
- FIERCENESS
- DISADVANTAGE
- JOURNEYED
- DIMINUTIVE
- HANDSOMER
- BARRED
- GALLOPING
- OBSOLETE
- DOLLS
- WATERING
- SALLY
- SONNETS
- PRELUDE
- REPROOF
- DEJECTED
- MALADY
- SYLLABLE
- PARTIALITY
- ECSTASIES
- BESTOWING
- TRANSACTIONS
- AIMING
- SUSPECTING
- AVOWAL
- DICTATE
- STRIDES
- MARSHALL
- DOCILE
- LONESOME
- APPREHENSIVE
- SIMPLER
- VOYAGES
- FURIES
- TORMENTED
- UNINTELLIGENT
- RECOVERY
- TROPICAL
- SOFTNESS
- PALMS
- WHITEWASHED
- PILGRIMS
- TREADING
- UPHELD
- POUNDED
- LIGHTHOUSE
- HISS
- COLLISION
- BOLTED
- MOANED
- THICKNESS
- VICIOUSLY
- REACHES
- MEAGRE
- NARROWER
- ENLARGED
- HETTY
- CLOUDLESS
- VERGE
- SEASONING
- MORTAR
- CRUMB
- LOBSTERS
- ONIONS
- REALISE
- TINGLING
- ENDEAVOURING
- CAREW
- CONTENTEDLY
- DENUNCIATION
- WEIGHING
- BEADS
- WORKMANSHIP
- WEEDS
- DELIRIUM
- HELEN'S
- SUH
- MOTIONING
- SHEEPISH
- PSALM
- GALILEE
- THRONGED
- RANGED
- PIPING
- SILKEN
- CECILY
- MAGAZINE
- PLEDGED
- COMERS
- EVERMORE
- BRAVER
- GRASSES
- TRIES
- PROFESS
- INSTANTANEOUSLY
- PENETRATION
- MARGARET'S
- WILLOUGHBY
- CONDEMNATION
- SEEM'D
- BABYLONIANS
- FACTIONS
- ELECT
- ENIGMAS
- CANDIDATES
- NARROWLY
- COMMENCE
- SWAY
- COMBATANT
- TILTED
- THO
- DISTINGUISHING
- HURST
- DISH
- WRETCHEDNESS
- NETHERFIELD
- MENTIONING
- TAPPING
- EMERGENCY
- SHARING
- ARRESTS
- QUAKE
- DUN
- VAPOURS
- DESTRUCTIVE
- BORDERING
- CONTAGION
- DISTRESSES
- COMMENCEMENT
- FORBADE
- ASIATIC
- REVENUE
- EXPENDITURE
- LUXURIOUS
- SUMPTUOUS
- SCREENS
- ENTERTAINMENTS
- FAVOURABLY
- ANOMALY
- DECORATIONS
- INNATE
- FARCE
- COMETH
- WOULDST
- TRUSTY
- CARDENIO
- HEEDLESS
- LASTS
- OPPRESSIVE
- INCLINATIONS
- ATTIRE
- UNAWARES
- FERKO'S
- GLUTTON
- TORMENTS
- FOURS
- HEALED
- PACING
- LIME
- PARTISAN
- LACKEY
- COMPRESSED
- SWELLED
- INSPIRE
- CROWNING
- MAIDS
- STATUES
- CELESTIAL
- VISIONARY
- PAPA'S
- PROCEDURE
- KNOLLYS
- HELPLESSNESS
- COMICAL
- EMBODIMENT
- NOTICEABLE
- ENORMOUSLY
- INVARIABLE
- OUTRAGEOUS
- CRUSADE
- ORIENT
- SQUIRES
- JESTS
- SEMBLANCE
- LOOSED
- SLOPING
- HAVISHAM
- FAUNTLEROY
- SWELLS
- CREAKING
- YER
- LONGITUDE
- BROADER
- METAMORPHOSIS
- LIMITATION
- PRIMARILY
- APPLICABLE
- QUICKEN
- REBUILD
- WHATSOEVER
- TRADITIONS
- LIMITATIONS
- SANITY
- RUGS
- SHOULDERED
- COMPARTMENT
- POCKETED
- ADDRESSES
- FORMALITY
- VICAR
- TRACING
- PONDEROUS
- HUNS
- SHEPHERDS
- GAUL
- LEO
- BONIFACE
- CAPTURING
- INTRUSTED
- MOSQUE
- CHANTING
- ASSUREDLY
- CLAMS
- PRICED
- INDIVIDUALLY
- SORTING
- REDUCE
- ESTIMATED
- BELLIES
- PRONOUNCING
- BLINKING
- SHAPELY
- SPAT
- BLONDE
- VIRGINAL
- FLOURISHED
- TREATIES
- LOFT
- TRIPLE
- CRACKS
- CLOSEST
- FASTENING
- BLINDNESS
- FLANKED
- RECTOR
- UNDISTURBED
- ACCOMMODATION
- TOPOGRAPHY
- FATED
- ABOLISHED
- FACILITIES
- MISTRESSES
- TEDIOUS
- PATTEN
- DAMSELS
- EFFECTUAL
- COWPER
- HEATHEN
- ORCHESTRA
- DISREPUTABLE
- RISKED
- AGILITY
- INSPECTOR
- MERCANTILE
- ICILY
- MOYNE
- CLEW
- CLUTCHING
- PURSER
- LECTURER
- HALJAN
- EXIGENCIES
- BLACKSTONE
- EXOTIC
- FRANK'S
- DASHING
- TORTURED
- METALLIC
- LUNGED
- FRISKY
- MEANLY
- DELICACIES
- PUFFS
- SIXPENCE
- QUIVER
- AVERTED
- IMPLORINGLY
- ABHORRENCE
- MANCHESTER
- GLADE
- HARLEY
- DEPRESSION
- GRANDLY
- ANTICIPATIONS
- OVERPOWERED
- CICERO
- CONCERTS
- POSTPONED
- SHYNESS
- FOOTLIGHTS
- YOUTHS
- WAKED
- BROTHERHOOD
- QUIVERED
- COMPLEXITY
- SUPPLYING
- TREMENDOUSLY
- MILDLY
- UNEARTHED
- SHEPHERD'S
- ENTRENCHMENTS
- CHEMICAL
- SWARD
- EXULTATION
- BREEZES
- PLIGHT
- LOVINGLY
- MOLE
- SWAN
- RIDGES
- MOSSES
- FETCHING
- INTERPRETER
- FRESHMAN
- REPRESENTATION
- MASTERPIECE
- EXERCISES
- THRASHING
- STIRRUP
- VESTIBULE
- WALNUT
- RUMOURS
- COUNCILS
- DRURY
- HOLBORN
- CERTIFICATES
- SURROUND
- EXCLUSIVE
- TRADES
- KESEBERG
- GRANDPA
- STARVED
- CHARGES
- SHUNNED
- CALENDAR
- WILLOW
- ASTOUNDING
- COLOMA
- ANNOUNCING
- FOOTED
- BUNCHES
- HOOKER
- FRAGRANT
- TIDY
- RESORTED
- GOVERNORS
- TORPEDO
- PINNED
- WAG
- PIERCE
- TUSKS
- ENJOYS
- SURPRISES
- TIRESOME
- DEPARTING
- CLASSIFIED
- ANNEX
- ABREAST
- LAND'S
- CRAMPED
- UNEXPLORED
- MUTINY
- SQUEAKING
- UD
- SQUIRE'S
- RATS
- AFORE
- CHRISTENED
- WAGE
- INFLICT
- FORFEIT
- AMOUNTING
- DEEM
- PERSIST
- CHASM
- TWISTING
- TRAMPLED
- GROUNDLESS
- RANGES
- GRANDCHILDREN
- RESEARCH
- LORE
- RESEARCHES
- GOSPEL
- GOOCH
- GEMS
- WEAVE
- DRILLED
- MASSED
- FLUCTUATING
- GESTICULATED
- NOISELESSLY
- REVOLUTIONARY
- LABOURS
- DIALECT
- ILLUMINATION
- WARREN
- TRUCK
- SERFS
- THUNDERSTORM
- UNSPEAKABLE
- ORACLE
- FEASTED
- ANT
- WHEREFORE
- COWARDICE
- PINING
- SHRINE
- FLAVOR
- LACKS
- FRANKLIN
- SAGE
- TENANTS
- HORDES
- CONVERSION
- TAMED
- STUMBLING
- MADMEN
- CAUSELESS
- BELIEFS
- BUDDHIST
- UMEGAE
- BRIM
- RUINOUS
- MEDITATING
- ROKURO
- KUBI
- CONFISCATED
- QUICKNESS
- TESTAMENT
- CONTEND
- CONSIDERATE
- FRAGILE
- STEED
- DISCOMFITURE
- RELICS
- DISMAYED
- DESCENDANTS
- SANTA
- HELMET
- PLAYER
- HALTING
- LICENTIATE
- GEESE
- PINIONED
- INFURIATED
- SOLA
- KOVA
- GRATING
- ZODANGA
- CUSHION
- KANTOS
- EMBERS
- CAP'N
- KNOWED
- WIPING
- SCRAMBLE
- BLUNDERED
- ENSURE
- SNATCH
- BAFFLING
- SUNDOWN
- PAINTERS
- CONFUSEDLY
- STRAIGHTENED
- BRUTES
- PREFACE
- FRAMLEY
- ACCUMULATED
- UNWILLINGNESS
- ARCHBISHOP
- SALUTARY
- UNDRESSED
- MOANS
- HOW'S
- COLLAPSE
- RHYTHMIC
- GNAW
- ENJOYABLE
- GRANDADDY
- DEY
- MEMPHIS
- BOGUCHAROVO
- QUARTERED
- KOLOCHA
- OBJECTIONABLE
- COLLINS'S
- RESERVES
- DIFFIDENCE
- PULSES
- UNCOMFORTABLY
- UNALASKA
- MISSIONARIES
- ZEALOUS
- POP
- COMPARING
- MARSHAL
- SKINNED
- ATTRIBUTE
- GEOGRAPHICAL
- INERTNESS
- VARIATION
- ROSS
- GRACE'S
- EMERGE
- GUST
- ELEPHANTS
- ACCUMULATIONS
- ENGINEERS
- AD
- LOUISIANA
- GAMBLE
- UNHAPPILY
- PICKETED
- BRANNAN
- UTAH
- EVA
- OBSTRUCTING
- O'BRIEN
- GANNON
- BRAZILIAN
- NAP
- CUTLASS
- SETTLEMENTS
- CAMPEACHY
- IMPERTINENT
- RECKLESSNESS
- STING
- DREAMY
- UTTERANCES
- PETALS
- ATOM
- OMNIPOTENCE
- IMPARTED
- DIFFUSED
- UNINTERRUPTED
- GUENEVER
- MAYING
- COVENANT
- CONVEYING
- FIREWOOD
- DISCOURSED
- BABES
- AMOUNTED
- SUFFUSED
- UNKIND
- FUSILLADE
- FOOTMEN
- RIFLES
- LUSTROUS
- WRITHING
- CHARGING
- CARPETED
- THUMPING
- SUAVE
- CHUBBY
- UNMISTAKABLE
- RESONANT
- STEPHEN'S
- JOVIAL
- DI
- EXPLOITS
- COLONEL'S
- VOLUNTEERED
- OFT
- BOWLING
- TONGS
- LADYSHIP
- CORPORAL'S
- CZERLASKI
- NOUGHT
- SYRIAN
- ENDANGERED
- OFTENTIMES
- ANOINTED
- DEVELOPMENTS
- DEFECT
- SLEEPLESS
- CONSOLE
- BRAVADO
- TYING
- STEERING
- EXULTANT
- PERPENDICULAR
- BURGESS
- STOOP
- EMITTED
- STEILACOOM
- MT
- JUAN
- TRINITY
- YEAST
- BEQUEATHED
- ROUNDS
- PLASTERED
- BARRINGTON
- ELTON
- BODY'S
- NICER
- DISSIPATION
- PALLOR
- SCRAWL
- ARMCHAIR
- HURRAH
- TORTOISE
- DUCKED
- JERRY'S
- BAPTISM
- STOMACHS
- INFANTRY
- TAUBE
- SPINE
- DECLINING
- MICK
- MUTTON
- DARNAY
- SCAFFOLD
- DISTASTEFUL
- PANELS
- WORKMEN
- PUMPS
- STRIPES
- SEVERED
- BOON
- TALLEST
- FILE
- MOSES
- JUDICIARY
- FOLD
- LIBERATION
- INSINCERITY
- WRONGLY
- CAPITOL
- STUYVESANT
- BOXING
- EVERGREENS
- HARLEQUIN
- LEANT
- THEO'S
- OCTAGON
- NECKED
- BELGIUM
- GIRLISH
- PAMELA
- POLTON
- FATIGUED
- AFFORDS
- REUBEN
- MEMORANDUM
- MONASTERY
- PATRICIA
- DANNY
- PLEADING
- SUSPICIOUSLY
- PANTRY
- STRAWBERRY
- UNACCUSTOMED
- HARRIS
- FADE
- PAUL'S
- CLAP
- SKETCHES
- CHRISTOPHER
- SLICK
- UNDERSTANDS
- JIST
- LOPEZ
- RATTLER
- IGNORE
- JACKETS
- COLOSSEUM
- FLAKES
- STIFLED
- FEEBLENESS
- SCULPTURE
- ETHEREAL
- SEATTLE
- VICTORIA
- RUNG
- JEHU
- LAYS
- SPAKE
- PADDLING
- PRECEDE
- CLEMENT
- EASEL
- PROJECTILES
- TRAVERSING
- MANIFESTATION
- SICILY
- EPITHET
- FORESTER
- MILLER'S
- GILL
- ABSORPTION
- CARNEGIE
- ROCKEFELLER
- FACILITY
- REIGNS
- PREDECESSORS
- CONCRETE
- CHRONIC
- RELIANCE
- MAINE
- OILED
- PEWS
- PURITANS
- FRINGE
- EDUCATIONAL
- PATRIARCHS
- SPECKLE
- COCKLETOP
- ARRANGING
- DISPATCH
- SENTIMENTALITY
- PETTED
- GARDENING
- ENLIGHTENMENT
- WRANGELL
- GLENORA
- MORAINE
- YOUNG'S
- BUSTLED
- VOLLEY
- THORBIORN
- KODAK
- MAITLAND
- CROPPER'S
- ELUSIVE
- ROMISH
- IMAGINES
- TEREK
- PRO
- TARTAR
- ROBSON
- FOSTER
- MILLVILLE
- WEGG
- HUCKS
- THOMPSON'S
- COMEDY
- AFFIRMED
- HOSE
- WID
- WOODYARD
- SOWERBY
- CRAWLEY'S
- ARABIN
- VERBS
- FANNICOT
- REPRESSION
- NESBIT
- LEGACY
- VARIATIONS
- STIMULI
- CHORDS
- OSTLER
- REGULATIONS
- CRIANAN
- HERALDED
- ACADEMICAL
- PREUSS
- GLASGOW
- MANDERSON
- DIRHAMS
- STERLING
- TICKETS
- GREGGORY'S
- FAMILIARLY
- MAXIMILIAN
- CARTERET
- INDOLENCE
- HEREDITY
- REGIS
- WAN
- VERRY
- ROSVILLE
- HOISTED
- TEACHES
- CENTIPEDES
- CONWAY'S
- RODGERS
- GRIMSHAW
- ARAMAEANS
- REVOLTS
- AHAB
- BECKY
- PAYNE
- DENOUNCE
- MAUMEE
- ANKLET
- ANGELES
- ATTIRED
- KEATS'S
- BRAWNE
- ATTAINMENT
- BRYAN
- FERMAIN
- COUNTERS
- GOVERNING
- CENSORSHIP
- OSSOLI
- RITA'S
- RITA
- ZENZA
- BOBO'S
- HAUNTING
- PRODUCTIVE
- SEDUCED
- BAGPIPES
- FALCON
- WEBBED
- GLUTEN
- FONTAINEBLEAU
- BEAN
- SPOOKS
- CHICHESTER
- TRUMPETED
- ERADICATE
- SWEETWATER
- ROMANIANUS
- PLANTAGENET
- OLDACRE
- GRANNIE
- AUXILIARY
- SORREL
- DEFTLY
- MINUTE'S
- RAMBLING
- FURTHEST
- RAPPED
- HAIRPINS
- MARILLA'S
- HEADACHE
- SUCK
- IMPORTED
- BELL'S
- FULNESS
- TENSE
- LOCKING
- YELLOWISH
- VIVACITY
- SCOPE
- DIMPLES
- PITYING
- PLUMES
- KID
- TALKATIVE
- PATERNAL
- ENDEAVORS
- APPELLATION
- CHATTING
- GRADE
- INTRIGUES
- IMPERTINENCE
- TRADESMAN
- PREOCCUPIED
- VAGABOND
- REFLECTS
- INACTION
- CYPHER
- HOOP
- TINK
- LONGINGLY
- TIGHTENED
- MUMMY
- JEERED
- NEVERLAND
- TINKER
- ARTFUL
- SQUEEZING
- FALTERING
- GLORIOUSLY
- SHAMELESS
- LISTENS
- ENQUIRY
- EXPERIMENTAL
- EXTREMES
- ESTABLISHMENTS
- LIVERPOOL
- CLEVERLY
- LISTENER
- INFALLIBLE
- CASPAR
- SULTRY
- FADING
- MEDITATIONS
- DISMISSAL
- DECAYED
- COMPASSES
- PICTURING
- PREDICTED
- SUITOR
- CHERISH
- TINGED
- SORES
- REFUGEES
- AMBASSADOR
- UNTHINKABLE
- COUPLED
- REDRESS
- PORTS
- STATESMEN
- DOWNS
- MERCHANTMEN
- CRUISING
- ADRIAN
- STIFFENED
- HOSTILITIES
- ABUSED
- THAMES
- NORTHWARDS
- RUYTER
- PLYMOUTH
- STRAITS
- EMISSARY
- SUPPLICATION
- FORCIBLE
- MARITIME
- EXTINCTION
- INSTIGATION
- STRANGEST
- DIPLOMATIC
- MOMENTOUS
- BRIBE
- AVOIDING
- CLAMOUR
- DEFECTION
- LABORIOUSLY
- TRIUMPHED
- THICKEN
- CLEANED
- CARROTS
- THICKENED
- LEMONS
- GRATED
- STEWPAN
- STANLEY
- TINTS
- INCUR
- CIVILITY
- RUSTIC
- RUDELY
- WREATH
- SWEETER
- MADMAN
- MUTUALLY
- INFERIORS
- IDLY
- AMPLY
- HACK
- REPLENISHED
- GLARED
- FOREARM
- STORMED
- SNARLED
- COMPRISED
- DODGE
- RAFTERS
- APOLOGETIC
- WHITENED
- GOINGS
- CHAFF
- RENDERS
- COUSIN'S
- ORIGINALITY
- HEDGEROWS
- APPRECIABLE
- VACANCY
- FRAY
- ADORN
- EXCELLED
- BEWAILING
- FATTEN
- SPELLS
- ASUNDER
- LOAVES
- CHERRIES
- UNTIDY
- FARMER'S
- CHARACTERIZED
- HELPS
- STUMBLE
- CONFLICTING
- CONTESTS
- TRADITIONAL
- OUTLET
- HYPOCRISY
- IRRITABLE
- COUNTS
- UPS
- IMPERFECTIONS
- REVERSED
- RECIPIENT
- PROGRAM
- SAVES
- EMBARRASSING
- BENEDICT
- PIETY
- BLUBBER
- BREVITY
- BUMP
- EFFECTUALLY
- SUBSTANCES
- INTACT
- DEEPENING
- INTERMISSION
- CORONER'S
- ADVERTISEMENTS
- PAREDES'S
- PUZZLING
- TRAMPING
- SPADE
- GRANDFATHER'S
- BREAKFASTED
- MORBID
- JUSTIFIABLE
- YESTERDAY'S
- INCONGRUOUS
- SUBTLER
- CLEANSING
- EXCAVATION
- JET
- SCARF
- HANDLES
- UNHEEDED
- EXCLAIMING
- CANCER
- DRUGGIST
- GUARANTEES
- ADULTS
- WRIGHT'S
- BALM
- ASTHMA
- STRAINS
- HOLDER
- WITHER
- RESTORES
- ITCHING
- BENEFICIAL
- SPOONFUL
- PURIFYING
- INNOCENTLY
- AGREEABLY
- OMIT
- ENDEAVOURS
- EMBRACES
- TURKISH
- HEBREW
- LEPROSY
- APPLYING
- INTIMATED
- RAISES
- PREJUDICED
- SINBAD
- CONDESCENSION
- INVADE
- BINS
- EMBLEMS
- MINNEAPOLIS
- LAYER
- TICKLED
- LOCATE
- ENTERPRISING
- BRIGHTEST
- UNIFORMS
- PIKE
- LAWNS
- COMMODIOUS
- SKILLED
- SPHERES
- HAMPSHIRE
- AVIATORS
- PASTOR
- CYNICAL
- ARTIST'S
- FICTION
- HEMMED
- DEVOTING
- SPURIOUS
- DEMURE
- DEVOID
- COILS
- SENSITIVENESS
- JOINS
- DISOBEY
- WADDLING
- GREETINGS
- SQUIRRELS
- PRAIRIES
- UNMOVED
- PROVERB
- MORCERF
- GRIEFS
- THUNDERBOLT
- WIDENED
- COUNT'S
- HOSPITALS
- CLAMOR
- GLIMMERED
- CABS
- COLLARS
- MIRACULOUSLY
- RESTAURANTS
- COARSENESS
- REACTIONS
- GRANDER
- INSINCERE
- MERGED
- BAYONET
- CONICAL
- FLOATS
- LAUNCH
- HUMPH
- DISC
- DECREASED
- REVOLUTIONS
- ARITHMETIC
- LEASE
- SENSATIONAL
- ADVERTISE
- PROPHESIED
- MOVABLE
- HIGHWAYS
- MOISTURE
- THREADED
- HERETOFORE
- CRASHED
- LOOP
- BUBBLE
- RELUCTANT
- NEEDLER
- FLAPPED
- SWIRLED
- WARY
- AWAKING
- SNARLING
- SPITTING
- JERKING
- TURENNE
- TACTICS
- SUBURB
- FRONTIERS
- REVERSES
- STIRRUPS
- OBEISANCE
- REIGNING
- RESIDE
- REBUILT
- EFFACED
- REBUILDING
- CORRESPONDED
- REVERE
- FETE
- INFIRM
- CURATIVE
- DISEASED
- LAYERS
- OVERWHELM
- DESTROYS
- DIRECTS
- HOURLY
- OVERPOWERING
- ADDICTED
- IMMORAL
- INCREASES
- METAPHYSICAL
- THERAPEUTIC
- AGENCIES
- IMITATING
- MISGIVINGS
- PERSUASIONS
- OVERCOMING
- TRANSITION
- POSSUMS
- LONE
- WHITER
- YOUNGSTERS
- POOH
- CLEVEREST
- RECUR
- WHISPERS
- RECEDING
- TROUGH
- APPLIANCES
- REELED
- INTERVENED
- TRANSVERSE
- RAILED
- SLEEPER
- THEOLOGICAL
- FRUITFUL
- EXTRAORDINARILY
- FANATICAL
- CONVICTIONS
- SOULED
- IMPARTIAL
- CONCEIVABLE
- LUST
- PERSPECTIVE
- INVINCIBLE
- EXTORTED
- ACKNOWLEDGING
- NOTORIETY
- SCANDALOUS
- CONTINUITY
- FORESEEN
- SORRENTO
- TOURISTS
- VOLCANO
- REGISTER
- ARISTOCRAT
- POSE
- ENJOINED
- WOMANHOOD
- QUARRELSOME
- DISMISSING
- UNIMPORTANT
- EXPANDED
- EXPECTANCY
- WRIGGLED
- GRIPPING
- NORTHEAST
- JETS
- MERIDIAN
- LUNAR
- SAVANT
- RESTRICTED
- ABUNDANTLY
- DIFFERENCES
- VAPOR
- PERSISTENTLY
- RAREFIED
- EMBARRASS
- GALLANTLY
- ABIDE
- GUSH
- SHALLOWS
- CHAMBERLAIN
- NIGHTCAP
- CACAMBO
- GALLEY
- DECENTLY
- IMPALED
- PORTE
- RAVISHED
- DISSIPATED
- MEDDLE
- JEHOIADA
- POMPEY
- NERO
- PASTRY
- EL
- POPE'S
- UNCAS
- HURONS
- STALKING
- RETRACED
- ADVENTURER
- RIVETED
- CHEATED
- CONTRADICTED
- CHILDLESS
- IMPENDING
- BLACKER
- SCALP
- POLITIC
- IMPLEMENTS
- ORATOR
- BEANS
- ACQUIESCENCE
- YENGEESE
- HUNTS
- DISCARDED
- SHORN
- PATRIARCH
- AFFINITY
- MEDALS
- REVERENTLY
- PERILOUS
- SAGACITY
- DISTRIBUTE
- INFANTILE
- AWARDED
- RECLINING
- PRACTICED
- TUB
- ACQUIESCED
- CIDER
- COLT
- MITE
- REPLIES
- REBECCA'S
- SHAVEN
- DIMPLED
- ACID
- DEARBORN
- GRAPPLE
- CLASH
- HASH
- BURNHAM
- RAISINS
- ELLEN
- S'POSE
- RIB
- MINCE
- CURVING
- JOLTING
- BESIEGED
- CAVALCADE
- DOWAGER
- ACTIVELY
- TRIO
- LUGGING
- JERMYN
- UNPACK
- POPPING
- WATCHES
- PENCILS
- THRILLED
- SUSPENDERS
- FLOCKED
- HEIGHTEN
- ABOU
- BANISH
- UNITING
- WITHSTAND
- APPAREL
- SYRIANS
- SURPASS
- SQUEAKY
- SPARROW
- AGILE
- WENCH
- FISHWIFE
- VOLTAIRE
- DANGEROUSLY
- COMPOSITE
- RICKETY
- ATTACHING
- FURROWS
- TALKER
- ASTUTE
- COMER
- DISAGREEMENT
- PENNILESS
- BAREFOOTED
- VENOMOUS
- TILLAGE
- UNFORESEEN
- CULTIVATOR
- CONCURRED
- FIRMAMENT
- MOTLEY
- BLUNDERS
- FLATTER
- GRADATIONS
- RELINQUISHED
- OMISSION
- PRESUMPTION
- PRACTISE
- IGNOBLE
- NAPLES
- BARBARIANS
- STRIVE
- SCULPTOR
- INCLINES
- THINKER
- INHOSPITABLE
- STUNTED
- BARED
- ARTICULATE
- EDUCATE
- MINSTER
- BEARDS
- HANDSOMEST
- REPAID
- CONFINES
- BOISTEROUS
- RAINING
- INTIMATELY
- BROILED
- OTTERS
- SUBSISTENCE
- BRAZIL
- UNCLES
- HERDS
- QUALIFICATIONS
- AIMLESS
- SUBMITTING
- ANON
- EGO
- GIANT'S
- OUTDOOR
- FRISKING
- BRIMMING
- MUFFINS
- UNICORN
- RACED
- SHOWERS
- HYPOCRITE
- EVADE
- RALLY
- IRRESPONSIBLE
- FULLEST
- CALVES
- ANTICS
- BUBBLING
- VALOROUS
- FRETTED
- WRISTS
- BHAER
- SURVIVORS
- HARASSED
- SHIPPING
- BON
- JOSIE
- HEROINES
- STRENGTHENED
- MATRONS
- TOILING
- REAPED
- DINGY
- SHORTCOMINGS
- CHEBEC
- FEATHERED
- LATTER'S
- CLICK
- SITES
- STRUNG
- MOUSING
- UNPLEASANTLY
- FARMYARD
- OUTSET
- PEACH
- RIPPLE
- IMMOVABLE
- CARVINGS
- BURSTS
- HEADLESS
- RESEMBLE
- SPIRITED
- FAULTLESS
- LOGAN
- OVERLOOKED
- METHODICALLY
- THAT'LL
- YOUNGSTER
- RUNNIN
- FIGHTIN
- RELIGIOUSLY
- OUTSTRETCHED
- LOATHING
- HUSKILY
- GENT
- STEALTH
- ROT
- CAVALIER
- CONTRACTING
- GRATE
- DISOBEDIENCE
- SUN'S
- REVENGED
- DEPRIVING
- MYRTLE
- EXPIRE
- INHABITANT
- DRAGONS
- SERVICEABLE
- PIANOFORTE
- SENSIBILITY
- BARONET
- FRETTING
- COMPREHENDING
- PROFESSIONS
- INADEQUATE
- RUINING
- ACQUIESCE
- CHAGRIN
- PROBABILITIES
- TROPHIES
- STALLS
- ALLUDE
- CAIRO
- VIRTUALLY
- KIRKPATRICK
- SHRIVELED
- DETECT
- SPASM
- TRICKLING
- WRENCHED
- MOLLY
- CUTTERS
- BRUSHING
- SNOUT
- PLASTER
- AROUSE
- ZEST
- BEWITCHING
- PAGEANT
- SERENITY
- BILLET
- ALLOWANCE
- WALES
- CURSE
- SHUFFLE
- SKIPPER
- WRINKLE
- NAUTICAL
- REQUIRING
- TON
- SPECKLED
- SUBJUGATION
- CONVULSIVE
- PROPAGATED
- PERCEPTIBLY
- NEEDFUL
- DISCOURAGING
- SNOWFIELD
- MOURN
- REMINISCENCES
- ACCOMPANIMENT
- ALEXANDRIA
- NETTLE
- NECESSITATES
- SEASONABLE
- HARVEY'S
- MOISTEN
- BROTH
- LIDS
- THRUSTS
- SPOILS
- PRAWNS
- MOISTENED
- COMPROMISED
- OVERLOOKS
- COMPATIBLE
- NEARED
- AEROPLANE
- ALIBI
- FO
- PHARISEES
- NEEDING
- JUDAS
- OVERHEAR
- SNEAKED
- DEMORALIZED
- PEG
- BOWEN
- SPECTRAL
- SIMULTANEOUS
- LEAPS
- SPRUCES
- RUMBLE
- PRETENDS
- CASTS
- NAMELESS
- COMPLIED
- FORGIVEN
- TRIPPING
- IMPEDIMENT
- AFFRONTED
- EASED
- HOTTAM
- LANCES
- CONFER
- IMPERCEPTIBLY
- SUBSTITUTED
- IGNOMINY
- REVOLVING
- MALIGNANT
- EXCESSIVELY
- PEMBERLEY
- PROTESTING
- CONDESCEND
- REBELLIOUS
- DRAGOONS
- LOCUSTS
- STATURE
- SCRUTINY
- TREPIDATION
- DANK
- AVALANCHE
- VERDANT
- UNSETTLED
- GENERATED
- UNPRODUCTIVE
- LEGISLATORS
- ASSIGN
- SUBDUE
- BANKRUPT
- MEXICO
- BOSOMS
- WILDS
- HOLDERS
- LEVIED
- METROPOLIS
- MARSEILLES
- ITALIANS
- PERISHING
- SHARES
- CONDESCENDED
- TRIFLED
- REPENTANCE
- RECTOR'S
- MONROE
- CRAVED
- BUZZ
- TILES
- PROFESSIONALLY
- THEATRES
- LUXURIOUSLY
- MOTHS
- CHATTER
- ELATED
- FEATS
- WELCOMING
- DONNED
- DOUBLET
- YEOMAN
- CRACKING
- CHIDE
- AFOOT
- UNDONE
- KNOWEST
- ROUGHNESS
- DISHEVELLED
- FALSEHOODS
- SIRS
- PRESSES
- DISPARITY
- FOREGO
- FERVOUR
- BETROTHED
- DISTRACTION
- TRAVELS
- DISCONSOLATELY
- TROUBLING
- BLOOMED
- YELLING
- BELONGINGS
- JUSSAC
- CARDINAL'S
- RECOIL
- HURLING
- AJAR
- RENDEZVOUS
- DESIGNATE
- MYSTIFICATION
- THRICE
- PISTOLES
- DEMON
- SEEKS
- MOONBEAMS
- INTIMATIONS
- BRILLIANTLY
- SEPULCHRE
- HORIZONTALLY
- SWEDENBORG
- COLOURING
- RAMBLE
- MEDICINAL
- DERISIVE
- THEOLOGY
- MISGIVING
- DIVINED
- COVERLET
- VOLUNTEERS
- NEWSBOY
- SOLICITED
- EXHIBIT
- CONTRADICTORY
- HYPNOTISTS
- GROSSLY
- CONFESSIONS
- BEAUMONT
- GEORGE'S
- CORRIDORS
- ARM'S
- COUNTESS'S
- TOOL
- LINEAGE
- SYMPATHIZED
- WOT
- SHA'N'T
- DORINCOURT
- EARLS
- DEBRIS
- CABLES
- METALS
- SOUVENIRS
- FERDINAND
- DARKEST
- CONCENTRATE
- TRANSFERENCE
- THIRDLY
- INCOMPATIBLE
- DISCORD
- QUARRELING
- TEACHINGS
- DERYCK'S
- PEARLY
- DERYCK
- COMPLICATIONS
- COPPERS
- THANKFULNESS
- STEAMING
- DEFINITION
- REGULATION
- GREEDILY
- PERUSAL
- RINGLETS
- ECSTATIC
- PALENESS
- THEODOSIUS
- VISIGOTHS
- COASTS
- IMPROVING
- REFORMED
- MOSLEMS
- CAMELS
- SURRENDERED
- CIRCUMFERENCE
- GAUGE
- PANAMA
- REWARDS
- ARRIVES
- GENUS
- ATLAS
- SCHOLARLY
- MUSSELS
- SHIMMERING
- EMBROIDERY
- DIVIDING
- AUTHENTIC
- UGLINESS
- SMELLED
- HOMAGE
- FERVOR
- HEM
- IDEALISM
- PERMITTING
- RUDIMENTARY
- FOREIGNERS
- ADVOCATE
- UPHOLD
- JANGLING
- MAHDI
- RAIDS
- UNWISE
- GABLE
- SHOD
- STRAPS
- POLAR
- GLOVE
- PRIZED
- FREEZE
- ANIMAL'S
- SCISSORS
- DAMAGED
- COMMODITY
- SINKS
- ENTAILED
- AUSTEN'S
- THEATRICALS
- OBSERVES
- INVENTIONS
- HARPSICHORD
- FAN
- FANS
- PREMATURELY
- SPIN
- GARDENERS
- MANUAL
- CONTRASTED
- GROOMS
- DEFAULT
- STIMULATED
- UNDERWORLD
- CRITICALLY
- THINNED
- FEVERISHLY
- SPECULATIVE
- TOPPED
- PLEAD
- OMINOUSLY
- REPOSING
- GENUINELY
- FRACTION
- HOUND
- PACKAGE
- GRANTLINE
- FERROK
- SHAHN
- CORDON
- EAVESDROPPER
- PROWLER
- CRESCENT
- IRONICALLY
- ENCASED
- INADVERTENTLY
- MATHEMATICAL
- FANCIFUL
- ALLURING
- SATAN
- MUSTER
- SNEAKING
- SHAC
- PATCHED
- GLIBLY
- INTRUDER
- WOMANLY
- UNSATISFACTORY
- CARESSING
- OBSERVANT
- FLING
- INCONVENIENTLY
- SNEERING
- SCHOLARSHIP
- HINDERED
- COLORING
- COMPETENT
- THORNS
- ROSEBUD
- SHADOWED
- STAID
- ANCESTRAL
- WAREHOUSES
- FEASIBLE
- COMPARATIVE
- BENEFICENT
- PLASTIC
- DISCIPLINED
- ROUT
- RARITY
- JOURNALIST
- ACCESSIBLE
- WAISTCOATS
- DEFICIENT
- MEANINGS
- GAIETY
- FOOLISHNESS
- DISCOVERS
- IRRESISTIBLY
- INSCRUTABLE
- VANISH
- CHRISTENDOM
- BULWARK
- QUARRY
- WRAP
- POSSE
- HADST
- STAKES
- NARROWED
- HOUR'S
- SEAWARD
- BITTEN
- QUARRIES
- RUDDY
- SCREENED
- INSTRUCTING
- VINTAGE
- NEEDLESSLY
- FURTIVELY
- CHRIST'S
- GILES
- MAYOR'S
- ADVISING
- BEFALL
- SOUTHWARK
- GENERALITY
- INTERVIEWS
- ADOBE
- INQUIRINGLY
- BEREAVED
- CHUNKS
- LISTENERS
- SHOPKEEPERS
- WADDLED
- COWARDS
- UNCOVERED
- BIER
- VALLEJO
- CHEROKEE
- MODELLING
- SKILFULLY
- MUSLIN
- UNFEELING
- ELECTIONS
- REGAL
- CHIVALROUS
- RETAINERS
- EXPEDITIONS
- ABSTAINED
- FACTION
- SATE
- CHARIOTS
- HYPOTHESES
- HOTLY
- UNTOLD
- TITANIC
- RATES
- PLOTS
- COLLECTIONS
- VERSED
- SPERM
- BUCKLING
- FRIGATE'S
- PIER
- HORIZONTAL
- PROPELLER
- SEEKERS
- PASSAGEWAY
- FORECASTLE
- RAILINGS
- CORRESPONDS
- WATERWAYS
- MONSTER'S
- PLOWED
- TRAILED
- UNDULATED
- SEEKER
- BAILIFF
- POYSER'S
- CURTSIED
- REDDER
- PASTURES
- ABSTRACT
- EXPLANATORY
- EVERYBODY'S
- HILARIOUS
- MASSEY
- GRIEVE
- REVENGEFUL
- MORRIS
- SETH
- PROSPECTIVE
- FORFEITED
- REMONSTRATED
- SOLICITORS
- DATED
- MILADI
- INTERPOSE
- CRACKLING
- COMPLIMENTARY
- OVERTAKEN
- REPROACHFUL
- SARCASTICALLY
- DESCRY
- HOWLED
- JOKING
- BLIZZARD
- SASTRUGI
- DOWNHILL
- UNEVEN
- ERECTION
- MOUNTS
- OSCAR
- NILSEN
- CHASMS
- DRIVERS
- EVOKED
- PHOTOGRAPHER
- UNDERLYING
- BASES
- ELIOT
- WHEELER
- SOOTHED
- WAGGING
- REPRESSING
- PONDERING
- THUNDERING
- EMBOLDENED
- RECEDED
- TRIBUTARY
- DISLOCATION
- DEXTEROUS
- PARAPET
- DISOBEYED
- SWATHED
- THREADS
- SUBTERRANEAN
- VISTAS
- TROTTING
- ALTARS
- ECLIPSED
- JUNO
- UNLAWFUL
- PAINTINGS
- PRODUCTIONS
- ADORE
- UNDUTIFUL
- PIGEONS
- ODORS
- OBEDIENTLY
- PRECIPITATE
- JUPITER
- NUPTIALS
- DWELLS
- TEEMING
- MOUTHED
- LIQUORS
- GIN
- MONTREAL
- ALGONQUIN
- SUBSIST
- FORTIFICATIONS
- ATHLETIC
- CHIRP
- UNPERCEIVED
- REPOSED
- CHEERILY
- CHATTED
- FOOLISHLY
- ADEQUATELY
- ILLUSTRATE
- SIGNIFICATION
- NARRATION
- RESIDENT
- DELUDED
- WOODCUTTER
- FUGITIVE
- MONSTROUSLY
- JUDGES
- COMPLIANCE
- TENEMENT
- DESPISABLE
- INTERFERES
- STAIRWAY
- PIAZZA
- ROANOKE
- DOMAINS
- UNGOVERNABLE
- COMBATS
- SCOUNDRELS
- TOILS
- BLESSINGS
- TOOTHACHE
- INACTIVITY
- VIRGINS
- MOURNERS
- DROLL
- CAMACHO
- QUITERIA
- MOODY
- BLITHE
- SINNER
- PROTRUDING
- ANNIHILATE
- JED
- HORDE
- DISORDERED
- RAID
- IDIOT
- GROPING
- SOLDIERY
- DISCERNIBLE
- KAN
- DISCLOSING
- PATROL
- CITY'S
- CONSPIRATORS
- BUNGLED
- GIBBET
- SHIPMATES
- STRANDED
- FLICKERED
- HUMANE
- AVAILED
- TRANSMIT
- ESSAYED
- INDEPENDENTLY
- ORIFICES
- WOMB
- RETAINS
- INVENTING
- DUKE'S
- UNFINISHED
- APTITUDE
- NAUGHT
- ACCURATE
- CONSERVATIVE
- THRASHED
- ICONS
- ADJUTANT
- UNDRESS
- UNDRESSING
- CREAKED
- KITTEN
- FLOPPED
- BUZZING
- SPHINX
- BODIED
- GAL
- RUTHLESS
- ARRIVALS
- CAPS
- TRAITORS
- PIERRE'S
- IDIOTIC
- HOSTEL
- ENTRENCHED
- CONGRATULATED
- STRANGENESS
- RECURRENCE
- GROUPING
- INHERENT
- COMBINING
- RESPECTIVELY
- SOCIABLE
- SEQUEL
- SLANG
- MOLLIE
- CHINIK
- RADISHES
- PRICKS
- PURER
- OBSERVERS
- ANNALS
- SUCKED
- MINNOWS
- ADAPTATION
- PONDS
- HEAVIEST
- ALABASTER
- ITEM
- REPULSION
- REPUTE
- INTERMEDIATE
- INVESTIGATED
- SWARMING
- NANCY
- LOADING
- CLYTIE
- TONED
- SPOOK
- LAB
- MANON
- REASSURINGLY
- IMPRISONING
- MOREY
- SPRINGS
- FLORIDA
- OCCOQUAN
- EXTORT
- DISCREETLY
- BRITAIN
- RECRUITS
- JAMAICA
- PIRATICAL
- BIOGRAPHER
- PERSECUTIONS
- MODERATION
- CAPTORS
- REVERED
- WRITINGS
- TAPESTRY
- VERDURE
- HAPHAZARD
- INDIVISIBLE
- CONSUMING
- ANTICIPATION
- SYMBOL
- HORSED
- SEEST
- AMBUSH
- SPORTIVE
- TAUNTS
- LEOPARD
- TRAPPINGS
- VALOR
- UNWILLINGLY
- WAST
- PRANCING
- AVENGED
- HEAVINESS
- OBSERVANCES
- REMARKING
- HOWARD
- CARRINGTON
- LUCY'S
- LETS
- PORTICO
- LABELLED
- ROADWAY
- PONIES
- PIKES
- NEWCASTLE
- NICETY
- SWIFTNESS
- SURMOUNTED
- SHIN
- MOCKED
- ENCIRCLED
- GOODWIN
- REVELATIONS
- BUTT
- PORCELAIN
- FRAMES
- EVENTFUL
- POPLAR
- ROD
- DRAWLED
- MOUNTAINEER'S
- HEERD
- FER
- GUT
- GRATEFULLY
- NODS
- INVESTMENT
- ROUTH
- CONGRESSIONAL
- FRAUD
- BUCKSTONE
- COUNTRY'S
- CONFIDING
- ABLEST
- VOICED
- VOWS
- SECLUSION
- SLAB
- PROPHETIC
- ILLEGITIMATE
- BAREHEADED
- DEI
- REASSURE
- PRELIMINARIES
- PLUG
- CRITTER
- OFFICER'S
- BENNETT
- PREVALENCE
- DOWNWARDS
- VENERATION
- BESPEAK
- SLOP
- COMPLETION
- PREDICTION
- GRIEVOUSLY
- DISCRIMINATE
- ATTORNEY'S
- UNSUCCESSFUL
- BRIDMAIN
- RESPECTABILITY
- DEFENSE
- DOMINION
- DEDICATED
- ARABIANS
- VENOM
- SOONEST
- DISCOMFORTS
- DEPRECATION
- LIAISON
- MISTRESS'S
- BORROWING
- CHEAT
- RETREATS
- RIPENING
- NAVIGATE
- LESLY
- TAUNT
- STIFLING
- RUSSEN
- LYON
- RUDDER
- BOAT'S
- CRAG
- TUMULTUOUS
- OUTLYING
- EVERGREEN
- PUGET
- DEEPS
- DREAMLAND
- SLACKENED
- OUTBURSTS
- MISHAP
- SHOVEL
- RECRUITING
- POTTER
- RIOTS
- BENIGN
- IMPOSITION
- ADROITNESS
- DESPOTISM
- BISHOPS
- REMINDING
- DECLARES
- ISABELLA'S
- HANNAH
- CONNEXION
- DECEASE
- INSTITUTE
- SAVIOUR
- PROVOKE
- DENS
- UNSTEADY
- SODA
- HUSHED
- TECHNICALLY
- ACCUMULATION
- PROCLAIM
- IMPERCEPTIBLE
- SEVERN
- BARGAINED
- DECISIVELY
- RENOUNCED
- UNACQUAINTED
- NOIRTIER
- PALAIS
- AIDE
- SUFFOCATION
- SAILOR'S
- HISSED
- WAKEFULNESS
- BARBED
- LAD'S
- TICK
- TRENCH
- ELIGIBLE
- PAVEMENTS
- PRECIPITATED
- DELEGATES
- LORRY
- ENTREATY
- ASPHYXIATION
- TENACITY
- ABSORB
- COUNTERACT
- AGONIZING
- DEPICT
- THROES
- TIPSY
- SHARK
- DEGRADING
- LAUREL
- MOUTHFULS
- FORKED
- WRIGGLE
- BAYONETS
- SENATORS
- REPUBLICANS
- DEMOCRATS
- CAROLINA
- WARDEN
- DISAPPROVAL
- LANDSCAPES
- LAURELS
- WRAPS
- COLUMBINE
- OUTSTANDING
- TRANSFORMATION
- PALER
- BROOME
- BENJAMIN'S
- MULTIPLICITY
- ASSISTING
- SOLACE
- CONSTRUCT
- CORROBORATION
- TOUCHSTONE
- SYLLOGISM
- AMALFI
- RIG
- DEARS
- AMBLED
- RAINED
- VERANDA
- SCOWLED
- OUNCE
- SKIPPED
- HARRISON
- LASS
- WORKMAN
- STUMPS
- ANNETTA
- UNPROFITABLE
- TRANSFIXED
- POINTER
- PLOUGHING
- MUNSON
- FITFUL
- BLUISH
- CELIA
- WADE
- PEAKED
- CARESSED
- SULLENNESS
- WAGED
- BLOODSHED
- GLOOMILY
- HIPPOPOTAMUS
- CY
- BILLY'S
- LICKING
- SISSON
- CUMULI
- HEAVIER
- PILING
- BENUMBED
- MOUNTAINEERS
- FILMS
- SPUTTERING
- PROFUSELY
- WAHSATCH
- OQUIRRH
- LILIACEOUS
- CRUMBLED
- BUTTERFLIES
- FRITILLARIA
- SHOOTS
- DOLEFUL
- HEARKENED
- BUSHELS
- FRIED
- MOORED
- INDEFATIGABLE
- CAYOS
- OBLITERATED
- TRUDGING
- CURTLY
- WAREHOUSE
- NUDE
- SWEDEN
- WOOD'S
- EVIDENCES
- CONCLUDING
- SCATTER
- EXCEPTIONALLY
- SOUL'S
- BUCKETS
- TILED
- ECONOMICS
- BANANA
- MULLINS
- SMELTERS
- CONSPIRACY
- MYRA
- CUBANS
- STEPHANUS
- COMBEFERRE
- MONDETOUR
- PHASES
- CONTRADICTIONS
- DOGMA
- JAVERT
- FORKS
- MONTPARNASSE
- SOUS
- THENCEFORTH
- UPRISING
- FATHOMS
- COWBOY
- NEWCOMB
- WAGGED
- MILL'S
- INDUCTIVE
- OBNOXIOUS
- PRESCRIBE
- PREFERS
- AGGREGATE
- HOSTILITY
- MONARCHICAL
- CERTIFICATE
- SWAGGERING
- CONCEALS
- UNPARALLELED
- TIMBERS
- CEREMONIAL
- ARCHED
- COURTED
- CHICKS
- CHANTY
- BOLDEST
- SCRATCHING
- HOWLS
- SCOWLING
- LARBOARD
- HAMPER
- CHOKE
- RUBIES
- CRISTEL'S
- COMPONENT
- COMPONENTS
- CHATEAUBRIAND
- STODDARD
- CRISTY
- INFIRMITIES
- WEEK'S
- SERVANT'S
- SIBONEY
- CHAPPARAL
- CONNECT
- LLEWELLYN
- KANE
- CAVALRY
- FORETOLD
- COWBOYS
- CARBINE
- CRITICISMS
- OCCUPANTS
- FIGHTS
- THORHILD
- THORSTEIN
- ERIC'S
- FISHED
- WINELAND
- ASGARD
- UNQUESTIONABLY
- EXACTED
- SADDLED
- CROPPED
- BARKING
- HEPBURN
- MONKSHAVEN
- HESTER
- CHANTED
- MANUFACTURER
- FOOTSTEP
- TIBERIUS
- CRAZED
- CHASTITY
- GILLENORMAND
- WIGS
- REBUKE
- SOOTH
- SANCY
- SCEPTRE
- COO
- PLUME
- PHRASEOLOGY
- MODIFICATIONS
- PRICELESS
- ACES
- HALLO
- PUCK
- PURCHASING
- BARCHESTER
- INTREPID
- COLLISIONS
- COURFEYRAC
- BLINDING
- ALLISON
- CULPRITS
- SAVELL
- SOPHOMORE
- ANNABEL
- CROSBY
- SCORED
- WHOOPING
- VERSAILLES
- PRUDE
- PIPED
- OCCULT
- HAMS
- DEMEANOUR
- HERB
- MONTEZUMA
- FLESCHE
- WEBSTER
- PRINCETON
- NEILL
- CRITIC
- ABU
- HASSAN
- ZIYADI
- GUESTWICK
- HANDIWORK
- JULIA'S
- THEATRICAL
- KEMP'S
- HARLOW
- IMPULSIVE
- MAGISTRACY
- CHIMERA
- SOOTHE
- FIDDLER'S
- LACHENEUR
- THROWS
- SOMERS
- POTHIER
- HUSBANDMAN
- LEGISLATOR
- MELCOMBE
- GROWL
- PORRINGER
- URSUS
- DOUGHERTY
- CHEEKED
- RIGOROUS
- PALELY
- FRESHMEN
- COUNSELLED
- GROTTO
- TOPICS
- BALDWIN
- ETA
- PYROXYLE
- HERBERT'S
- ADAD
- HITTITES
- ARAMAEAN
- INVADING
- KALKHI
- TEUTON
- MEMORIALS
- COMBED
- ROGERS
- VOTERS
- CYNTHIA
- FAIRFAX
- INTRUDE
- VERDICT
- POINDEXTER
- ERRED
- PALANQUIN
- ANKLETS
- LIED
- OWNING
- HATBORO
- KILBURN
- CALIFORNIAN
- RICHMOND'S
- RUBEZAHL
- COLVIN
- CURFEW
- WAPENTAKE
- QUORUM
- GENEVA
- CONFRONTATION
- MAYNARD
- BURGLARS
- TUNES
- MILAN
- CULTIVATING
- CAPITALISM
- ANTIPATHY
- LEDGER
- SAFIE
- EMPIRES
- SUPERINTENDENCE
- PALESTRINA
- FROSINONE
- CAVERN
- MONOCHORD
- LEGATO
- RUBATO
- PRELUDES
- CURRANTS
- ASP
- GRAHAME
- SHRIMPS
- IMPLICITLY
- CHAUVELIN'S
- LUTHER
- ENIGMA
- PROPERTIES
- LASHER
- NACKERSON
- MAHOMET
- SARACENS
- CHERSON
- PHILIPPICUS
- PORSENNA
- CHASKEY
- PRINTZ
- PLANCHET'S
- TRUCHEN
- ROCHELLE
- INDISCRETION
- UNWITTINGLY
- STEAK
- PETRIFIED
- INTUITIVELY
- BUTTERFLY'S
- ORDINANCE
- BRIGGS
- CRAYFISHES
- NEWFOUNDLAND
- BELOSTOMA
- OUCH
- FEELERS
- TAPPAN
- STOWAWAY
- SUTHERLAND
- AUGUSTINE'S
- PEARS
- MALLESON
- BERNARD'S
- KOREAN
- ANTUNG
- PURVIS
- BREAKER
- THOU'LL
- QUATERNARY
- GEOLOGISTS
- ARCHAIC
- EGOISM
- DECENCY
- SOW
- PATRIARCHAL
- PRIM
- STREAKS
- DISAPPROVED
- SPENCER'S
- SCHOOLING
- JOB'S
- BRUNSWICK
- SHINGLES
- BLANKLY
- WINCEY
- BRAIDS
- AWKWARDLY
- SHYLY
- PULLS
- SEASICK
- PROWL
- DETESTED
- MAGNITUDE
- GUARDIANSHIP
- REPELLING
- RECURRED
- INCURABLE
- EXTRICATE
- CORE
- IGNOMINIOUS
- INFLEXIBILITY
- SUPPLICATE
- MERCHANDISE
- MAGNIFY
- IMPERTURBABLE
- IRREPARABLE
- RUMORS
- UNPUNISHED
- RUNAWAY
- SIREN
- ASSIZES
- BEHAVING
- UNWORTHILY
- ABSURDLY
- KENNEL
- CROWING
- DARN
- SHRINKING
- GENIALITY
- MENIAL
- CONCESSION
- LISTLESS
- IMPARTIALLY
- PAINTS
- SOFTENING
- RECTITUDE
- PALAZZO
- CRESCENTINI
- HOVERING
- RECOGNISABLE
- JUDGEMENT
- THOROUGHNESS
- REPUBLICS
- STRICKLAND
- ABUSIVE
- DEFENSIVE
- STRENUOUSLY
- EMBASSY
- PURSUANCE
- NETHERLANDS
- STRENUOUS
- CREWS
- INTERCEPT
- BALTIC
- SOUTHERLY
- SHETLANDS
- INFERIORITY
- SEAMAN
- WARSHIPS
- PRIZES
- DENMARK
- ASSUMPTION
- INSISTENCE
- CONTESTED
- BLOCKADE
- NEGOTIATION
- OBSESSED
- ACCESSION
- INGRATITUDE
- GUARANTEED
- ENVOY
- CIPHER
- FORWARDED
- VALIDITY
- INSURMOUNTABLE
- DRAIN
- SAUCEPAN
- SPRIG
- CELERY
- NUTMEG
- MALT
- TABLESPOONFULS
- UNITES
- PORTRAY
- SAVOR
- CHILLING
- MUSES
- CANKER
- PRECINCT
- RECOUNT
- SOOTHES
- EXPANDS
- EMPLOYER
- MIGHTILY
- GINGERLY
- KNOB
- STUDS
- NEWCOMER
- KNUCKLES
- STRANGER'S
- GLOVED
- AGONIZED
- SEALING
- THUD
- SHOCKING
- HALLWAY
- DETAINING
- PATAGONIA
- CLANG
- ARCHER
- DULLEST
- GOUTY
- MORSELS
- INCIDENTAL
- DRYNESS
- IMMODESTY
- LICENCE
- CENTRED
- CARICATURE
- FLAGGED
- STEEPED
- MANOEUVRE
- TRIUMPHS
- SPONTANEOUS
- BAS
- HARMONIOUSLY
- LUGUBRIOUS
- PREVISION
- CAPACIOUS
- TIMBERED
- PROFUNDITY
- ENDURABLE
- HURL
- RIVERSIDE
- LAZINESS
- SONOROUS
- BLEST
- MYSTIFIED
- MISTAKING
- INTERTWINED
- SHIPWRECKED
- CRICKETS
- ROSETTE
- ELSE'S
- OVEN
- DWARFS
- CLOTHS
- PANS
- LONGINGS
- INTENSIFIED
- CRIPPLED
- GLORIFICATION
- REPUGNANCE
- EXPLOIT
- RELAXATION
- DEGENERATE
- ACHIEVING
- FAIRNESS
- DEVELOPS
- DOSES
- PARALYZING
- RETALIATION
- INCREASINGLY
- KEEPERS
- SPELLING
- SMACKS
- BENEDICTION
- PULLMAN
- TREMORS
- JAY
- EFFULGENCE
- ELASTICITY
- IMPLEMENT
- CANNED
- UNTROUBLED
- PAWN
- BRIMSTONE
- FIREWORKS
- SPECULATING
- LAPIERRE
- ASTIR
- JONATHAN
- HERBAGE
- PUDDLE
- UNRAVEL
- WOODLAND
- PROCLAIMING
- WESTCHESTER
- TORONTO
- DETENTION
- BURLY
- FUNEREAL
- OUGHTN'T
- AFTERMATH
- TINKLING
- MANNERED
- TEMPORARILY
- FUTILITY
- LOPPED
- GRUNTING
- IGNORES
- SETTLES
- SURER
- ABOMINABLY
- IMAGINATIVE
- LOUNGED
- UNCTUOUS
- MUTELY
- WEIGHTS
- MORSEL
- DOUBLY
- NAMING
- PALATE
- LABORING
- SCALD
- ARNICA
- TUBES
- RHEUMATIC
- NOSTRIL
- USEFULNESS
- PROMOTING
- WRAPPER
- UNSURPASSED
- TONIC
- DUSTS
- VARNISH
- PERFUMES
- CIMETER
- DISCONTINUED
- DINARZADE
- RESOLVING
- MISTRUST
- CONJURING
- ACQUAINT
- WIDOWER
- ARABIC
- POTION
- RAVED
- PROVEN
- ELEVATOR
- LOADS
- RUMORED
- UNANNOUNCED
- WUTHERSPOON
- CROSSROADS
- REGRETFUL
- MINNESOTA
- MORTGAGED
- CIGARS
- ORATORICAL
- PUNCH
- BOUNTIFUL
- HEARTHSTONE
- CAMEL'S
- CRABS
- CHEWING
- WESTERNER
- FOOTSTOOL
- BOOST
- FAMED
- EQUALLED
- PROMOTER
- FILED
- ANXIETIES
- STUDIOS
- EXPLORERS
- METHODIST
- SUPPERS
- AMBASSADORS
- INFIDEL
- SCIENTISTS
- ENLIST
- MILITANT
- SUFFRAGIST
- GRUNTED
- BANTER
- FLATS
- FRENZIED
- SARDONIC
- GROCERY
- PROTECTING
- TIMOROUS
- POISE
- INVOLVING
- IMPERSONAL
- STRAIGHTENING
- ALLEVIATE
- BEAUCHAMP
- AFFABLE
- PURITANICAL
- DISHONORED
- FINANCE
- DEMOLISH
- DEPOSIT
- RALLIED
- PORTFOLIO
- CHARITIES
- WIDOWS
- NABOB
- DEFICIENCY
- MONOTONY
- SKYLIGHT
- CLATTER
- COMPOUNDED
- BORES
- SQUALID
- SUNFLOWERS
- SUMMERS
- STICKY
- DEPOSITS
- MAGNIFIED
- DISTASTE
- DODGING
- UMBRELLAS
- ADOLESCENCE
- CALORIES
- MIND'S
- SUE
- JILL
- INSTRUCTOR
- THOUGHTLESS
- HYDROGEN
- MURCHISON
- COLUMBIAD
- HUMORED
- DEADEN
- PULSATION
- BALTIMORE
- RAMMED
- INTERPLANETARY
- CLASP
- STANDARDS
- AXIS
- CEREALS
- WEIRDNESS
- VITALITY
- TRASH
- ANNOUNCEMENTS
- BRIEFITES
- PARTAKE
- TROLLEY
- WIZARD
- CONSTRUCTING
- WHEREON
- TRANSLATION
- KERM
- CHER
- GLORIFIED
- RYNCH'S
- MOLD
- JUTTING
- WATCHERS
- SPACER
- GROUPED
- WEBBING
- ANGLED
- CLAWED
- THIGHS
- SHIED
- FRANTICALLY
- WORCESTER
- BEVERLEY
- EDWARD'S
- EXPELLED
- DISBANDED
- MERITORIOUS
- RUSSET
- DESERVEDLY
- RIDDLES
- HAMPTON
- BRIDES
- DETAILED
- TELEPATHIC
- SYMPTOM
- ELABORATION
- LEGISLATURES
- RECOGNIZES
- SUGGESTIBILITY
- REENFORCED
- NATURALISTIC
- THERAPY
- ARGUES
- THERAPEUTICS
- INALIENABLE
- URGES
- POSTULATES
- INJECTIONS
- PERVERTED
- OVERLOOKING
- UNINTENTIONALLY
- ABSTAIN
- UNDESIRABLE
- CLINIC
- FULLNESS
- UNDESERVED
- VIEWPOINT
- MASTERING
- UNSTABLE
- PO'LY
- UNS
- BABYHOOD
- OVERHUNG
- FLUX
- GULFS
- EPOCHS
- PERPLEXING
- PANORAMA
- UNFAMILIAR
- WAKES
- CONFUSING
- MAXIMUM
- TRANSLUCENT
- COLOURLESS
- SURGING
- REMOTELY
- DEEPENED
- ENORMITIES
- ESPOUSED
- ALLEGIANCE
- EXPLOSIVE
- EXASPERATING
- UNRESTRAINED
- LUCIDITY
- FEIGNING
- UNSUSPECTED
- EXPEDIENCY
- GROOVES
- DIVERSITY
- ABSTRACTION
- UNDISPUTED
- SICKLY
- INSPIRITED
- INSUFFERABLE
- IRRITABILITY
- IRRELEVANT
- CONSTRAINT
- BLUNDER
- CONSOLING
- INSUPPORTABLE
- REQUIREMENTS
- MARVELOUSLY
- RASHNESS
- UNMISTAKABLY
- PARAMOUNT
- PREDICT
- PROTRACTED
- VEXATIOUS
- LOUISE'S
- IMPOSTURE
- WELDON
- ASSORTED
- MOROSE
- ECSTATICALLY
- BARNYARD
- WETTING
- INSISTENT
- VOLCANIC
- SATURATED
- INCANDESCENT
- HABITABILITY
- SOLAR
- DIFFUSE
- CALCULATIONS
- DIMINUTION
- LINEAMENTS
- SIEVE
- TUNIC
- SNAPPISHLY
- DEVOTEDLY
- ROGUISH
- STEEPER
- BLACKED
- WETTED
- BUMPED
- SIPS
- BUBBLED
- PUNISHING
- UGLIER
- SIGNIFIES
- MUFTI
- PEEL
- ORANGES
- SUPPING
- ASSASSINATED
- DARTS
- LAUDABLE
- ILIAD
- PROFITING
- DEPORTMENT
- ENTERTAINERS
- EMPIRICISM
- INHALE
- WEED
- INHALED
- EDDIES
- SQUAW
- PRECEDES
- TERRITORIES
- LOUNGING
- IMPRESSIVELY
- ASSURANCES
- SALUTATIONS
- FRUGAL
- TOMAHAWKS
- REPULSE
- CORA
- SWALLOWS
- AUDITORS
- EVASIVELY
- SLAUGHTERED
- INSTANTANEOUS
- DUR
- INSINUATION
- DISAPPROBATION
- TEMERITY
- UNCONCERNED
- VIGILANT
- FASHIONS
- WORKINGS
- ADOPTION
- APPETITES
- FAVORITES
- HOUSEWIVES
- JUNCTURE
- WHOLEHEARTEDLY
- PERKINS
- PLUSH
- CEILINGS
- HUES
- SYNDICATE
- TRUSTWORTHY
- INGREDIENTS
- REEL
- MARKETS
- INTERVIEWED
- IMPERIOUS
- BLINDS
- UNFOLDED
- PRONOUN
- INEXPERIENCED
- CONFIDENTIALLY
- CORROBORATE
- UNBELIEVABLE
- NICKNAME
- WRESTLED
- BEREFT
- ALLERS
- SAG
- APPROVINGLY
- BAREFOOT
- BELLE'S
- FESTAL
- GLISTENED
- ENCHANTING
- INFER
- INUNDATED
- DRIVER'S
- PATTY'S
- HILLIARD
- INITIALS
- BRIDESMAIDS
- FERVENTLY
- CHRONICLE
- PADS
- SNIFFING
- DAINTILY
- MAE
- BRISTLED
- JALIB
- MOSQUES
- MAT
- SUSTENANCE
- TESTIFY
- HAROON
- FRAILTY
- ENLARGING
- EXTOLLED
- ALIGHTING
- WAITS
- INCAPACITY
- DURST
- OVERCHARGED
- JAAFFIER
- EGYPTIANS
- CONTRACTS
- WREN'S
- GRAYISH
- REDWING
- BLACKBIRD
- COAXED
- BLOTCHES
- FLARING
- FLEMING
- ASTRIDE
- EBB
- DISPENSE
- GRIEVANCE
- CALAMITIES
- LEAVEN
- DWARFED
- PURSES
- LIGHTEN
- RUSE
- CREDITORS
- HOSTELRY
- DEFINE
- DISPOSES
- SUNS
- ATTACH
- BEHAVES
- LOFTIER
- CULTURED
- UNINTERESTING
- MYSTICAL
- PEDANT
- PLEBEIAN
- COSTUMES
- PREFERENCES
- DESPERATION
- ANDREWS
- VALUATIONS
- UNINTELLIGIBLE
- SUFFICIENCY
- EVOKE
- BOUNDLESS
- CREATIVE
- UNDERGOING
- SHRED
- MIRE
- HAMMER
- GRANTING
- SANCTIFIED
- SANCTITY
- EXPRESSES
- QUESTIONABLE
- FUEGIAN
- PATTING
- SLAPS
- GUTTURAL
- GRIMACES
- INSTRUCT
- INDUCEMENT
- POX
- TACITURN
- SIMPLEST
- HARANGUE
- MOUNTAINOUS
- PEAT
- DESCENDS
- SWAMPY
- PREDOMINANT
- ENLIVENED
- DWINDLED
- SUCCEEDS
- HABITATIONS
- LACED
- SCREAMS
- SUPERSTITIOUS
- PERSONIFIED
- LABORIOUS
- PERPETRATED
- BYRON
- DECREASE
- EMERGING
- KEN
- INANITY
- CHASING
- POPLARS
- PUDDLES
- MUFFIN
- THRONGING
- GRIZZLY
- VALIANTLY
- LION'S
- PANT
- STILE
- RAMBLED
- BILL'S
- LAWLESS
- HEDGEROW
- AIMS
- ALTERNATE
- CHANT
- WHIPPING
- CLIPPED
- HOMEWARDS
- DAINTIEST
- BROTHERLY
- REBELLED
- DESPAIRED
- PLAYMATE
- ECLIPSE
- SUFFERERS
- CONGRATULATION
- POEM
- OATS
- PLEASANTER
- WINTRY
- VIOLINIST
- ORDERING
- PENITENCE
- SOAR
- KINGBIRD
- RACKET
- CRESTED
- SHADING
- RESTFUL
- UGH
- WOODPECKER
- NUTHATCH
- NESTING
- SCREECH
- BITES
- BRUSHY
- SWAMPS
- PEBBLY
- MARKINGS
- BOBBING
- DIFFERED
- SURFACES
- RAPIDS
- BALES
- DISAPPEARS
- FRONTS
- HEWN
- CLAN
- JARGON
- GREEN'S
- DAWDLED
- SKIP
- TWIXT
- BREATHES
- THROBS
- INEXHAUSTIBLE
- WARPED
- ALBUM
- DECREES
- SCULPTURED
- JOG
- SNAPPER
- STEVE
- BULLDOG
- UNSADDLED
- FRYING
- THINKIN
- GLIMMERING
- MADDEN
- PRETTINESS
- FLESHLESS
- DEATH'S
- TREMOR
- REALIZING
- DESERTING
- WINEGLASS
- SPITEFUL
- FAIRY'S
- LOCRINOS
- BIRD'S
- FAREWELLS
- PRISONER'S
- JEWELLED
- FONDLY
- APPEASED
- REJOICINGS
- RAVENS
- MORLANDS
- LANK
- ENJOYMENTS
- PROPENSITIES
- QUOTATIONS
- HUMOURED
- NEEDLEWORK
- FRET
- ESSAY
- AVOCATIONS
- APOLOGIZE
- SILENCING
- WOODSTON
- AFFRIGHTED
- LATTERLY
- SHREWDNESS
- NECESSITOUS
- MURDERING
- PITIABLE
- SANCTION
- REPLETE
- RETROSPECT
- LUDLOW
- EXHILARATING
- EXCLAIM
- PYRAMIDS
- SHRUNKEN
- ADMITTEDLY
- SOLICITOUS
- GARDENER'S
- DAPPER
- SALESMAN
- DAINTIES
- SACKING
- NELLIE
- CRASHING
- HUSSY
- HIDEOUSLY
- DISCOLORED
- DENVER
- FIBRE
- UNBRIDLED
- SPAR
- UNREST
- LUCID
- OCEANS
- RAILS
- CLEARINGS
- UNROLLED
- WISP
- PUNKAHS
- POIGNANT
- PAD
- SPLASHING
- BULKHEAD
- VIVIDNESS
- REPRODUCED
- PERDITION
- ASSESSOR
- SUNBURNT
- DRAPERIES
- BUTTONED
- INCOHERENT
- AUDIBLY
- ELONGATED
- HARBOURED
- EXPANDING
- BINDS
- MEDITATE
- DINAH'S
- TREADS
- WANED
- NAG
- BLURRED
- MUSICIAN
- MILDER
- SHELTERING
- SOFTER
- STARTLE
- QUARTS
- RECIPE
- EIGHTHS
- BATTER
- KETCHUP
- PRICKLY
- RIFLED
- MUSHROOM
- FRAMED
- UNSTEADILY
- CARESS
- CYCLING
- DANGLED
- LENGTHEN
- CAREWORN
- OCUMPAUGH'S
- EXAGGERATION
- BURDOCK
- DISHONESTY
- FELDERSON
- DINNAH
- DONKEY'S
- HOSANNA
- PALESTINE
- PASSOVER
- NAZARETH
- CLINKING
- BAPTIST
- ANECDOTES
- BEAU
- INTOLERABLY
- CONCEITED
- PURR
- THRIFTY
- BLAIR
- SPICE
- PICNIC
- PRECINCTS
- WAVERED
- SOUNDLESS
- MARIANNE'S
- HACKNEYED
- BLASTED
- PLAIT
- EYEING
- JENNINGS
- RAILLERY
- NEWNESS
- ARCHNESS
- HUMBLED
- PERFECTIONS
- HABITUATED
- VICTORS
- VALOUR
- QUERIES
- AMUSEMENTS
- CANDIDATE
- ILLUMINATE
- RETURN'D
- MAGUS
- IMPERIOUSLY
- HOUSINGS
- FEINT
- ANTAGONISTS
- SADDLES
- COMMENCING
- BINGLEY'S
- LOUISA
- AGREEING
- MEANNESS
- APOTHECARY
- LIZZY
- UNVARYING
- WITTICISMS
- WILY
- ASSEMBLING
- FRATERNAL
- UNAVOIDABLY
- DRYLY
- SUBDUING
- QUENCHED
- POURS
- ELEMENTARY
- SUBSERVIENT
- PINNACLES
- THUNDERS
- WRECKS
- DISORGANIZED
- INSIGNIFICANCE
- EPIDEMIC
- LAMENTATION
- TAMPERED
- PARAGRAPH
- GRAVEN
- RESIDENTS
- NECESSARIES
- PLOUGH
- WORTHIER
- MANUFACTORIES
- DIMINISH
- LIKELIHOOD
- CHEVALIER
- GRUFF
- TURRET
- ASSERTIONS
- EXERTED
- LAVISHLY
- STRAIGHTEN
- BARTLETT
- CARYOE
- CHARLIE
- THUMBS
- DISTINCTNESS
- EXPENSIVELY
- TREND
- TWINKLE
- TILTING
- QUARTERSTAFF
- VENISON
- NOTTINGHAM
- TANNED
- WILES
- TROD
- DRUBBING
- SMITE
- THUMPED
- EVENLY
- INNS
- CARDED
- GAITERS
- VOUCH
- DOROTHEA
- VASSALS
- INDUCING
- BETROTHAL
- IDLERS
- STAB
- PRECIPICE
- PANGS
- SCORCH
- WOLF'S
- PLOUGHED
- PERFUMED
- FOAMED
- LAMBS
- EDICT
- GASCON
- APPRENTICESHIP
- MELEE
- CALMING
- LOUVRE
- BLOCKHEAD
- DILATED
- LIBERALITY
- BONACIEUX
- CONJUGAL
- MAJESTIES
- GOLDSMITH'S
- GEM
- CADET
- SHUN
- FLIRTATION
- KNOWL
- SITUATE
- REVERIE
- DEVOURING
- COMMISSIONED
- FLUENT
- MADAME'S
- COUGHING
- HAUNT
- ACCOMPANIES
- KINDEST
- INMATE
- CRACKY
- FUMBLING
- OPERATES
- EXHIBITS
- PHRASES
- UNLIKELY
- OBEYING
- SUBJECT'S
- GLASSY
- SIMILARITY
- DILATE
- COOPER'S
- MAGNETISM
- RESPOND
- PARADOXICAL
- PROGRESSED
- TAM
- MUSKETRY
- PRESUMABLY
- UNEDUCATED
- SINGLED
- TOUGHEST
- WALLED
- SODDEN
- CROWS
- VICTUALS
- INFIRMARY
- LATTICE
- BRUSHES
- BUSTLE
- HORSESHOES
- LONGS
- PARTNERSHIP
- NEWPORT
- BEN'S
- AX
- STRANDS
- TRANSMUTE
- ALEMBIC
- EXPLORER
- DRAUGHTS
- NOBLER
- ARROGANT
- WONDROUSLY
- SMOOTHNESS
- INCORPORATED
- NOTABLY
- SHRINES
- UNCRITICAL
- EXTENSION
- GODDARD
- WEAKENS
- ADJUST
- SIMPLIFY
- INFECTIOUS
- LUSTY
- NEATNESS
- STEAMER'S
- WIMPOLE
- SCANNED
- PRACTITIONER
- KNOWETH
- BLANDISHMENTS
- PORTERS
- CIRCULATED
- WILDFELL
- NIGHTLY
- FROLICSOME
- CEMENT
- PERUSE
- PROSPERED
- BRILLIANCE
- BLANCHED
- HOOFS
- SALUTING
- MILLWARD
- VICARAGE
- DEPRAVITY
- BEVERAGE
- REPREHENSIBLE
- INVADERS
- BARBARIAN
- MAXIMUS
- TIBER
- SACKED
- SEAPORT
- RAVAGES
- TRUCE
- NARSES
- CARAVANS
- KORAN
- MEDINA
- ATROCIOUS
- MULTIPLE
- HAZARDS
- CALCIUM
- NATURALISTS
- SECRETE
- INSIDES
- SOLIDIFYING
- OYSTER'S
- STERILE
- DUBIOUS
- EXTRACTING
- TWOFOLD
- BASTARD
- OPAQUE
- STRAINERS
- CLEOPATRA
- COLLECTS
- SAGGING
- HAIRLESS
- LINGER
- ED
- SUPERVISION
- DISCHARGING
- DAYTON
- COMELY
- PITFALLS
- STRIVEN
- FRAUGHT
- THREESCORE
- MASSACRES
- ARMENIANS
- VIOLATED
- EDITORIALS
- SUDAN
- RAPE
- FULFILMENT
- NORWEGIAN
- TARRED
- ANTARCTIC
- STAYS
- PROPORTIONATELY
- BRIDGES
- WORKSHOPS
- SLACK
- TEMPERATURES
- DAMPNESS
- SEAMS
- DISINCLINED
- SMARTING
- TALLOW
- LENGTHS
- ALTERATIONS
- FISSURES
- PHOTOGRAPHIC
- RELIABLE
- PEMMICAN
- WISCONSIN
- BEACON
- ATTAINING
- NOOKS
- PRETTILY
- PROPRIETORS
- UNDISTINGUISHED
- REPRESENTATIONS
- QUARTERLY
- EDMUND
- RESTLESSNESS
- SAVOURY
- PRACTICABLE
- MINORITY
- ENTRUST
- UNEQUIVOCALLY
- DERIVES
- SUPPORTS
- CLOG
- FINED
- SPINDLE
- LOOSELY
- JADES
- PRESIDED
- FATES
- SKULKED
- TEXTURE
- HUNGRILY
- ATTRACTING
- POCKETBOOK
- GROOMED
- WOWZER
- POKE
- GETTER
- LURE
- IRRITABLY
- POMPOUS
- MAHOGANY
- MELODRAMATIC
- GRUFFLY
- ASHEN
- HELLISH
- KNOBS
- BANK'S
- STEAMSHIP
- DENIALS
- BABBLING
- REELING
- TWEEZERS
- GRANTLINE'S
- SUITE
- SOMBER
- PLOTTING
- JERKIN
- PANTS
- SWAGGER
- RASP
- HERITAGE
- TRAJECTORY
- SNAKY
- JOCULAR
- LASHES
- INTOXICATED
- RADIO
- ASSAILANT
- SIZZLING
- STARLIGHT
- DRAMAS
- LEVER
- STARCH
- UNEXPLAINED
- HOTTEST
- RHETORIC
- UNSHAKEN
- CONSTANCY
- PATS
- REPRESS
- QUETCHAM
- ALLOWANCES
- OFFSPRING
- FIXEDLY
- QUICKEST
- RASHLY
- REVISED
- HOLINESS
- INSTITUTIONAL
- CHIP
- HELSTONE
- FERN
- REVELLING
- URGENCY
- UPLAND
- AUTUMNAL
- GORMANS
- LABOURERS
- BACKGAMMON
- DIXON
- ALIAS
- HALES
- MORTIFIED
- ALTERCATION
- ASPERITY
- MITIGATE
- OVATION
- BARRELLED
- SAGACIOUS
- INCENSE
- KEYED
- SOARED
- MILLY'S
- ODDEST
- ENDLESSLY
- IMPERTURBABLY
- STOCKING
- MASKEW
- EXCISE
- OFFING
- GENTLEST
- SMUGGLERS
- PURBECK
- TWINGE
- SNAIL
- TWOULD
- BRAMBLES
- STRIDE
- RUBBLE
- SUFFOCATING
- HATCHWAY
- WOES
- TUMULTS
- SINEWS
- COMMOTIONS
- SCANDALIZED
- CARNAL
- GOATS
- BOLING
- JEALOUSLY
- ALMA
- MATER
- COLLEGES
- SAKI
- SAKI'S
- BLEAR
- DIPS
- TEASE
- VANKA'S
- RUBLES
- OLGA
- IGNATYEVNA
- WRAPT
- FRIGHTFULLY
- DOG'S
- TOKENS
- VARIABLE
- ABATEMENT
- DISTEMPERS
- SADDLER
- OVERSEER
- PORTUGAL
- APPOINT
- IRRESOLUTE
- NOISOME
- WHEREOF
- INEXPRESSIBLE
- FALLON
- FRESHLY
- DRYING
- HAPPENINGS
- SORROWING
- CUPBOARDS
- PUNISHMENTS
- PANTHER
- LIEUTENANTS
- DESERVING
- HUGGED
- TRIMMING
- FRISBIE
- TUFT
- NEUCHATEL
- ENTRANCED
- HERALDS
- FRAU
- MARIE
- SLIPS
- SENORITA
- CONTRIBUTORS
- TOWERED
- SAP
- WILDMAN
- FLETCHER
- AUGURED
- FEUDAL
- SHAFTESBURY
- VULNERABLE
- CANONS
- PUNCTILIOUSLY
- CONGREGATED
- TRANSACTION
- EDINBURGH
- REVOLTED
- SUPPORTERS
- RAM
- SURVEILLANCE
- PRUSSIA
- SOUNDINGS
- TENFOLD
- RAMS
- DEBATED
- CRYSTALLIZED
- PURGE
- ENERGETICALLY
- FURNACES
- STOIC
- DEFIED
- FANATIC
- KIT
- AFTERDECK
- MAJESTICALLY
- FERRIES
- TENDERS
- WHARVES
- THROATS
- HAILING
- NEGOTIATE
- ISLET
- AMAZINGLY
- BLACKISH
- DENSELY
- CHESTS
- TACK
- STRIPPING
- FICKLENESS
- BUNKER
- DONNITHORNE
- HEV
- HEARER
- CITED
- DISAGREE
- PITCHING
- TONGUED
- MARTYR
- MICHAELMAS
- CALLOUS
- RETRIBUTION
- RUMINATING
- WOUNDING
- LEVISON'S
- WOOLEN
- UNDECEIVED
- CALMER
- AMICABLY
- CAMPING
- UNABATED
- BRAKES
- GUSTS
- SNOWFALL
- TOILED
- FORENOON
- IRREGULARITIES
- BJAALAND
- THORVALD
- ABYSSES
- THOUGHTFULNESS
- CREVASSE
- BALLADS
- JINGLES
- RHYME
- PUSSY
- PUSS
- DESCENDANT
- FLEET'S
- DISMISS
- CRADLES
- RESIDING
- EDITIONS
- DECADE
- ILLINOIS
- NESTLED
- PROFICIENT
- RUFFLES
- HAMMERING
- DEPUTED
- THAWING
- INDISTINGUISHABLE
- BAWLING
- ALCOVE
- OBLIQUELY
- BALCONIES
- ROBED
- ARCHES
- DIPPING
- BARGES
- LONGITUDINAL
- UNDULY
- TESTIFIED
- NOTICEABLY
- VICTORIAN
- LABOURER
- CHINKS
- AISLES
- MASONRY
- VAULTS
- ILLIMITABLE
- YELPING
- DETERMINATE
- USURP
- NUPTIAL
- ZEPHYR
- PERFORMERS
- LUTE
- PREYED
- SICKLES
- FROWNS
- INEXTRICABLE
- WOOLLY
- PROSERPINE
- REALMS
- ALLEGORY
- ALLUDES
- WOVE
- SAPPHIRE
- HOLIEST
- NIBBLE
- ROQUEFORT
- CUBE
- BUNG
- ANDRE
- PALATABLE
- UNCHANGING
- HUDSON'S
- BOUNTY
- INVETERATE
- LIGHTENED
- ONTARIO
- ROAMED
- WINDINGS
- JESUITS
- UNMOLESTED
- RICHELIEU
- DESTINIES
- DOWNFALL
- HINDRANCE
- DULLED
- VULTURE
- FORESIGHT
- ACUTENESS
- REFRAINED
- FLUENTLY
- SECT
- BAMBOO
- NAZORAERU
- GOBLIN
- HERMITAGE
- CAPACITIES
- DEWS
- HAUNTERS
- WICKEDLY
- SUTRAS
- RECITING
- PLUCKING
- BRIGHTENING
- GOBLINS
- SUWA
- SAMURAI
- TRANSITORY
- EXCLUDE
- CROWED
- SPINSTER
- DENOTED
- DEIGN
- ENGENDERED
- MOIST
- EXHIBITING
- CONSTRUED
- DEARER
- REBELS
- COMPLETING
- CASUALTIES
- COSTING
- CHASTISEMENT
- LINEAGES
- PYRAMID
- MAINTAINS
- MONARCHS
- MEDES
- WILLS
- ERRANTRY
- PROVERBS
- CODICIL
- EMERGENCIES
- JINGLE
- BASILIO
- MATRIMONIAL
- NOONDAY
- NOTARY
- AFFIDAVIT
- EXCELLENCES
- SLOTH
- SNORING
- ENVYING
- THRESHING
- SUCKING
- INTERMEDIARY
- PRESERVER
- BACKING
- ACCOMPLISHING
- EMITTING
- FOREBODINGS
- PINNACLE
- CHAINED
- WARHOON
- THARK
- VANQUISHED
- INCUBATOR
- MANIACAL
- FIENDISH
- ABASHED
- STOCKADE
- LOCKER
- DAVY
- JONES'S
- SURVIVING
- SNORED
- SPOUT
- STRANGLING
- NICK
- MAROON
- GUNN'S
- SUPERSTITIONS
- LOWLANDS
- INGRATIATE
- GUNSHOT
- EXPOUND
- COMETS
- NECESSITATED
- FEIGNED
- DEMONSTRATE
- TRAVERSES
- OBSERVABLE
- INANIMATE
- DEDUCING
- KINDLING
- FERMENTATION
- DISSECTED
- VIZ
- CONVENIENTLY
- POUCHES
- ENTRANCES
- AMOUNTS
- SURGEONS
- ESCAPES
- THINNER
- HUMORS
- PUBLISHING
- SPONTANEOUSLY
- ANALOGOUS
- ASTRAY
- FIFTHS
- MESSRS
- BIOGRAPHY
- PLAYGROUND
- SLOWNESS
- EXHIBITIONS
- TRADESMEN
- BATTELS
- DUNG
- SCHOOLFELLOW
- SCHOSS
- DISTRACT
- NATASHA'S
- MYTISHCHI
- QUILT
- SUPPLE
- EARTHEN
- GOSPELS
- WHIFF
- RESTRAINING
- SOMNAMBULIST
- IMPUNITY
- NIGGER
- MARSER
- LARD
- SALOONS
- NATCHEZ
- FORGETFUL
- REPLYING
- ALPATYCH
- OBDURATE
- SHOVE
- BROADSHEET
- ARSENAL
- SHUFFLING
- RIOTING
- BANTERING
- BURGHERS
- MOZHAYSK
- SMOLENSK
- HIGHROAD
- UTITSA
- STIFFNESS
- EXPECTS
- SUSPECTS
- CHARLOTTE'S
- PLEASANTEST
- EXERCISING
- UNACCENTED
- DUPLE
- VARIES
- GOLOVIN
- MINERS
- ARCTIC
- SEAWEED
- IMPEDED
- MAILS
- COT
- LEGION
- LACES
- BOARDED
- JENNIE
- MIGRATION
- DATUM
- SEGREGATION
- QUASI
- SCOOP
- WHIRLWINDS
- INDIGENOUS
- ZOOLOGIST
- ORTHODOXY
- DISCHARGES
- STICKLEBACK
- HERESY
- CRUCIFIXION
- RIGOROUSLY
- ORIGINS
- PRECIPITATION
- ARCHAEOLOGISTS
- ORTHODOX
- CONCEIVING
- CORNWALL
- UNINTERESTED
- LIZARDS
- DAMNATION
- TENNESSEE
- HOGS
- HOG
- NIGGERS
- PARSONS
- DURHAM'S
- WHO'LL
- SPORTY
- CLIMATES
- FOREFEET
- INFAMOUS
- SOMEWAYS
- CARPENTERS
- WIDOW'S
- BUD'S
- TATE
- ENGINEERING
- HANDICAP
- GESS
- HOLATI
- FEDERATION
- PLASMOIDS
- INDUSTRIAL
- RAIDER
- TARGET
- PROFESSOR'S
- HUH
- WINTERS
- OREGON
- BETTY
- COSU
- LOGICALLY
- DEXTEROUSLY
- HERNDON
- MATS
- CLIENTS
- ADMINISTRATION'S
- STRAITJACKET
- CLOSETS
- BAYING
- HABEAS
- CORPUS
- NORFOLK
- LEGALITY
- VIOLATION
- CONTENDED
- MILFORD
- FOLLIES
- EMBLEMATIC
- RESORTS
- WHACK
- EXTENUATING
- ROC'S
- OUTLAWED
- SHIPPED
- PIRACY
- DOUGHTY
- MERIDA
- IGNOMINIOUSLY
- UNFOLDING
- STRAIGHTFORWARD
- AWKWARDNESS
- ALOOF
- BETHINK
- PERMISSIBLE
- GLEANED
- FATHERLAND
- ENTITY
- BEFELL
- SENESCHAL
- FORSOOTH
- DOLEFULLY
- SPURS
- COMMANDMENT
- ASKEST
- KNIGHT'S
- HEREUPON
- APOSTLE
- JACOB
- PORING
- COAXINGLY
- AILS
- ADDITIONS
- DUET
- LESLIE
- ENNA
- SMOOTHING
- VACATED
- BONBONS
- INTERSPERSED
- UNWHOLESOME
- VENTNOR'S
- TIMED
- SUMMONING
- JAVELINS
- WEIRDLY
- DISQUIETING
- GLOBES
- RECTANGULAR
- CLICKED
- PARODY
- SMITING
- ROCKED
- ANNIHILATION
- NORHALA'S
- WRAITHS
- ENIGMATIC
- BILLOWS
- COUNTERPART
- SHRUBS
- SNUGLY
- QUEERLY
- THAR
- FISHIN
- WAYWARD
- PUFFING
- WHAR
- MOTTLED
- GEE
- PEAL
- BUMPING
- TRANSLATE
- ROWBOAT
- YACHT
- BOHEMIAN
- FLINGING
- DISSOLVED
- DOMINICAN
- CLOISTER
- SUPERIORS
- DEATHBED
- MEDICI
- RIGHTFUL
- CALIBRE
- LUCRETIA
- REFEREES
- RAGING
- SCOUTING
- WYANDOTTE
- WILCOX
- ROCHESTER
- BUNTLINE
- LIZZIE
- SHANDY
- CIVILIAN
- CHATTELS
- CITADEL
- TURN'D
- ALL'S
- TRIM'S
- HUMOURS
- FLIMSY
- DISTINCTIONS
- WIDOWHOOD
- RECALCITRANT
- MAGNATE
- STOCKED
- PAPPUS
- JUDEA
- MACHAERUS
- JERICHO
- DEMOLISHED
- AUXILIARIES
- ADVERSARIES
- CUSTODY
- EMPTYING
- SLAYING
- FAINTNESS
- WITHHELD
- FREEING
- INTERPRETING
- MODERATED
- PREPOSTEROUS
- COURTLY
- INDIGNITY
- CHIRPING
- FIREARMS
- RILEY
- GALLED
- DRAFTED
- PENAL
- PRECEDENT
- MUZZLES
- KNELL
- SURVILLE
- VETCH
- BILLOW
- OUTSPREAD
- SPRAINED
- COX
- DESCRIED
- HELM
- SPURTS
- WEARISOME
- THEODORE
- TACOMA
- CONSEQUENT
- IMPROVISED
- CONTENTION
- COWLITZ
- OLYMPIA
- DRENCHING
- SELLS
- BUYS
- AGRICULTURAL
- PATRONS
- SHIRLEY
- GRAZING
- CAUTIONED
- UNCOMPLAINING
- NARRATED
- RECOUNTED
- BREVET
- CANAAN
- PLUMB
- MACKENZIE
- DEDICATION
- CITIZENSHIP
- DISTINCTIVELY
- EQUALS
- FROWSY
- KINGSHIP
- ATROCITIES
- ANT'S
- ATOMS
- ABOLISH
- WORSHIPPING
- FOODS
- PUNY
- PARASITES
- ADAPTABILITY
- RHINOCEROS
- QUIVERS
- ARQUEBUSIERS
- ROBUST
- SOUTHWESTERN
- HOGSHEAD
- JOURNEYING
- EXTORTION
- BRETON
- EXASPERATED
- OUTCRIES
- BROWBOROUGH
- COUNTESSES
- BROUGHTON
- DAUBENY'S
- KENNEDY
- INEFFABLE
- TAYLOR'S
- TUMBLER
- VIRGINIA'S
- NORMAN
- ORGANIZE
- PARASOL
- WINSLOW
- TREASURED
- CAPTIVATING
- PROPOSING
- GRAFT
- SOLICITOR
- REFLECTIVE
- ROTTENNESS
- CROOKEDNESS
- BUTCHERS
- ATTRIBUTING
- UNDERTAKER
- LEAKED
- BULKY
- CLIENT
- DEVISE
- DISSENTING
- BUCK'S
- SCRAPE
- REMONSTRANCES
- SLIMY
- DAMAGES
- YELLOWS
- LOFTILY
- OUTCAST
- ABBE'S
- CHISEL
- FARIA
- PUZZLES
- ENABLES
- OVERFLOW
- REPROACHES
- IMPERFECTION
- NOBLENESS
- INFATUATION
- FREDERICK'S
- CANDID
- SUFFOCATED
- PORTABLE
- CIRCUMSTANTIAL
- BOILER
- BELLIGERENT
- BOMBS
- CANNONADING
- BITTEREST
- HOAX
- SIGNALS
- SOCIALLY
- ENFORCE
- NATION'S
- SOMME
- HAYDEN
- CIGARETS
- CELLARS
- LITHE
- JACKASS
- CARRION
- PETS
- EVREMONDE
- DOGGED
- EMIGRANT
- ENTHUSIASTICALLY
- REDDENED
- REGRETTING
- PRIOR
- CREWMEN
- CUBIC
- DIOXIDE
- TRACTS
- BATTERIES
- YAWNS
- MUTTER
- ALLOTTED
- HEADACHES
- TOWED
- POUNCE
- BAROMETER
- MOLLUSKS
- OPULENT
- JOINTS
- SNORE
- ARGONAUTS
- COMPOSEDLY
- PLOW
- HARROWED
- SPROUTED
- RECOMPENSE
- WREATHS
- POTENTATE
- ENTANGLEMENTS
- ASPIRATIONS
- TUMBLING
- VERNON
- FLURRY
- TELEGRAMS
- LEGISLATION
- WIDESPREAD
- PROPHECIES
- PRESIDENTIAL
- PRESIDENT'S
- SAVORED
- CZAR
- NA
- DERIVE
- MOLESTED
- ADMONITION
- MONTANA
- CALIBER
- SWINDLED
- DICKENS
- REDDENING
- BURGLAR
- CLAUS
- NUISANCE
- CONVENTIONS
- THING'S
- PASTE
- SCHOOLBOY
- UDO'S
- CASUALTY
- GINGER
- BARODIA'S
- IMPULSIVELY
- INTERCHANGED
- AHA
- REPAIRING
- ANASTASIA
- MARRIES
- BERLIN
- FLUTE
- IMPERATIVE
- GHOSTLY
- SMASHED
- CONCUSSION
- DISAPPOINTING
- COLLEAGUE
- REUBEN'S
- IL
- SIRENS
- HORSEMAN
- WATSON
- SCORNFULLY
- MERRICK'S
- STUBBY
- COZY
- MISCHIEVOUSLY
- PAUPER
- COMPLACENTLY
- BUNS
- ISHAM
- AMALGAMATED
- NAPKIN
- AESTHETIC
- BLYTHE
- NICEST
- SMALLPOX
- GRAVEYARD
- WAILED
- DISLIKES
- WRATHFULLY
- IMPROVERS
- LUCKLESS
- MUDDLE
- TABLECLOTH
- EMERSON
- YANKEE
- JONAH
- WAKEFUL
- ANGELIC
- SCHOOLROOM
- SMOKY
- PROVOCATIVE
- SQUEAKED
- HUMILIATED
- QUIETER
- WHISKY
- SECULAR
- CALMED
- FACILITATE
- CLOCKMAKER
- SAWDER
- MARRED
- LOVABLE
- ENTITLE
- ORLANDO
- PATRONAGE
- EDITORS
- GRESHAM
- EXCHEQUER
- MENAGERIE
- CLOWNS
- NORTHUMBERLAND
- FAY
- WINKING
- WAYSIDE
- JINGLING
- SPLENDORS
- PRANCED
- BOTHERING
- TIMBERLINE
- SNOWS
- CHAPARRAL
- BUTTE
- BOULDERS
- PACKS
- ICEBERGS
- HOLLOWS
- SACRAMENTO
- SPRINGTIME
- BOXED
- BOOMED
- BRUISE
- MITIGATED
- OUTPOURING
- DENSER
- GLACIAL
- RAVISHING
- DORMANT
- DAISIES
- INACCESSIBLE
- TERMINUS
- CHARMINGLY
- RAILROADS
- RIDES
- CONES
- FIRELIGHT
- RAVENOUS
- BAIT
- PAPOOSE
- REDEMPTION
- DIGNITARY
- UNWIELDY
- TORCHES
- CRAFTY
- RETRACT
- YOUTH'S
- EMPHASIZED
- DISHEARTENED
- REBS
- BRIERS
- TALKIN
- VEXATIONS
- STATUTES
- TAILORS
- UNRIVALLED
- GNATS
- KNOCKS
- LECTURE
- HAWKS
- WHATE'ER
- INHERIT
- STEWARDS
- VENGEFUL
- DUELS
- SAK
- PUPPY
- DOYLES
- CUSHIONED
- SNORT
- MULTI
- UNMARRIED
- CAT'S
- FEASTING
- TALERS
- COBALT
- ENDOWMENT
- ASBESTOS
- FIZZLECHIP
- HARDWARE
- CASKET
- OUTLANDISH
- TRICKERY
- BANANAS
- DRAFT
- CALCULATION
- AUCTION
- MATTRESSES
- OMNIBUS
- GUTTER
- BRUTUS
- CAESAR'S
- FREDERIC
- WEARER
- MINUS
- ADORABLE
- INTERSECTION
- USURPATION
- PURCHASES
- EPONINE
- UNDERWENT
- FARTHINGS
- CORKS
- JACQUES
- SOCIETIES
- RENDING
- CHANVRERIE
- INFAMY
- PROTESTATION
- INESTIMABLE
- WHEWELL
- DEMOLISHING
- INTRINSICALLY
- INSUBORDINATION
- STUPIDER
- ANTON
- ANTONITCH
- SERVITUDE
- PURPOSELY
- SHAM
- REBUFF
- SIMONOV'S
- APOLLON
- PALTRY
- FUNKED
- HOMESTEADS
- EPHEMERAL
- TRAILS
- HINGHAM'S
- BELFRY
- DERBY
- RIPPLES
- SNOWBALL
- BROOD
- CHICK
- DIGEST
- MOW
- IMPLORING
- SCUFFLING
- GLIDE
- BISON
- PLUMAGE
- SNARL
- TIGER'S
- CAPERS
- BOA
- KEEPER'S
- BUOYANCY
- RECALLING
- SLUGGISH
- ADRIFT
- VASTNESS
- REVOLVED
- VAPOUR
- POISED
- UNRECOGNIZABLE
- REMINDER
- BLAMING
- NATIONALITIES
- ALEXANDER'S
- INTERACTION
- MISDIRECTED
- BOURBONS
- STAEL
- TALLEYRAND
- MURDERS
- INTRINSIC
- LADDERS
- NATHAN'S
- TRIBUTARIES
- TERMINAL
- STEADYING
- ROYLAKE'S
- UPPERCLIFF
- DRAINS
- FALSELY
- GLOODY'S
- STEPMOTHER'S
- DIMMED
- EARTHY
- NOSEGAY
- WARNINGS
- SANTIAGO
- UNDERGROWTH
- SCRIBBLED
- ROUTED
- SOLDIER'S
- THORGEST
- GUNNBIORN
- ODIN'S
- TIGHTS
- ERICSSON
- WHIZ
- CLATTERED
- PRANKS
- PRESS'D
- PITIES
- DUNKIRK
- PRITHEE
- ACQUITTED
- FRIAR
- DISSERTATION
- PATHETICALLY
- PREDESTINED
- HALTER
- OLENIN'S
- ABREK
- LUKASHKA'S
- KUNAK
- LUKE
- MARYANKA
- DRABANT
- VANYUSHA
- UNDREAMT
- WILLIAM'S
- SYLVIA'S
- YO'R
- IDENTIFY
- EXPEND
- LUNNON
- INTERROGATOR
- LOITERING
- LIGHTNESS
- IVER
- DEMONS
- RANGING
- FOUNDING
- SCENTS
- RUMOR
- INDISPUTABLE
- EVOLVED
- TRODDEN
- MADELEINE
- SHROVE
- METAPHORS
- PROUVAIRE
- COQUETTE
- ERR
- BROOCHES
- PRELATES
- WUZ
- KASE
- DAR
- LEER
- BRAGGER
- PADDLES
- LUFTON'S
- SCOWL
- DAMASK
- GRANTLY
- CROUP
- ARTISANS
- PRECOCIOUS
- MARVELLOUSLY
- INTRACTABLE
- O'MALLEY
- IMPLICATED
- HARLOWE
- THEMES
- HUSTLED
- SCION
- BOUDOIR
- KNOTS
- PARBLEU
- BOUQUET
- CATASTROPHES
- DIAGNOSIS
- DEALS
- HYSTERIC
- WIDEST
- PRESCRIPTIONS
- REDUCTION
- STIMULATE
- BRAITHWAITE
- WORTHILY
- TREMULOUSLY
- PANEGYRIC
- BANDAGED
- WIRED
- VOLOR
- UNREASONABLY
- DRAPERS
- HALLIDAY
- NOBBLER
- DIGGINGS
- FRAZER
- POLLUTION
- LAURENT
- VIOLIN
- WAISTBAND
- FINDLAY
- INCE
- CONTINGENCIES
- TWEED
- ULSTER
- CAIRNGORM
- BOULDER
- VERACITY
- FUNNEL
- ATHLETES
- RACIAL
- SIOUX
- INFLUENTIAL
- ASSIDUOUSLY
- AROUSING
- PRECEPTS
- SECTARIES
- CURTIUS
- RIVULETS
- IMPRACTICABLE
- FOUNDERS
- LOWELL
- MORTON'S
- DILLSBOROUGH
- SWITCHED
- MARVELLED
- SHAHRAZAD
- WIGHT
- KHORASANI
- SUCCOUR
- HATTIE
- SKAGGSY
- ALLINGTON
- ENVIES
- CROFTS
- ANIMOSITY
- AIL
- RECITATION
- QUAINTLY
- NORA'S
- LEVICE
- HARMLESSLY
- FREIGHTED
- COLLECTOR'S
- WHEW
- MUSHA
- ENDOWMENTS
- INARTICULATE
- CATHOLICS
- WEALTHIER
- PAPISTS
- BOTTLED
- COLLOQUY
- HOOPER
- DICKY
- BLINDFOLDED
- HOLMES
- THUMP
- PIPER
- LECOQ'S
- EUGENE
- OAKEN
- LOITER
- POLYTE
- UNREASONING
- JOSIANA'S
- STUART
- ESSEX
- CUPIDITY
- FADES
- DIMINISHES
- DEFORMITY
- HURTFUL
- PRICK
- POULTRY
- PUGILIST
- NECKLACES
- CHARTERS
- GIBRALTAR
- MULTIPLICATION
- VALUATION
- DISUSE
- SHEDS
- SHREDS
- DUKES
- SUB
- SHREWDLY
- BARGAINS
- EMBITTERED
- FATHERLESS
- GARRET
- CASSY
- GRAND'THER
- SPONGE
- MERCE
- SAFFRON
- UNFALTERING
- OSBORNE
- WHOOPED
- HELIOTROPE
- PHENOMENAL
- CORLEONE
- ATTICS
- BURNISHED
- ALCOVES
- PROW
- LUMBERING
- TYPEWRITER
- STOCKTON
- UPLIFT
- SEDATIVE
- NOVELIST
- CHESTNUT
- WHISTLES
- WINNINGS
- DIETERLI
- GARRETS
- MARDEN
- WALLACE'S
- PLAYFELLOWS
- UNCHRISTIAN
- INSTRUCTIVE
- GNOMES
- MOULDERING
- CORK
- YE'D
- CAROLS
- HOWLAND
- METROPOLITAN
- INKLING
- WITCHES
- MISDEEDS
- MINAMOTO
- PROVIDES
- TOKIWA
- IOLCUS
- INITIAL
- GRAZE
- PALISADE
- HEMORRHAGE
- ACCUMULATE
- EFFICACIOUS
- PREMATURE
- SYRO
- CAPPADOCIAN
- MUSKI
- URARTU
- ARBELA
- SURU
- BRAKE
- OMRI
- JORAM
- HAZAEL
- SHAMSHI
- CAMPAIGNS
- MONSTROSITY
- AIRSHIPS
- FOSTERED
- STYLED
- MUSTY
- MANUSCRIPTS
- TYPOGRAPHICAL
- PUBLISHER
- MARCY
- DOBBIN'S
- EMMY
- DOBBIN
- ADMIRER
- BEFRIENDED
- INEXPERIENCE
- CALTHORPE
- CASSIUS
- CALHOUN
- MISSILE
- MORTGAGE
- VENTURES
- EUCLID
- SHOWMAN
- FAIX
- SAXONS
- DANES
- EMPERORS
- SUARD
- CHRONICLED
- SUFFRAGES
- PENSIONER
- ACCUMULATING
- MAINTAINING
- IMPAIRED
- MISLEAD
- AUGMENT
- TRAMPED
- STARLING
- CAVELL'S
- WALLFLOWERS
- DOMINATION
- EVOLVE
- EDWIN
- UNDIGNIFIED
- GRATIFICATIONS
- LEEDS
- UPPERMOST
- TAPERING
- CARRUTHERS
- KENSINGTON
- ORANMORE
- TERMINATED
- IRRETRIEVABLY
- BEQUEST
- JASMINE
- STORK
- VIRILE
- FENWOLF
- CHATHAM
- AVE
- BARKILPHEDRO
- CORONET
- COUNSELLORS
- MEDWORTH
- NORMANDY
- POACHERS
- CABINETS
- HOST'S
- SPITS
- STACKS
- PROPHETSTOWN
- BELLERS
- MASHED
- INDUSTRIES
- CLEVELAND
- CONTINENTAL
- PLUTOCRACY
- CAPITALS
- PROPAGANDISTS
- FANATICISM
- WHEREWITH
- CHAOTIC
- BIDS
- MOTTO
- AUTOGRAPHS
- AUTOGRAPH
- CHINK
- GUITAR
- POISONER
- PAPAL
- HORATIUS
- CARLINI'S
- DERIDED
- CLAVIERS
- JACKS
- FORESHADOWED
- FUGUE
- MOZART
- FUGUES
- HARMONIC
- CONFLICTS
- ELSNER
- ASSIGNING
- ENRICHING
- FUNDED
- MICHABO
- CLOUDCREST
- O'SHAUGHNESSY'S
- JARRED
- TILLIER
- POTTED
- PERCY
- ECCLESIASTICS
- DEVOUT
- PONTIFF
- EMENDATION
- AVOIDANCE
- WRUNG
- MARTYRS
- SPORTSMAN
- BULGARIANS
- DEFEATS
- THEODOTUS
- BEHEADED
- BAKING
- RATION
- DULCET
- SWEDES
- HELSENBURG
- PROTESTANTS
- UNDOUBTED
- PORTHOS'S
- PURRING
- COMPLIMENTED
- CONJUNCTURE
- IMPETUOSITY
- PREROGATIVE
- POPERY
- HUGONOTS
- IMPEACHMENT
- PARLIAMENTS
- SUBJECTION
- USURPATIONS
- BUTTERNUT
- A'RONY
- HEZ
- PYTHAGORINA
- FLATIRONS
- COONSKIN'S
- SPINAL
- GRUNTS
- DUELLO
- HOBBY
- DECANTER
- SPECTERS
- CLANK
- WRAITH
- YAK
- PROLONG
- PUFFY
- BARMAID
- JESSIE
- BOURBON
- BOGNOR
- CHARMING'S
- SLAPPED
- BLENNIES
- TADPOLE
- RAD
- DOTS
- CHURCHWARDENS
- PEARSON
- BATSY
- PRIMAL
- SIMON'S
- CONNISTON'S
- TOBACCONIST
- MIGHTIEST
- YALU
- MANCHURIA
- SLAV
- TRAININ
- THOT'S
- AMNESIA
- INCEST
- AFFECTIONAL
- ALDERS
- CASCADE
- PROP
- MISSIONS
- VOLUNTEER
- BETOKENED
- PONDER
- AFTERNOON'S
- EMBOWERED
- HOMESTEAD
- PROVERBIAL
- DISTRUSTFUL
- MATTHEW'S
- AGGRESSIVELY
- KANGAROO
- AUSTRALIA
- ASYLUMS
- REGULATED
- INNOVATION
- HOPETON
- SPRY
- CHORES
- PRIDED
- QUALMS
- POISONING
- PESSIMISM
- STEADIER
- FILMY
- UNGAINLY
- STATIONMASTER
- BEARDING
- SHUFFLED
- LIPPED
- SCRAWNY
- DEFERRED
- GORGEOUSLY
- BANKRUPTCY
- EUGENIE'S
- DISSENSION
- GRISETTES
- CLEAVING
- BETRAYS
- ANDREA'S
- INVOKE
- PROVENCE
- LAZARETTO
- ELEGANTLY
- COMPLAINING
- MISCHANCE
- INSULTINGLY
- EGOTISTICAL
- EXTENUATE
- CORRECTION
- LUCCA
- DISGUISES
- WEAKNESSES
- PERVERSITY
- PARDONABLE
- HALVES
- NIBS
- UNFAVOURABLE
- TOOTLES
- CRAFTILY
- WENDY'S
- COMFORTER
- TARTLY
- NURSE'S
- NANA
- ADMITS
- DARNING
- FORLORNLY
- FORGETS
- RECIPROCITY
- DISPLEASES
- DETESTING
- INTERVIEWER
- WAITER
- HANDSHAKE
- DISBELIEF
- BUNCHIE'S
- FINITE
- MISSIVE
- ALTERNATIVES
- CRITICISED
- CASHMERE
- WATERPROOF
- SKETCHED
- PERSISTING
- GENOA
- INTERLUDE
- MUNIFICENT
- SALIENT
- APPRECIABLY
- WOO
- DECADES
- DORESLAER
- ROYALIST
- FORMALLY
- COMMONWEALTH
- EPITHETS
- COALITION
- OFFENDERS
- ENGLAND'S
- SQUADRONS
- SEIZURE
- PRIVATEERS
- ADMIRALTIES
- REINFORCED
- COMMERCIALLY
- PENN
- ENCOUNTERS
- INTERCEPTING
- CALAIS
- CLAMOROUSLY
- SHOAL
- KENTISH
- DUNGENESS
- INDOMITABLE
- GALEN
- LEGHORN
- ADEPT
- CONCILIATORY
- COINCIDED
- CROMWELL'S
- RUMP
- DEANE
- SOLIDITY
- INSTITUTED
- REDUCING
- UNA
- REFIT
- REARGUARD
- CHICANERY
- RATIFIED
- RATIFICATION
- PEREMPTORY
- OUTCRY
- DECIPHERING
- GARNISH
- SLICED
- CUPFUL
- MUSHROOMS
- MINCED
- TEASPOONFUL
- YOLKS
- BAKE
- ESTABLISHES
- IMPUTATION
- CENSORS
- GLOWS
- COMMEND
- BEHOLDEN
- ADHERENCE
- REMEMBRANCES
- DEROGATORY
- VISITATIONS
- DRUGGED
- INFUSE
- REHABILITATE
- PEACEMAKER
- MEREDITH
- REMINGTON
- HEALS
- UTILIZE
- BLACKMAIL
- AIRILY
- DEPRECATING
- BODYGUARD
- SECRETARIES
- DEFERENTIALLY
- SPANNED
- SALVER
- PEREMPTORILY
- LOOSENING
- VALET'S
- MANIPULATION
- UNNECESSARILY
- PURSED
- DROOP
- EYELID
- SUPERSCRIPTION
- INDOOR
- BEALE
- UNPAINTED
- NOBODY'S
- SITTIN
- PEASANT'S
- MANACLED
- ASSASSINS
- ALLAYED
- SANDWICHES
- TESSELATED
- SARDONICALLY
- FUNK
- SYMBOLIC
- PASTEBOARD
- NATIONALITY
- TOUCHETT'S
- SAVOUR
- LIVELIEST
- ACCUSING
- TASTELESS
- INCENTIVE
- FORECAST
- DISCREDIT
- REFUTING
- DEVOLVED
- PERSISTENCY
- LATTICED
- PORTMANTEAU
- VOCABULARY
- FELICITIES
- FESTIVE
- CANDLESTICK
- PONT
- FORERUNNERS
- TINKLE
- WITCH'S
- TOAD
- TIGHTENING
- DISMOUNTING
- BUCKLES
- GORSE
- TRANSFORM
- MILKED
- PAILS
- RYE
- SQUINTING
- SHUTS
- DELIVERY
- APPRAISE
- HINDRANCES
- PERSONALITIES
- WASTES
- COOPERATION
- CLASHES
- PRECEDENCE
- FAULTY
- CRAVE
- BOSS
- INFLUENCING
- SCRUTINIZED
- EMOTIONALLY
- WAVER
- SHAKY
- MANNERLY
- WEDGE
- WIDENS
- DISLODGED
- PERPLEXITIES
- SCALING
- FRIGHTENING
- MANNERISM
- WANE
- UNTURNED
- HELPFUL
- UNHEALTHY
- RELINQUISH
- MATURING
- UNATTACHED
- TOPERS
- OUTLAW
- HARBORS
- KILLERS
- INSANITY
- MENU
- PHYSIOGNOMY
- SENSORIUM
- FILAMENTS
- DIVESTED
- ADMINISTERING
- SHINGLE
- DISTILLED
- BOWERY
- BRACELET
- CRANIUM
- HADES
- BUM
- PEACOCK
- UNINJURED
- MORASS
- INCONCLUSIVE
- MINUTENESS
- CONVENED
- PROSECUTE
- HAMILTON
- OVERALLS
- BLACKBURN
- LESSENS
- HOWELLS'S
- UNCOMMUNICATIVE
- GROPE
- COURTHOUSE
- ROTUND
- PANAMANIAN
- ARRAIGNED
- BYGONES
- PERMITS
- MOURNER
- UNCOVERING
- IMMATERIAL
- CUFFS
- EVERYTHING'S
- SPASMODICALLY
- SOMBRELY
- NECKTIE
- INSTABILITY
- HOPEFULLY
- CROQUET
- ULCERS
- SCROFULA
- CAUSTIC
- PORES
- MARES
- INFANTS
- HYSTERIA
- CLEARS
- ACHES
- ASHLAND
- LUXURIANCE
- INVIGORATES
- DEBILITY
- KIDNEY
- LITE
- WOODWORK
- PIANOS
- ENAMEL
- SCRATCHES
- ETHER
- CORRESPONDENTS
- INCOMMODED
- BEWAILED
- REGULATING
- MERCHANT'S
- BARRENNESS
- BELLOWED
- FITTER
- COMBATING
- BELLOWING
- COMPASSIONATE
- LANGUISHING
- LEPROUS
- POTIONS
- ENRICH
- PERSPIRE
- VIZIER'S
- DILIGENT
- CRUCIFYING
- EZRA
- POSTMASTERSHIP
- SINECURE
- BARTENDER
- SOLICITATION
- STERNER
- DIVULGED
- TOWN'S
- JUBILANT
- IOWA
- TOWNSMEN
- MINNIEMASHIE
- SAUCERS
- INVESTMENTS
- NELSON
- HAYDOCK
- FISTED
- BULLING
- MOINES
- MATCHLESS
- DAUNTLESS
- QUESTING
- APPLAUDING
- DISILLUSIONS
- MADRID
- DOMESTICITY
- MAGNOLIAS
- CURTAINED
- SHANTIES
- TINCOMB
- TABERNACLE
- PLAID
- FLIPPANT
- NEWSPAPERMEN
- BUREAUS
- CONTAMINATION
- LADYLIKE
- CHEMISTS
- ENVELOPES
- CHESAPEAKE
- SCOFFING
- ENTHUSIASTS
- DUDES
- MULTITUDINOUS
- INVEST
- EAVES
- ANECDOTE
- UNBELIEVABLY
- OBSEQUIOUS
- ARCHITECTS
- TRACEABLE
- REFUGES
- PETTINESS
- AFFLICT
- TUNNELS
- CLOUDY
- CAMBRIC
- PUNCTUAL
- CRAPE
- AVIDITY
- MORREL
- VILLANY
- SIGNING
- NEAPOLITAN
- WRESTED
- THOMSON
- TRANSACTING
- QUITS
- REPELLED
- CREDITOR
- GUILTILY
- DISCOUNT
- FOI
- CREEDS
- OPALESCENT
- PAWNED
- INTERLACED
- SNIFFED
- FETID
- USHERS
- SQUAD
- CAPES
- DETACHMENT
- SUBWAY
- LEERING
- PHANTASMAGORIA
- BLISTERED
- NIGHTMARES
- ENVELOPING
- SPOONS
- HERDED
- CYNICALLY
- ACCUSATIONS
- AUTO
- PASSIVELY
- SIMPER
- GROTESQUELY
- WORRIES
- APPENDICITIS
- FROGGY
- PARKER'S
- DOUGHNUTS
- RECEPTACLE
- DIVAN
- NICHOLL'S
- COMMUNICATES
- CANINE
- BETS
- LAUNCHING
- SUSTAINS
- REVOLVES
- DIZZINESS
- BEAUTIFYING
- TRANSPORTATION
- UTILIZED
- GASEOUS
- HOISTING
- EXPERIMENTING
- PROPELLED
- EXCEL
- FENCING
- SWEATING
- FEATURELESS
- LIMBED
- POTENTIAL
- DOWNSTREAM
- CROUCH
- SHRUGGING
- BETRAYAL
- TIMELY
- FINGERING
- BEAST'S
- THANKFULLY
- PRUDENTLY
- AROMATIC
- LURKED
- LURCHED
- RIVER'S
- TRAPPED
- VINCENNES
- LEVEE
- ENLISTING
- HUMPHREY'S
- INTENDANT
- OVERTURES
- NOMINATED
- KERCHIEFS
- GALLANTS
- ASHLEY
- DISCONTENTED
- PROFFERS
- HEATHERSTONE'S
- MONEYS
- PRESENTATIONS
- EMPOWERED
- MISUSE
- INTERDICTED
- AWAKENS
- ANTAGONISTIC
- OVERRUN
- PSYCHOTHERAPIST
- AUTOBIOGRAPHY
- USER
- ELIMINATE
- WILLFUL
- VALUES
- INBORN
- UNFIT
- ADJUSTING
- PSYCHIATRY
- PSYCHOLOGISTS
- FREEST
- REENFORCEMENT
- EXPANSION
- HAMPERED
- ADMINISTER
- REMODELING
- INATTENTIVE
- EDUCATORS
- UNTRAINED
- ARTIFICIALLY
- RETARDED
- ANTISOCIAL
- SUPERFICIALLY
- INTRODUCES
- MARSUPIALS
- CHATTERER
- REPROVINGLY
- BRAMBLE
- BLACKY
- CONFLUENCE
- SYNTHESIS
- SUBCONSCIOUS
- RECUMBENT
- CONTOUR
- PASSER
- BREAKERS
- MINIMUM
- MISCALCULATED
- ARMCHAIRS
- STIMULUS
- TRANSPARENCY
- VIANDS
- SUBTLY
- ANOMALIES
- INSENSIBILITY
- EXCESSES
- INCREDIBLY
- FERTILITY
- ARGUMENTATIVE
- DICTION
- PRESENTIMENTS
- NEIGHBORLY
- SILENCES
- DISCOMFITED
- VOLUBLE
- BEMOANED
- ALLEGE
- PARADES
- CULPABLE
- INTERROGATION
- EJACULATION
- EXTENUATION
- INCITED
- INCOMPARABLE
- CLUMSINESS
- PETULANCE
- RIOTOUS
- MALEVOLENCE
- EMBARRASSMENTS
- ASSUMES
- DEFIES
- DISSIPATES
- IMPOSES
- LAX
- DULNESS
- MANIFESTLY
- MICROSCOPIC
- MINISTERING
- NERVELESS
- VARIANCE
- OMITTING
- COMMONPLACES
- ADJUSTMENTS
- DIAMETRICALLY
- FRAGMENTARY
- MISINTERPRETATION
- PEDDLING
- PELTING
- SHIFTS
- TITULAR
- TARANTELLA
- TWIRLED
- ADOPTS
- OBSTRUCT
- ANTECEDENTS
- FLORIANO
- CONTE
- UNMASKED
- IMPOSTORS
- ENTANGLEMENT
- HARSHLY
- CORRUPTED
- MATERIALISTIC
- TILLING
- STOUGHTON
- BAWLED
- GROOMING
- PRODUCTIVENESS
- FERMENT
- SAUL
- SCRUPULOUS
- HEMISPHERE
- FORTIETH
- FURROWED
- RELIEFS
- HERSCHEL
- REGULARITY
- CONTRACTION
- IRRADIATION
- SELENITES
- ALTERNATIONS
- MOON'S
- GEOLOGICAL
- UNFATHOMABLE
- COMPLEMENT
- RADIATION
- INSUFFICIENCY
- ROTATION
- EVAPORATION
- BEWITCH
- VISIBILITY
- BRAN
- HATCHING
- SWIMMER
- BEWITCHED
- STEADINESS
- NONCHALANCE
- TUGGED
- CANDIDE
- INCAS
- FRETFUL
- PASHAS
- DISTRACTING
- INQUIETUDE
- CITRONS
- PISTACHIO
- UNADULTERATED
- TURK
- JEROBOAM
- SYRACUSE
- DISPUTING
- INQUISITION
- DORADO
- COWARD'S
- SAGAMORE
- ADMONISHED
- TERMINATION
- UNQUESTIONED
- PARTICIPATION
- MOLDED
- HEYWARD
- TOMAHAWK
- REED
- FUMES
- EYEBALLS
- TRANSMISSION
- MONTCALM
- INROADS
- MOHAWKS
- EMISSARIES
- SHARPEN
- ABORIGINES
- BRIGHTEN
- TRINKETS
- APTLY
- COEUR
- SYLLABLES
- TARDY
- CONJUNCTION
- VENERATED
- USURPERS
- DEEMING
- ESCORTING
- PUMPKINS
- GROPED
- TRUNDLED
- EXCELSIOR
- CIRCULARS
- PREMIUMS
- BOOKCASE
- RAPTUROUSLY
- CRINKLED
- LAUNDRY
- ORNAMENTAL
- HYSTERICALLY
- INDULGENT
- SAWYER
- WATSON'S
- CARMINE
- BLOOMS
- GENIALLY
- CORROBORATED
- CONSCIENTIOUSLY
- BLACKSMITH'S
- FIGURING
- LILAC
- ROWENA
- RANDALL
- COSILY
- PICTORIAL
- ELOCUTIONIST
- COUPLET
- MINNIE
- SHAKER
- TERSELY
- BORROW
- PEPPERMINTS
- SAGACIOUSLY
- CULLOUGH
- PRINCIPAL'S
- DIMPLE
- CRAM
- OUTRAGED
- FLUSTERED
- SEASIDE
- FLIRT
- FLIRTING
- HARPER
- MATCHING
- TROOPING
- ROSALIE
- BAGGY
- MERTELLE
- TRENT
- PUNCTUATED
- KOOLLOOB
- ALEPPO
- PERSECUTES
- REQUITAL
- AYOUB
- RIGOUR
- RUSHEED
- INGENUOUS
- HEAPING
- EQUIPAGE
- EUNUCHS
- DEPLORABLE
- AYOUB'S
- SOEVER
- FATALITY
- FETNAH'S
- JEWELLER
- HAZARDING
- INDIGNITIES
- PERSIANS
- SQUEAK
- SLYLY
- SPUTTERED
- SHEEPISHLY
- BUGS
- TANAGER
- REDCOATS
- FIFTIETH
- FORTIES
- FAIRS
- SKIMMER
- ROMANCES
- DELILLE
- MATERIALISM
- EXUBERANCE
- HUSSARS
- CABARET
- BONAPARTIST
- AVENGING
- DECEPTIONS
- MASTODON
- INADMISSIBLE
- CONCORD
- ADORATION
- PYRENEES
- OTTOMAN
- WEDDED
- DUSTED
- SPIDER'S
- SPIDERS
- NIHILO
- ARISTOTLE
- DISCOVERER
- FLOTSAM
- UPROOTED
- SWIRLING
- UNEQUIVOCAL
- CHIME
- SUBTLETY
- CONDEMNING
- INDEMNITY
- FUNDAMENTALLY
- SEDUCTIVE
- UNEGOISTIC
- RETIREMENT
- UNCONDITIONALLY
- SEDUCTION
- GALIANI
- D'EPINAY
- HYBRID
- MASQUERADES
- CLASSICAL
- FLORENTINE
- TRANSCENDENTAL
- DIVINING
- CIVILIZATIONS
- MOORISH
- HALCYON
- COMPULSION
- NAIVETES
- ELEVATIONS
- FORGED
- INSIDIOUS
- PERFECTING
- OVERSPREAD
- ADVOCATES
- DISCLOSES
- PONDEROUSLY
- HELVETIUS
- INSINUATED
- MORALIZING
- PONSONBY
- BEAGLE
- DIEGO
- SQUALLS
- DOMESTICATED
- GUANACO
- CLICKING
- YAWNED
- AWRY
- KEENER
- WALTZING
- CHARTERED
- FUEGIA
- RIO
- PARTAKEN
- PORTUGUESE
- ZOOLOGICAL
- INLETS
- BAYS
- ALPINE
- GOEREE
- PUTREFYING
- TROPICS
- FAGUS
- COMMEMORATION
- MOORLAND
- SLEET
- PUFF
- SURGE
- TUFTS
- LOINS
- TRICKLED
- SUCKLING
- TEMPESTUOUS
- PUTRID
- QUARTERMASTER
- DIALECTS
- UNCEASINGLY
- BACKBONE
- PERU
- EXPLODE
- STEREOTYPED
- INFREQUENTLY
- MOCCASINED
- WHOOPS
- SCENTING
- IMPARTING
- ESTRANGEMENT
- TRACKLESS
- MASTERFUL
- SWISH
- PULSED
- PLAYIN
- MAGNANIMOUSLY
- DUBIOUSLY
- EDWARDS
- MUMMER
- CLODS
- DISSENT
- RELAPSE
- RAREST
- UNPRINCIPLED
- JIGGING
- UNWINKING
- FATUOUS
- RECONCILING
- VICAR'S
- SQUEALING
- WHISK
- WAGGISHNESS
- EMIL'S
- HAMBURG
- NAT
- CAMERON
- FIREFLY
- LURED
- DEMI
- SCRUBBING
- FARING
- SOWED
- TARES
- DILIGENTLY
- THINLY
- BERGMANN
- PENANCE
- COFFINS
- CHEBEC'S
- BREASTED
- FEARLESSNESS
- TINIEST
- SCRAPPER'S
- DRONES
- EYESIGHT
- NOTCH
- BEDCLOTHES
- WOODPECKERS
- WINSOME
- YANK
- CHICKADEE
- KILLY
- SPOOKY
- NESTED
- MORE'S
- STRAWS
- SHAMEFULLY
- FEEDS
- SQUATTED
- RAFTS
- DISASTROUSLY
- FIBRES
- VANCOUVER
- SKATING
- MINK'S
- MOULDED
- MOBILE
- BABY'S
- SWERVING
- RETINA
- PLIANT
- DISCONNECTED
- BRIGHTENS
- UNRESPONSIVE
- SCULPTURES
- MASTERPIECES
- BACCHUS
- ELDARA
- DREW'S
- NOTHIN
- GETTIN
- FODDER
- PA'S
- SPEAKIN
- NARRATOR
- RINGER
- DRIVIN
- PINTO
- LOOKIN
- RELAX
- SLIT
- CLUMSILY
- BLINKED
- FOLLOWER
- TOLLIVER
- BROODED
- EAVESDROPPING
- OUTRIGHT
- WRINGING
- CORNFIELD
- PLENTIFULLY
- BLACKBERRIES
- OUTWITTED
- HOOF
- SPILT
- INJUNCTIONS
- ODOURS
- VIOLINS
- SOUTHWARDS
- QUAILED
- CROAKING
- FOAL
- UNPROPITIOUS
- SPINNET
- PROFICIENCY
- CLEANLINESS
- CONFIRMATION
- RAPTURES
- PERVERSENESS
- SEDENTARY
- RICHARD'S
- CRAVATS
- OCCURRING
- MISCONDUCT
- UNLOOKED
- MILDNESS
- REJECTION
- MISLED
- COMMUNICATIVE
- WHOMSOEVER
- ALLENS
- SUDDENNESS
- SPURNING
- CONJECTURES
- INTIMIDATE
- HEREFORDSHIRE
- WEIGHTED
- MUNIFICENCE
- CHESTNUTS
- CONSERVATORY
- CONQUESTS
- DISCRIMINATIONS
- FREQUENCY
- FIDGETY
- PICCADILLY
- ATHENS
- CONSUMMATION
- INCOMING
- CONFERRING
- SMARTEST
- ANSON
- WILLY
- O'REILLY
- JEWELRY
- ACTRESSES
- PRODIGIES
- SUBSERVIENCE
- TOY
- MULATTO
- CUFFED
- BOXWOOD
- UNMERCIFULLY
- MEDDLING
- PRIMEVAL
- EXPERIMENTED
- GLOATING
- WALTZ
- DUSAK
- GIGGLING
- ROOMFUL
- TOPAZ
- COLOGNE
- WHELP
- DISGUSTINGNESS
- INTERROGATE
- WAYMORE
- RAKE
- DEBAUCHERY
- ESCAPADE
- INDEFINABLE
- COMPLICATION
- LAMENESS
- DEBAUCHERIES
- ROADSTEAD
- DREAMERS
- ATTUNED
- SIAMESE
- CHINAMAN
- GANGWAYS
- CONFINING
- CREVICES
- GRIME
- TURBAN
- ASTERN
- UNBELIEVERS
- VISCOUS
- AWNINGS
- IMMENSITY
- FLICKED
- ENSLAVED
- COLLIDED
- AWASH
- SCORNFUL
- BLOTTING
- FOREPEAK
- METICULOUS
- TRUTH'S
- EDDIED
- VOLUMINOUS
- DRILL
- DRAPED
- USELESSLY
- DELIBERATING
- COURAGEOUSLY
- DOWNRIGHT
- SOLIDARITY
- AGGRIEVED
- OAKBOURNE
- BLIGHTED
- WRAPPING
- EFFULGENT
- OUTGROWTH
- SEARCHINGLY
- SLOMAN'S
- CHATTY
- SPAWN
- TAMMY
- CONDIMENTS
- OZ
- FLAVOURED
- FORCEMEAT
- MALES
- ADHERING
- BIVALVES
- ANCHOVY
- STEW
- LISTLESSLY
- SHIELDING
- MOCKINGLY
- CLIPS
- SPLASHES
- CYCLIST
- DOER
- SURPRISINGLY
- QUARRELED
- JUPP
- UTTERS
- COLLAPSED
- IMPERVIOUS
- MUTTERINGS
- VALERIE
- ENGULF
- SLANTED
- STUTTERED
- LAK
- YAS
- MO
- FARNSWORTH
- UNTIED
- GENTILES
- DOVES
- BAPTIZE
- SCRIBES
- RANSOMED
- HIGHWAYMAN
- TRUE'S
- FRASER
- THRIVE
- CHOPS
- RASPBERRIES
- PELLUCID
- GLEN
- SOJOURNED
- OLIVIA
- SCOFFED
- FURTIVE
- CHIMES
- NETTLES
- DASHWOOD
- INCONSIDERATELY
- HENCEFORWARD
- MILSOM
- INCIVILITY
- RIGOURS
- RANKED
- ARTLESS
- CONSEQUENTIAL
- TRANSPORTS
- EMBELLISH'D
- ACCOUTRED
- PROCLAIM'D
- ATCHIEVEMENTS
- PERSIA
- ITABOD
- SYCOPHANTS
- REMOUNTED
- CATCH'D
- ESQUIRES
- CONVEY'D
- PLAY'D
- REBUFFS
- EXCEPTED
- STRATAGEM
- ARTFULLY
- SABRE
- CHIEFTAINS
- PUSH'D
- MUTES
- SWEATED
- SLUGGARD
- AGGRAVATION
- OPPRESS
- JOT
- BENNET'S
- PIES
- LUCASES
- NOURISHES
- SONNET
- GRACIOUSNESS
- MERYTON
- SOLWAY
- BLINDFOLD
- FORDS
- DECEMBER'S
- MOONLESS
- MATIN
- SALUTES
- PARTICIPATED
- GESTICULATIONS
- LOQUACIOUS
- REMOUNTING
- HAUGHTILY
- WHARTONS
- APPREHEND
- SUSPENSE
- BASKING
- METEOR
- DELL
- ROOTED
- INFINITY
- SHRINKS
- TENURE
- INSECURE
- IMPEDIMENTS
- CONTAGIOUS
- PREVENTION
- SUBSCRIPTIONS
- INTERCHANGE
- CARGOES
- ABODES
- CINNAMON
- SPURNS
- DELLS
- POLLUTED
- WEEPS
- EXPORTS
- OPULENCE
- BEGGARY
- GLORIED
- CONCILIATE
- ABANDONING
- ALLEVIATION
- HURRIES
- REMITTANCES
- AFFORDING
- PENSIONERS
- OLDEN
- INDIGENT
- HOE
- LAVISH
- BREEDS
- VISITATION
- SIGNORS
- TRIMMINGS
- BODED
- LUDOVICO
- JEERINGLY
- ARCHLY
- SEBASTIAN
- SLASHING
- CASEMENT
- EXCELLENZA
- EMILY'S
- APPEASE
- IMPLORE
- FOOL'S
- RAMPARTS
- BASEMENT
- FITZGERALD
- MOY'S
- DRINKER
- MONEYED
- SILVERWARE
- AUGMENTED
- GLASSWARE
- MANAGERIAL
- CASHIER
- TAILORED
- VEST
- DRESSY
- COMMENTARY
- BASK
- SEQUESTERED
- EARED
- SENSORY
- PALAVER
- PUFFED
- JULES
- MAYST
- BLYTH
- WRESTLING
- RANGERS
- SHERWOOD
- JUBILEE
- COWHIDE
- MAYHAP
- DARLINGS
- SCURVY
- FUME
- TAN
- METHINKS
- TWAIN
- TESTILY
- MANCHA
- OWNER'S
- MONTERA
- CARDENIO'S
- COMB
- RELIEVES
- DEPICTED
- FERNANDO'S
- PROTECTORS
- PROSTRATION
- CHERISHING
- KLETKE
- MAIMED
- SIGHTLESS
- DIABOLICAL
- QUAKED
- UNINHABITED
- MAIDSERVANT
- GASCONY
- GERMAIN
- BRAWL
- D'ARTAGNAN'S
- DEVIATED
- SUCCOR
- SORTIE
- DISSOLVE
- INCARCERATION
- DUNGEONS
- GRATINGS
- PRETENSE
- MYSTIFIER
- GLITTERS
- TENACIOUS
- LACKEYS
- TRAITOROUS
- REVERIES
- HOAR
- WHIPPER
- BALUSTRADE
- PEDESTAL
- REOPENED
- EXPOSTULATION
- FOUNDER
- KEYHOLE
- COURTESIED
- SMIRK
- INTANGIBLE
- WIXTED
- PRECIPITATELY
- LAUDANUM
- INCENSED
- DISTORTION
- BIZARRE
- VRAIMENT
- WAT
- BRYERLY
- FEIGN
- PETITE
- WONTED
- ENQUIRIES
- MILLINER
- GARRULOUS
- HOUSEMAID
- LES
- WINCED
- HEIGHTENING
- IDIOM
- SMACK
- ODDITY
- SCALED
- SLUMBERING
- BROOMSTICK
- SEMICIRCLE
- UNCLASP
- VOMIT
- FLINT'S
- CHAFFED
- PURCHASER
- GLAZED
- DIFFIDENT
- BOLDER
- IMPRINTED
- RESOUNDED
- EXTINGUISH
- EVENING'S
- HARANGUED
- SURGED
- MAGNETIZER
- PRICKING
- WAGNER'S
- OPERAS
- SUPERSEDE
- RESPIRATION
- REALISTIC
- INCONSISTENCY
- CHARLATAN
- CHARLATANISM
- OPERATORS
- CLAIRVOYANT
- EATER
- LETHARGY
- ACCOUNTABLE
- LOURDES
- PERSONATED
- STRIKINGLY
- MYLES'S
- ARMORY
- BOYHOOD'S
- GRACED
- COARSER
- CHEERLESS
- MARROW
- PUDDINGS
- SWEETENED
- MINSTRELS
- LOATHE
- MUSTARD
- CROCUSES
- GASCOYNE
- COOK'S
- ARBOR
- SPLINTERING
- PROPRIETRESS
- KINDNESSES
- JAKE
- SOMETIME
- FELLERS
- ORTER
- OLE
- CUM
- MORNIN
- WITHERING
- PICTUR
- TIPTON
- SPEEDING
- JOGGINS
- SYSTEMATIZED
- PATRONIZING
- CHEMISTRY
- VOGUE
- DISTILLATION
- SULPHURIC
- PERPETUATION
- ATTESTED
- SUPPLEMENTED
- PERPETUATED
- UNCHARITABLE
- INEXTRICABLY
- ENSUES
- LOOM
- MESMER
- IMBUED
- KROGER
- HYPNOTHERAPY
- AUTHOR'S
- EXPERIENTIAL
- ATAVISTIC
- KLINE
- CONSTITUTES
- INHERENTLY
- CEREBRUM
- CONDITIONED
- REFLEX
- FOURTHLY
- UNITS
- GLADSTONE
- CATO
- DECORATING
- DRAWINGS
- FIG
- BEAUTIFY
- FULFILLING
- ENLARGE
- UNREALITY
- LOBE
- EMBLEM
- WAGING
- SUREST
- PROGRESSING
- DOLING
- DIVED
- CONTRABAND
- SLANDER
- UNWAVERING
- IRRITATING
- GOSSIPS
- SAUNTERED
- EQUESTRIAN
- EDGING
- DUPLICITY
- PLODDING
- QUARRELLING
- REPROVE
- INNUENDOES
- DERISIVELY
- PASTORAL
- DISPLEASING
- DEFEATING
- SCOURGE
- EUDOXIA
- BEGS
- INVITES
- MAJORIAN
- CARTHAGENA
- HOSPITABLY
- BASILICUS
- CLOVIS
- BULGARIA
- GOTHS
- JUSTIN
- GELIMER
- VITIGES
- GRANDEST
- MANUFACTURES
- MOHAMMEDANS
- ISLAM
- INTENDS
- KHADIJAH
- IDOLS
- GABRIEL
- MOHAMMEDAN
- DIETH
- QUADRANGLE
- BA
- MOSLEM
- NORTHWESTERN
- CEYLON'S
- BENGAL
- LUCRATIVE
- OARSMEN
- APOPLEXY
- WHIMS
- EMPLOYERS
- CAREFREE
- SWISS
- JUNGLES
- NOOSE
- SIRR'S
- TERRAIN
- ORIENTALS
- SOLIDIFIED
- PROTEIN
- TESTACEA
- SAXONY
- SECRETING
- SHELLFISH
- CREATURE'S
- ROTTED
- IMMERSED
- SPHERICAL
- EARRINGS
- ERRATICALLY
- VESTMENTS
- SIEVES
- CLASSIFYING
- HARVESTING
- CARP
- PHILOSOPHICALLY
- UH
- DIDACTIC
- NEMO'S
- HARPOON
- COMPLETENESS
- GRIFFITH'S
- SQUIRMING
- AFIRE
- INTERMINGLED
- MADLY
- STYLISH
- COAXING
- SIDEWALKS
- UNRIGHTEOUS
- UNDRAWN
- PANACEA
- FOURSCORE
- TRIBUNALS
- INCOMPETENT
- NOMINALLY
- MASSACRED
- ABANDONMENT
- BURGLARY
- OBSESSION
- TAMELY
- CONFEDERACIES
- REALIZES
- BIGOTRY
- ZIGZAGS
- DORMITORY
- PULP
- BUNKS
- EYEBOLTS
- METRE
- DEPOTS
- NANSEN
- HICKORY
- TAR
- HUITFELDT
- HOEYER
- ELLEFSEN
- STIFFEST
- ASSORTMENT
- BERGEN
- PENETRATES
- RIME
- SPREADS
- ELABORATELY
- TIRES
- PATENTS
- UPPERS
- MEASUREMENTS
- PRIMUS
- STOCKHOLM
- HORIZONS
- MERCURY
- MAKER'S
- AILMENT
- RUST
- GROCER'S
- NOURISHING
- OATMEAL
- EMBANKMENT
- SPORTSMEN
- UNNUMBERED
- CORNICE
- ERECTING
- COPSE
- RADIATED
- SHRUBBERY
- RENTED
- COOPERS
- PULPITS
- NAMESAKE
- MANSFIELD
- INTUITIVE
- RECOMMENDATIONS
- RETICENCE
- EMBELLISH
- RECEIPTS
- MEAD
- LAMENTABLY
- APPENDAGE
- ENCUMBER
- PERIODICALS
- FASTIDIOUSNESS
- MINUET
- GYRATIONS
- ADDISON
- HORNPIPES
- SUPERINTENDED
- CONCOCTION
- DISTILLING
- LEIGH
- FLAX
- BALLAD
- SPINNING
- SINGSONG
- SPORADIC
- SUBCONSCIOUSLY
- IMPOTENT
- MEDLEY
- STARES
- JOSTLED
- ELBOWED
- ESCORTS
- STRAGGLERS
- LOITERED
- UNOPENED
- CURB
- APPROVING
- NONCHALANTLY
- FASTIDIOUSLY
- MUSINGLY
- UNCERTAINLY
- DIAL
- IMMACULATE
- EMPLOYEES
- HARLEM
- BLUNTLY
- BANKNOTES
- LURCH
- PENITENTIARY
- SPURTING
- HALTINGLY
- VIAL
- ALMONDS
- SWIFTEST
- INSIGNIA
- UNGUARDED
- ORB
- FETISHISM
- HAHN'S
- TROTTER
- SILKY
- AMAZON
- PLAITED
- BALCH
- SECLUDED
- EARSHOT
- MAGICIANS
- MESH
- DEFLECTING
- DUD
- FINGERED
- WHATEVER'S
- CARTER'S
- ANITA'S
- LUNG
- CUBBY
- BUZZER
- CYLINDER
- MATERIALIZED
- MIKO
- IDYLLIC
- FETTERS
- ARCHERY
- JUSTIFYING
- CAPABILITY
- DAVILOW
- TOLERANT
- RAINY
- FLACCID
- EUPHONIOUS
- SHORTNESS
- ANTIQUATED
- FLUFF
- WHIMPERED
- HOLLIS
- GOGOFFS
- ARROWPOINT
- MARQUESS
- LUSH'S
- STINTED
- BLOSSOMED
- CRAMP
- UNINTENTIONAL
- SUBMISSIVELY
- DISCOVERABLE
- ACCOMMODATED
- WOODY
- EQUIP
- BERESFORD
- DEDUCTION
- BYES
- ORBITS
- DRAWBACKS
- WAFTED
- PARISHIONERS
- CLASSICS
- MIDDLETON'S
- NEWEST
- HALE'S
- BELLES
- LESSENING
- WOODLANDS
- THORNTON'S
- CAPTIVATED
- IMPERTINENTLY
- TELEGRAPHIC
- LANCASTER
- SUPERNATURALLY
- IMPUTED
- DENOUNCING
- BRIBES
- GLOSS
- BRUMMAGEM
- IMPOTENCE
- PARADE
- IMPECCABLE
- RELEGATED
- RAKED
- EXPANSIVE
- INSISTENTLY
- LOWDER'S
- FLASHLIGHT
- WINCING
- INVOKED
- INHALING
- BENEVOLENTLY
- APPRECIATING
- PROPOUNDED
- CELEBRITIES
- EXPONENT
- ARENA
- CARESSINGLY
- MARTYRED
- NOSING
- AMERICAN'S
- GRIST
- EXPEDITIOUS
- SPANGLES
- CIVILISED
- ENUNCIATED
- SHIRK
- CEREBRAL
- HANDICAPPED
- CHOP
- SPLICE
- KERCHIEF
- WELLED
- OUTWIT
- HEARTEDNESS
- UNTRAMMELLED
- SCUFFLE
- QUALM
- NARROWEST
- CRANNY
- CHIDING
- BESOTTED
- MASKEW'S
- GIDDINESS
- HUMMOCKY
- APACE
- FRESHENED
- SCARING
- ROOKS
- CLOTTED
- FIRELOCK
- OWLS
- EXCAVATIONS
- STEEPLY
- VESTMENT
- GOOSEBERRY
- GASES
- STRANGLE
- OVERGROWN
- TINDER
- ENIGMATICAL
- GARGANTUA
- DEPRAVED
- MERLIN
- ALLEGORIES
- GOETH
- SAITH
- BATTALION
- REVENGING
- JOAQUIN
- PENNED
- SMOLDERING
- MOORE
- HUTCHINGS
- MALL
- UNDERGRADUATE
- COMPETE
- UNGENEROUS
- COSMOPOLITAN
- INSENSITIVENESS
- JUDICIOUSLY
- ICARUS
- ACCLAIMED
- CRUMPLED
- HUGGING
- HERRING
- GRUEL
- BRAT
- TAILED
- PELAGUEYA
- LEVANT
- CYPRUS
- BURIALS
- ABATED
- WAGGONS
- FRIGHTED
- FOOLHARDY
- CRIPPLEGATE
- CLARKENWELL
- WEALTHIEST
- ROTHERHITHE
- COMPUTED
- ACCUSERS
- DONNER
- TORTURING
- STONED
- BREWING
- PRIVATIONS
- FREMONT
- LATHERED
- CLEANSED
- POPPIES
- AIMLESSLY
- BROWNING
- BLIGHT
- SUTTER'S
- KETTLES
- MASON'S
- WATCHWORDS
- PARCELS
- OBSTRUCTIONS
- GRANDMA'S
- SOLDIERLY
- HOMELIKE
- HONEYS
- PUNCTIONS
- TAKIN
- JAKIE'S
- SONOMA
- HARDWOOD
- MODELLED
- DUCKLINGS
- SOCKETS
- MESHES
- EXUBERANT
- CENTREPIECE
- WIRTHIN
- LADLE
- EMBELLISHED
- HARROWING
- JAKIE
- CASTILIAN
- HORNED
- SUPERINTEND
- EVOLUTIONS
- MONMOUTH
- SANGUINARY
- MURRAY
- CONDOLENCE
- UNQUESTIONABLE
- HAGUE
- UNPLEASING
- TARDILY
- FRUSTRATED
- EVASIONS
- SKELTON
- ORKNEYS
- KIRKWALL
- MISADVENTURE
- REPEL
- DUNSTAFFNAGE
- MANIFESTO
- JOACHIM
- HEREIN
- MUSTERED
- ASCENTS
- VOUCHSAFE
- INIQUITY
- ACHIOR
- BETHULIA
- ISRAELITES
- ADORING
- SYNAGOGUE
- WRECKAGE
- SOLUTIONS
- BATTERING
- TRANSOCEANIC
- QUARTO
- HERALD
- FORMULATE
- REFUTED
- ADMISSIBLE
- GENERA
- CETACEAN
- IRONCLAD
- PROFESSORIAL
- LOOPHOLE
- GAZETTE
- INSURANCE
- ARSENALS
- ARMING
- WAYLAID
- VOCATION
- BOTANICAL
- MANSERVANT
- FLEMISH
- UNSOLICITED
- ENTHUSIAST
- BALEEN
- EMOTIONLESS
- SOCKS
- SKELETONS
- UNPREDICTABLE
- MAMMALS
- CONTAINERS
- FUNNELS
- ACCOMMODATIONS
- SKEPTICISM
- CHURNED
- GAFF
- WHALERS
- MANEUVERED
- SPYGLASSES
- POPULATED
- UNCONCERN
- MOLDY
- TACKLED
- CONTINENTS
- TROPIC
- OPTICAL
- GEARS
- IMMENSENESS
- UNSOLVED
- DISK
- SHROUDS
- PROBING
- MURKY
- SATCHELL
- CHURCHYARD
- AGGRAVATED
- DAB
- BETTERS
- CATECHISM
- PINAFORE
- JAM
- STAN
- IN'T
- SATCHELL'S
- FOLKS'S
- SEATING
- THURLE
- ON'T
- ULL
- CHURN
- INS
- OUTS
- INT
- GALLONS
- SCOURING
- EXPIRES
- HANNA
- CAUSEWAY
- NAME'S
- SIGHTEDNESS
- SOLO
- QUARTET
- CORKED
- ANYBODY'S
- ISOLATE
- EGREGIOUS
- FAVORING
- CIRCUMLOCUTION
- GUTTERS
- PRECLUDED
- ASPIRE
- HECTIC
- BARONETCY
- CORRECTING
- CUTLETS
- SNOWING
- SNOWED
- ACCLAMATION
- NEGLECTS
- DISPERSING
- UNPLEASANTNESS
- BEACONS
- THREATENINGLY
- LANDMARKS
- DIARY
- HELMER
- ROPED
- BATTLEFIELD
- HELL'S
- BOTTOMLESS
- RISKING
- QUILLS
- PORCUPINE
- TENACIOUSLY
- DITTIES
- SHELLEY
- DISPROVE
- PERRAULT
- DISTAFF
- VERGOOSE
- VERTIGOOSE
- GRANDCHILD
- GOOSE'S
- WM
- WHITMORE
- MONOGRAPH
- SUSSEX
- CHARLEMAGNE
- CREATIONS
- JUICY
- WRINKLES
- DEARIE
- BLEATING
- VE
- HOBBLING
- RABBITS
- INTERMITTENTLY
- ARIGHT
- OVERLAPPING
- DOMINATED
- CONTINGENT
- DOMINANT
- CLAMOURING
- INTRUDERS
- DEAFENED
- UPBORNE
- ARCHWAYS
- TOOTHLESS
- SHRIVELLED
- CADENCES
- UNISON
- SPACIOUSNESS
- DISTINCTIVE
- FELSPAR
- FEEDER
- RESTRAINTS
- ELIMINATING
- SPLENDOURS
- INDUCEMENTS
- NOURISHED
- LABOURING
- DIFFERENTIATING
- SLITS
- DISFIGUREMENT
- BARBARIC
- ARCHINGS
- UNLOADED
- POWDERED
- INKY
- THUNDERED
- SHRUG
- LOOMS
- REEKS
- TITANS
- SWARTHY
- HUNCHBACK
- RIDICULOUSLY
- STREWED
- VASES
- ADMONITIONS
- VOCAL
- UNSEEMLY
- CERES
- HOUSEWIFERY
- MILLET
- LENTILS
- FLEECES
- NOONTIDE
- EREBUS
- GROVELLING
- BLISSFUL
- FABLES
- WORSHIPPER
- OLYMPUS
- CENSOR
- MOORE'S
- SALES
- SNACK
- GRUYERE
- ANJOU
- FORTISSIMO
- TAWNY
- BRITTLE
- NIPS
- DOCTORING
- DANZIG
- CARAWAY
- SIP
- TRIBAL
- TILLED
- MAIZE
- MOHAWK
- GOADED
- PROLOGUE
- ALGONQUINS
- VOYAGER
- WANDERERS
- CURBED
- GARRISONS
- DEPLETED
- LOYOLA
- ABSOLUTISM
- HEARKEN
- HEARKENING
- SUPPOSITIONS
- TATTOO
- VEX
- KYUSHU
- WITHSTOOD
- MUGEN
- VERB
- MIMETIC
- BUDDHA
- VALUABLES
- SOLES
- IMAGINATIVELY
- JIKININKI
- DEVOURS
- OBLIGES
- AGREES
- ANJITSU
- ROADSIDE
- SKIRTING
- ATONEMENT
- AQUEDUCT
- GNASHED
- SQUATTING
- FOULED
- MURDERER'S
- COLLEAGUES
- KAI
- JOCOSELY
- TOLERATED
- PARLORS
- PEDDLER
- JEANETTE
- INDISCRIMINATE
- DOWER
- EMPIRIC
- ARRESTING
- SIMPLES
- CHECKING
- DISRESPECT
- PERSEVERED
- LAWTON'S
- PRESUMPTUOUS
- BANDAGES
- COMPLACENT
- EXCEEDS
- VIRGINIANS
- VALE
- DESPONDENCY
- RENOVATED
- EXERTING
- PROFFER
- SOMERSAULT
- RESTORING
- SLIPPER
- PORTIONED
- NICETIES
- WIELD
- LORDSHIPS
- ACCOMPANIMENTS
- GOALS
- CARRASCO
- UNHINGE
- APPERTAINING
- ENAMOURED
- ASCERTAINING
- SLACKEN
- SURNAME
- CAMACHO'S
- TOLEDANS
- CORCHUELO
- DISMOUNT
- DESPISING
- ONSET
- DEVOUTLY
- STRIPS
- CUTTLEFISH
- HILT
- AURORA
- FERVENT
- COUNTERPOISE
- ENLIVEN
- SPICES
- CAPTIVATE
- BASHFUL
- GALA
- LATERALLY
- FANGS
- TRANSCENDING
- VISE
- GLIMPSED
- FROTHING
- OVERWHELMINGLY
- FIGURED
- ERSTWHILE
- BATTLED
- TUSK
- PURGATORY
- GEHENNA
- STRAPPED
- UNMANAGEABLE
- THOAT
- SKULLS
- WARHOONS
- TRANSCENDS
- INSUBORDINATE
- RIPPED
- HERCULEAN
- JAILER
- VICTIM'S
- THOATS
- THRONES
- ENCRUSTED
- DIGNITARIES
- PYGMIES
- JAILERS
- LABYRINTHINE
- LOOT
- HELIUM
- ALARMS
- REAPING
- DUSKILY
- TORCHLIGHT
- LUBBER
- ROPE'S
- BUOY
- RUINATION
- HOSTAGE
- HITCH
- TAIN'T
- RACER
- COOLEST
- BANDAGE
- HAWKINS
- WRIGGLING
- AXES
- SILVER'S
- SMOLLETT
- GAYER
- ONSLAUGHT
- RAVING
- BULWARKS
- SOJOURN
- DEDUCED
- CONDUCE
- EXPOUNDED
- CONSUME
- CONFORMATION
- CORRESPOND
- VENA
- INAPPROPRIATELY
- CANALS
- PELLICLES
- PRECLUDE
- LIGATURE
- SMALLNESS
- PERFORATED
- DISTRIBUTING
- IRRATIONAL
- EMITS
- IDIOTS
- BRAINED
- POSTSCRIPT
- LANDLEAGUERS
- KEPPEL
- RUSSELL
- WYKAMIST
- BARRISTER
- SUICIDAL
- BOARDERS
- SUNBURY
- UNFURNISHED
- FOES
- UNATTRACTIVE
- WEALD
- BOARDER
- COWSHEDS
- JOCUND
- FOLIO
- CANTERBURY
- PRODIGAL
- CEASELESS
- BEDSTEAD
- TUGGING
- ADJUTANT'S
- RHYTHMICALLY
- CHILDLIKE
- CALECHE
- DRESSINGS
- DELIRIOUS
- COCKROACHES
- RUSTLED
- SEQUENCE
- ENJOIN
- SPLINTERS
- BOLKONSKI
- BROWNIE'S
- REFRIGERATOR
- BEAVER'S
- FATTY
- COON
- WOODCHUCK
- FELLING
- SWUM
- SPECULATOR
- CURRER
- ALTHESA
- CHATTEL
- DEM
- TOBIAS
- LOUISVILLE
- SLAVE'S
- WASHER
- BATON
- ROUGE
- FETTERED
- E'ER
- PEERLESS
- REND
- FLAY
- BEGRUDGED
- LAVRUSHKA
- BELTS
- LIBERATED
- MATTING
- BROADSHEETS
- COUSINE
- IVANOVNA
- MISINFORMED
- IRRESOLUTION
- JULIE
- VORONTSOVO
- LEPPICH
- COURIER
- FLOGGING
- OUTPOST
- STEADFASTLY
- REJECTING
- INTERFERING
- EFFUSION
- REQUESTING
- DISMISSION
- ASSIDUOUS
- SHORTEN
- WICKHAM
- INSPIRES
- MISLEADING
- ADMIRES
- DISOBLIGING
- CAROLINE'S
- LAMENTING
- IRKSOME
- PRESERVATIVE
- OFFENDING
- CIVILITIES
- SYNCOPATION
- TEMPO
- QUADRUPLE
- TEMPI
- QUINTUPLE
- SPICK
- CARPETS
- SERGE
- DIGESTIVE
- RECITATIONS
- DANCER
- ANTIDOTE
- SWISHING
- TRUTHFULLY
- ASLANT
- BEHRING
- COMIN
- COMMONEST
- SHRUB
- OBSTRUCTED
- BIRTHPLACE
- KEECHAWIK
- LETTUCE
- YUKON
- YAHKUK
- APENNINES
- METEOROLOGICAL
- BIRMINGHAM
- HURRICANES
- ASSERTING
- NIXON
- STICKLEBACKS
- GRAY'S
- AARON
- ACCEPTANCES
- ATTRIBUTABLE
- MORADABAD
- FERIDPOOR
- DISREGARDING
- TORRENTIAL
- PUTREFY
- ATMOSPHERIC
- BUIST
- HELTER
- SKELTER
- HINDON
- ACCURSED
- TORNADO
- SPRINKLE
- SPECIFY
- THEORETICALLY
- ACCEPTS
- CORRELATION
- EQUIVALENCE
- IRRESPECTIVE
- NEWTONIAN
- HARMONIZES
- SARGASSO
- IMMATURE
- STAGNATION
- HATFUL
- APPROXIMATION
- INVESTIGATOR
- WETTER
- CONCEIVABLY
- SNATCHING
- WINGLESS
- FRILLS
- SAUSAGE
- GANGS
- CHECKER
- REGISTERING
- COPYING
- PLUGGING
- WRITER'S
- OVERTIME
- WALLOWED
- SALARIES
- PHILANTHROPIST
- REFRESHMENTS
- TRANCE
- SCALPS
- PONE
- PUYA
- REFILL
- UNGRATEFULLY
- COMBINES
- ORGANIZATIONS
- HUB
- RAIDERS
- TERRORISM
- MANAGEABLE
- PERSONNEL
- COHEN
- WHITCOMB
- ELLA
- FINDEISEN
- KENT
- GRAM
- QUAY
- ROBERTSON
- COLORADO
- INDIANAPOLIS
- STAFFORD
- EMORY
- JACKSONVILLE
- ILLOGICALLY
- SUFFRAGISTS
- HINTING
- SUFFRAGER
- WHITTAKER'S
- COMMITMENT
- OUTSTRIP
- BLACKEST
- DOROTHY
- BRUTALLY
- BANGING
- CLANGING
- SKIMMED
- COGNIZANT
- MAINMAST
- CHEAPER
- DISTEMPERED
- STRAITENED
- PALLET
- ABATING
- INVIOLABLE
- PRACTITIONERS
- BRAWNY
- PUG
- QUELLED
- QUELL
- SESSIONS
- BLOODED
- ALLOY
- AMUCK
- BROWED
- MARINER
- BEDRAGGLED
- BUCCANEERING
- PROVOKING
- OVERT
- RIGGING
- PILLAGE
- UNSUSPECTING
- LIFELIKE
- FANNED
- SENSUAL
- TORMENTING
- HOVER
- DALES
- MISUNDERSTANDINGS
- MANIFOLD
- ABOLISHING
- INFLEXIBLY
- INSERTING
- DITHYRAMBIC
- EVANESCENT
- INNERMOST
- HARSHEST
- WEDLOCK
- PROFOUNDEST
- WITTY
- COMPRISING
- SORCERER
- REASSUMED
- ENTICE
- ACME
- UNENDING
- UNATTAINABLE
- MADONNA
- YEARNED
- POETICALLY
- BRADEMAGUS
- SMITTEN
- RESCUING
- GAWAIN
- CONSIGNED
- TURRETS
- PLEASANTRY
- ENFEEBLED
- DISGRACEFULLY
- SKILFUL
- JEER
- FOOTNOTE
- MIEN
- OVERTHROWN
- UNFASTENED
- VASSAL
- PECHEUR
- SOLICIT
- KINSMEN
- DINSMORE'S
- ENCOURAGINGLY
- FORGAVE
- ROSELANDS
- TEMPERS
- SLOVENLY
- UNFAVORABLE
- STEVENS
- SAFEGUARD
- TAPERS
- EVASION
- PORTAL
- WHIRLPOOL
- GLINT
- DULLY
- MING'S
- DIKE
- DRILLING
- PHOSPHORESCENT
- SENSED
- EERY
- VALLEY'S
- VOMITED
- CUBES
- GEOMETRIC
- PRODIGY
- FLEXING
- SICKENED
- FLAIL
- TOLL
- TWOS
- TRIPOD
- APEX
- TENTACLE
- PLAYFULNESS
- DRAKE'S
- CLUSTERED
- TOTTERED
- DAZEDLY
- HIMALAYAS
- SURGES
- NUDITY
- PITEOUSNESS
- BLOSSOMING
- RECOILED
- CLEFT
- RIFT
- PURGED
- PORTRESS
- SOFAS
- PENDENT
- INVENTORY
- SHRUBBERIES
- WARILY
- ANNUM
- EVINCED
- SELFSAME
- BLUEBERRY
- LOAM
- UNTIE
- BURROWING
- KETCH
- DRAM
- GENTLER
- SPIRAL
- HAW
- WHITTLED
- SCUDDING
- GROOVE
- CAREERING
- KINGSTOWN
- MOTORING
- COVERTLY
- PIANIST
- DEFT
- JIMMY'S
- NUDGES
- INHERITOR
- FREAK
- LORDLY
- MOTORISTS
- SNORTING
- SEGOUIN'S
- GRAFTON
- TWINED
- ENGLISHMAN'S
- SIGNIFICANTLY
- DEVISING
- BUNDLED
- HOPPERSON
- DAFT
- LEVITY
- CURMUDGEON
- GAUGES
- BOOKSTORE
- EVERLASTINGLY
- PRAISING
- SELLERS
- HICKS
- STAMPS
- PANTALOONS
- WARDROBES
- INJUNS
- SINFUL
- QUACK
- ABSOLUTION
- FRANCESCO
- FEVERED
- VIGILS
- ALTERING
- RENEWING
- FESTIVALS
- FLATTERERS
- UNALTERED
- SPRINGFIELD
- FORTE
- ACCOMMODATING
- BACKERS
- FASTEST
- SUPERINTENDENTS
- ORD
- PLATTE
- PHERSON
- STAGER
- HECKSHER
- GORDON
- BREVOORT
- BELMONT
- BACKWOODS
- EMPHATIC
- LOOK'D
- TENET
- WHIMSICALITY
- DIDIUS
- TRIBONIUS
- WARMEST
- NEPHEW'S
- JOSTLING
- SKIRMISHING
- CONTRARIWISE
- INTERROGATIVELY
- FURBISH'D
- POPISH
- UNGRACIOUS
- AIDING
- TARNISH'D
- COCKADE
- THONG
- TASSEL
- MARCH'D
- CELIBACY
- DECLAMATION
- PHIPPS
- MANUFACTORY
- LANCERS
- ANGLING
- CONSIDERS
- PIMPLES
- SATIRICAL
- CLERICAL
- PHERORAS
- COHORTS
- FORTRESSES
- SAMARIA
- OVERRAN
- CONQUERORS
- MIDLAND
- ROBBERIES
- EXHORTATIONS
- THREATENINGS
- SPOILING
- FRIGID
- ARS
- BIRTHRIGHT
- TRIUMVIRATE
- FOULNESS
- GIVEST
- CONTRITION
- UNDERSTANDINGS
- IDOLATORS
- BAPTIZED
- SACRAMENT
- LESCAUT
- DISTRACTIONS
- FRASCATI
- ISOLATING
- GAMBLER
- MARLY
- CLOSES
- COURTESAN
- SEMICIRCULAR
- JOUR
- OSPREY
- NIGHTMARE
- ERST
- THWART
- SULKILY
- GASHED
- GRIMES
- YE'LL
- OFFENCES
- LEVELLED
- MAINLAND
- COMMANDANT
- SCOOPED
- ABYSMAL
- GIRDING
- GORGED
- RETAKEN
- JEM
- SHORTENED
- DUDS
- FORGING
- BUILDER
- DOZE
- OLIVER'S
- CONTENDING
- RIP
- TROLLING
- FLEDGED
- STALWART
- SLACKENING
- BUSHEL
- BREAD'S
- ARID
- INVIGORATED
- SURLY
- EXPENDING
- ENSIGN
- CONNECTICUT
- SUCCESSORS
- TESTIFIES
- CLEAREST
- WOODBURY
- CONGREGATIONAL
- SALISBURY
- GATHERINGS
- HOUSATONIC
- OUTCASTS
- KINSHIP
- VERMIN
- BROADLY
- UNDREAMED
- DYNASTIES
- SIMIANS
- UNLOVELY
- FLIGHTY
- OPERATIVE
- CONVERSELY
- INITIATIVE
- SPEECHLESS
- BANQUETS
- EXTERMINATE
- PUSHES
- APPALACHE
- WEDGES
- HATCHET
- CANAVERAL
- REEFS
- OTTIGNY
- OUTINA'S
- CONJURER
- WELLNIGH
- HOWLINGS
- OUTINA
- TRIBESMEN
- EXULTED
- HOMESICK
- CLEAVE
- COLIGNY
- RIPEN
- YELLS
- BOROUGH
- JOURNEYMAN
- CHILTERN
- NEEDY
- LENDER
- INSERT
- EXPECTANT
- BUCKLED
- DRESDEN
- ASCENDANT
- UNITE
- ILLNESSES
- UNRESERVE
- FRIENDLINESS
- FORETELL
- CLEVERER
- BOASTS
- RANDALLS
- WOODHOUSE'S
- PREDICAMENT
- COMMUNICATIONS
- FIRESIDE
- LOREEN'S
- DISTRUSTED
- ROLLIN'S
- ARNOLD
- REDEEM
- CONSTITUENCY
- DISPLAYS
- INCREDULOUSLY
- DESPAIRINGLY
- RUEFULLY
- CHEAPENED
- MAUSOLEUM
- GRAFTER
- STEALS
- GROCERS
- PLUTORIA
- SLUGGISHNESS
- LAWLESSNESS
- O'HOOLIGAN
- GRATH
- INGRATIATING
- CENTRES
- DAVIDSON
- CLERK'S
- DISCERNED
- UNWELL
- PURPORT
- CLOSETED
- AGHAST
- ESCUTCHEON
- KICKS
- DEMONIAC
- BUFFETED
- JUVENILE
- PEA
- RUEFUL
- HARE'S
- SCORNING
- VASTLY
- CONVULSIONS
- AFY
- JOYCE'S
- YEARN
- DERANGED
- LEGIBLE
- PENS
- PENKNIFE
- CURIOSITIES
- SULPHUR
- COMPRESSION
- MERCEDES
- TRANSPIRE
- SPOTLESS
- DISPOSITIONS
- GUILELESS
- SMOKESTACK
- RAPPING
- JACKKNIFE
- SANCTUM
- RELAPSED
- SIGNALING
- KNOCKOUT
- STOUTNESS
- LAGGED
- BRIGHTON
- WHIN
- POSITIVENESS
- EVERETT
- SHRAPNEL
- POISONOUS
- BOCHES
- FATEFUL
- TRUCKS
- IRREGULARLY
- BRIGADES
- SPASMODIC
- AIRCRAFT
- RIGHTED
- ENCIRCLE
- MOMENTUM
- JUBILATION
- SUBMARINE
- AMIABLY
- PERMEATED
- DEBUTANTES
- ELABORATED
- MORNINGS
- NATALIE
- EVADED
- BARRACK
- CLUSTERING
- HIVES
- WISTFULLY
- BATE
- COPS
- COP
- FINES
- YARNS
- WEAL
- SCRUPULOUSLY
- TRAFFORDS
- EGREMONT
- SCANNING
- INCONVENIENT
- ABSTRACTED
- DUPES
- ASSEMBLIES
- CLAMOROUS
- DEGRADE
- TERRIFY
- OPPRESSORS
- ERMINTRUDE
- JOLLIGINKI
- THICKEST
- POLYNESIA
- DOLITTLE
- CONCIERGERIE
- UNCONNECTED
- ACCEPTATION
- LUCIE
- CITIZEN'S
- THOUGHTLESSNESS
- ASPHYXIATED
- BORINGS
- ICEBERG
- WATERLINE
- STAKED
- SUPERVISED
- WIELDED
- SUPERVISING
- INJECTED
- SCARCER
- SUSTAINING
- PRAISEWORTHY
- NAVIGATORS
- PREDICTS
- NAVIGATED
- BULB
- MIXING
- SPECIFICALLY
- DECIMETERS
- MARVELED
- JELLYFISH
- FESTOONS
- DANGLE
- FERRETING
- APPALLED
- PHRIXUS
- DETHRONED
- CHIRON
- CADMUS
- COXCOMB
- ENCHANTRESS
- SHRIVELS
- GRIPE
- ASSAILS
- GREENSWARD
- BROADCAST
- RIPENED
- CLASHING
- HEWING
- SIMPLETONS
- KINGLY
- ENCHANTRESSES
- FORBIDS
- LOWING
- SNOUTS
- ANTELOPE
- SHATTERING
- BEGONE
- EMBARK
- GRENADES
- CONCENTRATING
- WEAKEST
- UNTENABLE
- INSCRIPTIONS
- VIRGIL
- AMPLIFICATION
- PROGRESSIVES
- PENDING
- CHESS
- CONSERVATION
- AMEND
- RHODE
- MARSHALING
- LETTERED
- DENT
- LIBERATE
- APPOINTEE
- REPORTERS
- FLAUNTED
- BULLETIN
- CONCLUDES
- POLITICALLY
- GRAZED
- DETECTIVES
- SUMMED
- DELEGATION
- FLAMBEAU
- STATUARY
- BROODS
- COSY
- REPENTED
- RITUAL
- FANTASTICALLY
- ADJURATION
- UNLOADING
- BAZAAR
- UNWRAPPED
- COMPANIONABLE
- FINANCIER
- DISARMING
- SPURTED
- FRAYED
- FLORIAN
- GODCHILD
- ANONYMOUS
- SMASHING
- BELVANE'S
- CORONEL'S
- MAIDENLY
- ARMOURER
- ARMOURER'S
- ARCHITECT
- HYACINTH'S
- ELVIRA
- HUMOUREDLY
- COMPUNCTION
- DUGALD
- PRISCILLA'S
- AFGHAN
- BUTTONING
- GASLIGHT
- BRUNWALDE
- SERENADING
- TALENTED
- HANSOM
- JERVIS
- DEPRECIATION
- PREMISE
- DUPLICATE
- CARROZZA
- CABMAN
- UNIFORMED
- WHEREAT
- UNABASHED
- RHAPSODIES
- SPANKING
- STEEDS
- PITIFULLY
- OPPORTUNELY
- TOURIST
- CANVASS
- BRADLEY
- POPPY
- NICKLE
- REEVES'S
- GLEEFULLY
- REEVES
- WAILS
- STOW
- CURING
- SIPPING
- COUNTRYFIED
- REVELED
- MESSY
- BEDROOMS
- CROCK
- SNAPS
- PRIMLY
- GILLIS
- WHITE'S
- SAM'S
- TASSELS
- MAMIE
- TUMBLERFULS
- WATERED
- DETERMINEDLY
- DIZZILY
- DIANA'S
- THOMAS'S
- SERE
- INTOXICATE
- IRRITATE
- UNHOLY
- PESTERING
- OUTSPOKEN
- WILDFIRE
- WHEELBARROWS
- VEERED
- GERANIUM
- PRETTIER
- CHUM
- CANONIZED
- CLAUDE
- BICYCLES
- UNKLE
- CLAIR'S
- REPRODUCE
- WILLIE
- IRVING
- DONNELL
- OFTENEST
- MOONGLADE
- SCUTTLE
- HIRAM
- PRILLIE
- ANGERED
- INSOLENTLY
- ANTHONY'S
- REPENTANT
- HUMILIATIONS
- COMRADESHIP
- CONTINGENCY
- QUIETNESS
- CAMPFIRE
- HENDRY'S
- KNOCKER
- SWARMS
- ANTICIPATING
- MARM
- TANTRUMS
- GRAFTIN
- AIRTH
- ETARNAL
- ABED
- GRAINED
- GRENADIERS
- WHOPPER
- KNOWLES
- WHARTON'S
- REVERSAL
- PROFESSING
- ATTACHES
- MANFULLY
- RATTLERS
- ROBYS
- UNBLUSHING
- INSEPARABLE
- SECURES
- ABSOLVE
- OMNIUM
- AUSPICES
- DROUGHT
- SELECTIONS
- SILVERBRIDGE
- LUCKIEST
- RECEPTIONS
- PICNICS
- FEATHERLESS
- ROBINS
- CATCHER
- RESENTING
- ENTICING
- TUMBLERS
- WINKS
- TORMENTOR
- PERCHING
- TAGGING
- FLAPPING
- LAPPED
- PLUCKY
- SCRUBBED
- GOLDENRODS
- RAVINES
- LIMESTONE
- ADVISERS
- MOUNTAINEERING
- CONE
- ARMPITS
- FATIGUING
- AVALANCHES
- WALLOWING
- HOLLOWED
- DIVERSIFIED
- SIFTING
- NOTEBOOK
- DEPOSITION
- BOSSY
- MATTED
- FLUFFY
- LASSEN'S
- BOSSES
- ENCIRCLING
- DISSOLVING
- FABRICS
- FUMAROLES
- SCALDING
- FINENESS
- COMPELS
- WREATHING
- NORTHEASTERLY
- ENABLING
- VAUNTED
- MORMON
- ZION
- TAINTED
- SAUNTERING
- SAGEY
- COMPARABLE
- LARKS
- SEDIMENTS
- ERYTHRONIUMS
- FRITILLARIAS
- BATTLEMENTS
- ERYTHRONIUM
- SHOWY
- AGLOW
- BULBS
- ATROPURPUREA
- TULIPS
- MORMONS
- GRASSHOPPERS
- SUBSISTED
- GRANDDAUGHTER
- TRANSPLANTED
- COMPOSITORS
- UNWARRANTED
- LOGGING
- BREAKAGE
- PROSPECTOR
- ASPIRING
- BOULEVARDS
- LOGGERS
- ATTRACTIONS
- CORDUROY
- PLUNGES
- HUNT'S
- PARTICLE
- HALLOWED
- INFIDELS
- HAULING
- DAUNTED
- NARRAGANSETT
- HOOPING
- FILTHY
- PSALMIST
- PARCHING
- BEAR'S
- UNSATISFIED
- SAVORY
- TRUMPERY
- PORTUGUEZ
- IRONS
- FORD
- ALLIGATORS
- RESURRECTED
- IMPECUNIOUS
- PLYING
- FURROW
- INSTANT'S
- QUOTATION
- DREARILY
- OMENS
- WA'N'T
- DERNED
- SNAPPY
- JADED
- JACKASSES
- FALK'S
- HURTING
- DOWRY
- DORCAS
- CRESTFALLEN
- SHOEMAKERS
- PRINTERS
- LEVIN'S
- COQUETTISH
- FALANDER'S
- REHNHJELM'S
- RIPPLED
- DISCONCERTED
- COUP
- HJALMAR
- STIPEND
- RENTING
- LEAKING
- SKETCHING
- DIRECTORY
- ADJUNCT
- PROUDER
- EVE'S
- LASCIVIOUS
- BEAUTY'S
- RETENTION
- HERETIC
- ARIZONA
- QUARTZ
- SNUFFED
- INCARNATION
- RADIUS
- IMBUE
- ARMLET
- WITHDRAWING
- ROSTRUM
- BULKS
- DENIZENS
- CAPTOR
- PERFUNCTORY
- PEALS
- JUMPS
- SENSATIONALISM
- LAIR
- BRIGANDAGE
- ENVIRONMENTS
- RECLINED
- BEAUT
- DIVERSIONS
- BANKING
- SOR
- CUDDLE
- MAVOURNEEN
- BREAKIN
- LIVELIHOOD
- SUPREMELY
- HOBOKEN
- FLYAWAY
- FONT
- CHRISTENING
- THREES
- SANNA
- RAVEN'S
- SHOVELS
- SENTENCED
- SALARIED
- ESSAYS
- NOTHINGS
- DRONE
- PEPPERLEIGH
- HAREM
- DROOPED
- MOP
- LENDS
- NEWSPACKET
- JEFFERSON'S
- STROPPED
- EARNINGS
- SCRIP
- JAMMED
- NIPPEWA
- TULIP
- PROSPECTUS
- NETLEY'S
- CAPITALISTS
- CORONA
- INCOMPETENCE
- INSURRECTOS
- RECLAIMED
- DIRECTORS
- FRAUDS
- RECONSTRUCTED
- MABEUF
- TREMBLES
- REVERTED
- SHAKSPEARE
- HIVE
- NUMBERING
- SUICIDES
- FASTING
- VOLUPTUOUSNESS
- EXPIRATION
- PLENITUDE
- THINKERS
- HYDRA
- FICTIONS
- AMPHICTYONS
- SOVEREIGNTIES
- 'NULL'
- CONSCIENCES
- DISMEMBERMENT
- DISPLACE
- EVOKES
- BRUJON
- EPONINE'S
- SPECTRE
- PLUMET
- PARDI
- ICES
- PROUVAIRES
- WHISPERINGS
- VIBRATION
- PROFILES
- ENGULFED
- PONTMERCY
- WIDENING
- UNANSWERED
- LAGGING
- INIQUITOUS
- DEFENDER
- AMBIORIX
- ARTEVELDE
- VIOLATES
- SUFFICES
- PHILIPPE
- REPLACES
- SUPPRESSES
- REASSURING
- SUNSHINY
- GLEESON
- GOODBYE
- UNTRUTH
- MENDER
- EMANUEL
- DIME
- STORIED
- POINTSMAN
- CAUSATION
- WELD
- WELLINGTON
- INFERENCE
- INDUCTION
- HAMILTON'S
- REVIEWING
- RELATIVITY
- REQUISITES
- NOMINAL
- COLLECTIVE
- LOCALLY
- SUCCESSIVELY
- APPETISING
- MANFRED
- STOREY
- SNUB
- SERVILE
- APPLAUDED
- REFINEMENTS
- CURRICULUM
- SUBJUGATE
- TENTHS
- DISDAINFULLY
- PAROXYSM
- DOMINATING
- SWIMS
- LEVELED
- CRESTS
- SIGNIFYING
- DISTINGUISHES
- NEPONSET
- WENDELL
- SUFFOLK
- RESOLVES
- VICISSITUDES
- NURTURED
- RUSKIN'S
- CURSORY
- SOMERVILLE
- LYNDEBORO
- ROLLICKING
- DOMINATE
- MISCELLANEOUS
- TOILSOME
- TOLLING
- DELINQUENTS
- EXACTNESS
- TITHING
- IDOLATRY
- HYMNS
- STIRS
- DEPRESSING
- PEOPLED
- SEVERITIES
- STAMINA
- COCKADOODLE
- PECKED
- BANTAM
- RIPPLING
- BIDDY
- CROAK
- POUNCED
- MATERNAL
- MINT
- DISOBEYING
- SNOWFLAKES
- BLOT'S
- LEAFLESS
- HAWTHORN
- TEASED
- BUN
- VILLAINOUS
- PROMENADE
- KNOTTED
- PAGODA
- CURLING
- EATABLE
- SOBERED
- LAZARETTE
- CHAFING
- PINION
- WEIGHTY
- STAVE
- MANGLED
- SWAMPED
- SABLE
- FOREMAST
- STAVES
- FENDERS
- RANSACKED
- DAZZLE
- WHIRLPOOLS
- INFERRED
- CELEBRATING
- CAMEO
- BIOGRAPHICAL
- HISTORY'S
- GERVINUS
- INTERACTIONS
- EXECUTIONS
- DECORATION
- FOREHEADS
- PARISHIONER
- GOERS
- TAKU
- SOUTHWEST
- NAVIGABLE
- HONEYSUCKLE
- DELTA
- FORESTED
- ABOUNDING
- NORTHEASTERN
- TERRACES
- HEMLOCK
- SPIRES
- HUCKLEBERRIES
- NOTCHES
- MUIR
- FORDWITCH
- DRAWBACK
- INFLICTION
- TOLLER'S
- FOREWARNED
- DEVILISH
- SIXES
- SEVENS
- FAVOURING
- SUPERBLY
- TURBID
- ASSOCIATING
- REMBRANDT
- BLOCKING
- RECONNOITRED
- TROOPERS
- CAPRON'S
- GAUDY
- KEENEST
- KRAG
- DISTILLERY
- ROWLAND
- UPHILL
- CANTEEN
- GAUZE
- STORMING
- DRIFTWOOD
- HEARTSICK
- OLAF
- HARALD
- GREENLANDERS
- GUDRID
- VARIEGATED
- AFFABLY
- FOREARMED
- CROPPERS
- SLATES
- STEELY
- SAPLINGS
- DUCE
- PRIESTLY
- STIPULATION
- PULL'D
- PUZZLE
- NOAH'S
- HARM'S
- POO
- ACCOUTREMENTS
- CHRONOLOGY
- PLOUGHS
- MINERALS
- MARLBOROUGH
- BELBURG
- KERPENORD
- KALSAKEN
- NEWDORF
- LANDENBOURG
- MILDENHEIM
- ELCHINGEN
- GINGEN
- BALMERCHOFFEN
- DEFENCES
- SCHWARTZ
- HAPPEN'D
- LANDEN
- INFUSING
- CLAMBERING
- STAG'S
- DMITRI
- BUZZED
- STEPPE
- TRUMP
- SEDATE
- GODSON
- GROZNOE
- DEALERS
- HAYTERSBANK
- HAMMERS
- WHALING
- COULSON'S
- NEWCASSEL
- PRENTICE
- FAILINGS
- URANIA
- CONJURE
- REID
- ABACK
- YO'RE
- NOAN
- CHARITABLY
- FOSTER'S
- INSOLVENCY
- FORESTALLED
- CULMINATING
- MAR
- LADDIE
- LUCK'S
- THOU'RT
- MOLLIFIED
- BRONZED
- MAK
- THOU'S
- BARABBAS
- JOSEPHUS
- TALMUD
- CORRECTIONS
- SCRIBE
- EGYPTIAN
- LENDING
- WASTEFUL
- DISCLOSURE
- TOL
- WHOA
- ETHEL'S
- CHERUBIM
- CUPIDS
- PELL
- MELL
- UNCHAINED
- SHAMELESSNESS
- SATURNALIA
- CUCKOO
- LANDAU
- APOTHEOSIS
- TRIUMPHAL
- ENTICED
- CONFRONTS
- TINSEL
- PREFECTURE
- CANDOR
- MIRTHFUL
- SUPPOSES
- POTTERY
- BASQUE
- DESSERT
- CAJOLE
- WEIGHS
- ROBESPIERRE
- POMUM
- IMPASSIONED
- MANHATTAN
- RATAPLAN
- MUSCOVADO
- GRANDISSIMO
- BASTINADO
- RENTAL
- EMERALDS
- CREME
- COMMONERS
- FACADE
- SATINS
- HAPLY
- SHEDDING
- PHRONY
- TURNT
- HISSE'F
- INTER
- F'UM
- MO'NFUL
- GOO
- SWOOP
- BLUFFS
- COPPERAS
- CAPTING
- PADDLED
- HEADWAY
- SOWERBY'S
- PREBENDARY
- RECURRING
- PERQUISITES
- JAUNTY
- ENTRAPPED
- APPERTAIN
- EARNS
- CHARIOTEER
- GRISELDA
- CORNISH
- CLERGYMEN
- RAIMENT
- DEANERY
- DELECTUS
- SAVOURED
- GUAVA
- UNLOAD
- INCUMBENT
- DISPENSED
- PLATOON
- INSURGENT
- LYNCH
- SIMONIANS
- MEMOIRS
- UNAIDED
- AUDACIOUS
- ABOUND
- INTERMITTENCES
- MISCARRIED
- ENJOLRAS
- ROLAND
- USHER
- ORDNANCE
- BOSSUET
- CARTOUCHES
- SHARPSHOOTERS
- PECKING
- GAMIN
- URCHIN
- PYGMY
- CROPOLE
- CORTEGE
- FLAMBEAUX
- PANOPLY
- VIVE
- ROI
- PITTRINO
- FATIGUES
- PIERSON
- SORORITY
- MISDEMEANOR
- HIPPY
- APPALLINGLY
- TEAMS
- SPRINTING
- NETTED
- SWOOPED
- EUPHRASIE
- MOIRE
- WATERWORKS
- STRASBURG
- INSIPID
- TROY
- ACHILLES
- HECTOR
- HEW
- NESTOR
- BYGONE
- CAROUSE
- CABBAGES
- BOURGEOISE
- BALLET
- MOUNTEBANK
- SYLPHS
- SWANS
- CAREFULNESS
- EXAMINES
- URINE
- DIFFERENTIATION
- ANALYZING
- SCHEMATIC
- RECOGNIZABLE
- APPENDIX
- GROWTHS
- NEURASTHENIC
- HYGIENE
- REACT
- UNDERMINE
- OVERBUSY
- PLEASANTNESS
- MASONIC
- MASONS
- OUTWARDS
- VICTORIANS
- ENUMERATED
- REFORMS
- PLACARDS
- TRANQUILLY
- INCALCULABLE
- COMMUNISM
- MABEL'S
- FELSENBURGH'S
- BENNINSCHEIN
- UNIMAGINABLE
- PRECEDENTS
- BEGAT
- GOADING
- PROGRAMME
- GULLY
- LICENCES
- BULLOCK
- GROG
- ALLSORTS
- MISCONCEPTION
- SEMITES
- DEFILED
- MENTIONS
- CHILDBIRTH
- SECLUDE
- COMPARES
- SECLUDING
- EMANATE
- ZEALAND
- TABOOS
- DURKHEIM
- PUBERTY
- REAPPEARS
- CHRONICALLY
- BOAS
- MEDIAEVAL
- SURVIVES
- VATS
- ABDOMINAL
- ORCHESTRAL
- TUNED
- REFERS
- PRIESTESS
- PROPHETESS
- SCOTCHWOMAN
- ROMANINOV
- BUSIER
- MASQUERADING
- ADORNMENT
- MOOR
- SHADOWING
- BOATMEN
- GINTLEMAN
- ILLUSTRATIONS
- QUALIFICATION
- PICOTTE
- CURTIS
- UNEQUALLED
- COOLIDGE
- HELPERS
- SAC
- BONNEY
- INTERCOLLEGIATE
- JOURNALISTS
- KENZIE
- CAVIL
- AUBANUS
- BOHEMUS
- HEALTHFUL
- ORACLES
- AESCULAPIUS
- DEITY
- COMMENTATOR
- MOUNTEBANKS
- DISAGREEING
- MEDICORUM
- GUSTY
- MOCCASIN
- ACORN
- NICOLAS
- ROCKETS
- HOGARTH
- UNRESTRICTED
- INQUEST
- FEMININITY
- CURSING
- IBRAHIM
- GOODWILL
- SEEKETH
- GIVER
- SCONES
- BLEMISH
- ASKER
- MAKETH
- BAZAR
- TRENCHER
- HADDEST
- BURDENSOME
- PURVEYORS
- LIABILITIES
- BOUNTIES
- DISHONOUR
- HARDEN
- STRUTTING
- PU
- MAUDIE
- MENACINGLY
- SERPENTS
- REALISATION
- WERMIN
- OLLAYS
- EAMES'S
- PRECAUTIONARY
- BESEECHINGLY
- VERIFY
- DIVIDES
- EMPHASIZING
- VIBRATED
- GIGGLE
- REHEARSAL
- TYRRELL
- SQUALL
- GRUNDY
- SPLEEN
- AGROUND
- STRAYING
- PULSATING
- COWERING
- IMMEASURABLY
- VILLEFORT'S
- TIPTOE
- AFFABILITY
- LAMPREYS
- IMPUDENTLY
- CICERONE
- HONORABLY
- AISY
- WHAT'LL
- GI
- JABERS
- DREAMER'S
- PARTAKES
- HARRELSTEIN
- VOLUPTUOUS
- INTERCEPTED
- WEISHAUPT'S
- WEISHAUPT
- PRUDES
- GUESSES
- SCRUTINIZING
- SERVER
- COGITATIONS
- TWINKLED
- LOANED
- GARAGE
- GREENE
- RODDY
- CALLER
- DISINCLINATION
- SCHOOLMATE
- OUTSIDER
- JACK'S
- HAZEL
- SWASH
- FANNING
- FAUSTUS
- CHEMIST
- ASTRONOMER
- ABLAZE
- SUP
- BABA
- BOURGOGNE
- HEY
- PICARD
- HUSSIES
- CIVILLY
- PAPILLON
- SARDINES
- DISGUSTING
- IDENTIFICATION
- ILLEGIBLE
- DOORKEEPER
- TOINON
- MAGISTRATE'S
- POLYTE'S
- FASCINATIONS
- UNAPPROACHABLE
- BASTARDY
- PRETENSION
- CREATES
- CIRCUMVALLATION
- BOLEYN'S
- ORTHOGRAPHY
- DICTATION
- IMPOSSIBILITIES
- DIRRY
- MOIR
- BREASTPLATES
- TOASTED
- THATCH
- SCRAPED
- REDOUBTABLE
- UNSHORN
- THRASH
- TRAINER
- HERCULES
- CIRCUSES
- FREQUENTED
- FLESHY
- FELONY
- PROSTITUTE
- ANNUITIES
- SUBSCRIBE
- INCIPIENT
- LEGITIMATELY
- DETRIMENTAL
- TACITUS
- MONOPOLY
- JUS
- AGREEMENTS
- ACQUIRES
- DISINHERITED
- OVERSTEP
- LEGISTS
- EQUITABLE
- BIRTHS
- USUFRUCT
- DEATHLESS
- THENCEFORWARD
- STY
- SLEEPERS
- PHANTOMS
- DISSOLVES
- WANDERER
- BRACKETS
- BROWSES
- GRANARY
- RECONSTRUCTION
- HOEING
- FERTILIZERS
- OAKEY
- PLANTERS
- FENCED
- INFLEXIBLE
- BOLTON
- INCLUDES
- GRABBING
- STURDILY
- PREACHER'S
- NEWBORN
- IDEALIZED
- COAX
- UNCLOTHED
- DUBBED
- BUILDED
- VINEYARD
- BARMOUTH
- GRAND'THER'S
- BANNOCK
- PETTISHLY
- FEATHERY
- MORGESONS
- DISTANTLY
- BOMBAZINE
- FLACON
- SALTS
- UNSWERVING
- HOMELAND
- DISILLUSION
- RAMBLES
- DIMNESS
- GIST
- FRAMPTON
- VACATIONS
- MAILED
- COMPREHENSIBLE
- PHOTOGRAPHY
- ILLOGICAL
- RECEPTACLES
- RAMIFICATIONS
- MONALDESCHI
- SKEIN
- SCALY
- HOPE'S
- VOGRAAT
- ROTTERDAM
- PARALYTIC
- MOULDY
- REGISTRATION
- FRAEULEIN
- SCHERIN
- DUES
- DICTIONARY
- FACILE
- RUMPLED
- REGISTRAR
- FERRIS'S
- COMPOSING
- UNTAMED
- MUSINGS
- CYNIC
- COLYUMIST'S
- WILDE
- COLYUMISTS
- JEWELLERY
- SCRAWLED
- CONRAD
- PARAGRAPHS
- FOHRENSEE
- DIETRICH'S
- EXPOSTULATED
- SANGUINE
- SUPPORTER
- RIVERMOUTH
- ASPIRED
- CENTIPEDE
- LANGDON
- VESTIGES
- INITIATIONS
- TEARFULLY
- CLAPHAM
- MEEKS'S
- UNPROTECTED
- PROHIBITED
- SHIELDED
- UNCEREMONIOUSLY
- PATRICK
- CLASSIFY
- JAMIESON
- INFORMANT
- HOBGOBLINS
- CHIMBLEY
- GERAGHTY
- NYMPHS
- DA
- HALIBURTON
- CHRYSOSTOM
- ADJOURNMENT
- MADGE
- RETAINER
- KAJI
- YEARNS
- VIE
- THUNDERSTRUCK
- DIRK
- DANZAYEMON
- TANNERS
- ETAS
- BANJO
- YOSHITOMO
- TIPHYS
- NAUPLIUS
- ARCAS
- ADMETUS
- THESEUS
- AFFRIGHT
- REPORTER'S
- TABOR
- PENCROFT'S
- EMBRASURES
- FABRICATION
- INCONTESTABLY
- RESISTS
- ACCOMPLICES
- CONTUSED
- STRANGULATION
- PERFORATION
- SUPPURATION
- COMPRESSES
- SUBSIDE
- DOZED
- HAMATH
- NAIRI
- ARMENIA
- INAUGURATED
- TIGRIS
- PRETENDER
- FLAYED
- PAL'S
- SUKHI
- CHALDAEA
- NINEVEH
- PHOENICIAN
- EXCAVATED
- LAYARD
- HEBREWS
- MOLTEN
- BETHEL
- REBELLIONS
- ZIMRI
- TIRZAH
- GIBBETHON
- CONSPIRED
- QARQAR
- CLAIMANT
- HADAD
- SHALMANESER'S
- INAUGURATING
- THWARTED
- WHOSO
- INVENTOR'S
- WURTEMBERG
- LOTTERY
- BALLOONS
- ELIMINATED
- PRUSSIAN
- AWARD
- MEASURABLE
- GALL
- DESCENTS
- SYMPATHISED
- FORGE
- TEUTONIC
- RAIDING
- PERTAINING
- ILLUSTRATIVE
- CAXTON
- PAMPHLET
- FURNISHING
- ENLARGEMENT
- DOCKETED
- CRAVAT
- BECKY'S
- URN
- REGENT
- GRUDGED
- COUNSELLOR
- CURZON
- TOILETTE
- AMELIA
- JOS
- HUMBLEST
- BOOMERANG
- BADGE
- WARES
- OPPORTUNE
- OPPONENT'S
- SUBSIDIZED
- POLICIES
- SOUTHERNERS
- DISOWNED
- MUTILATION
- CONFEDERATES
- DAMNING
- COWED
- CRAVEN
- CALHOUN'S
- CREOLE
- DIAZ
- RUFFIAN
- DESTROYER
- RESTORER
- TABOOED
- HUCKLEBERRY
- GOSHEN
- HUMBOLDT
- BUCKEYE
- DELIVERS
- POPULARLY
- FREDONIA
- GRITTY
- CATTARAUGUS
- DEIGNING
- PEASANTRY
- DISPLACING
- VISCOUNT
- PEEVISHLY
- LANCET
- RESPIRATORY
- OUNCES
- CRAYTURE
- GALLON
- UTTERMOST
- TAY
- SHEFFIELD
- JANIUS
- DISCOORSIN
- ARISTOPHANES
- TAXED
- LOVELIER
- CELTS
- GLASTONBURY
- TITULARY
- METAPHYSICS
- PROGRAMMES
- HEEDS
- APPLAUD
- SOPHIST
- ANATHEMAS
- ECONOMISTS
- SPARING
- KNEAD
- POETICAL
- FANATICS
- PRECEPT
- POLYGON
- COLLATERAL
- HINDUSTAN
- SWIRL
- WITCHLAND
- WASHERWOMAN'S
- MARTYRDOM
- NORWICH
- TYRANNICAL
- PROCLAMATIONS
- BELGIANS
- BISSING
- LANCKEN
- LEGATION
- THEREABOUTS
- ANNIE'S
- REDS
- LEATHERN
- NORTHWICK
- CONSECUTIVELY
- BANGS
- EXPLICIT
- GROUNDED
- HAWTHORNE
- REVEL
- BIGNESS
- ROSELEAVES
- SENTRIES
- BELINDA
- OVERDOING
- GRATIFIES
- SWEETS
- CONTROLLING
- REVOLUTIONARIES
- MASCULINIZATION
- UTOPIA
- CIRCE
- CAMPDEN
- HOTHOUSE
- MILKY
- DECLARATIONS
- BASENESS
- TILT
- BUTTONHOLE
- DIO
- DAUB
- COMMISSIONS
- OXTED
- ECHOING
- SWEETMEATS
- GIRDLES
- BENI
- PUPPETS
- UNALTERABLE
- MINX
- BELITTLE
- SANS
- VERSION
- VERSIONS
- HYPERION
- EMERGES
- ANALYTIC
- BOLEYN
- PATE
- OFFENDER
- WICKET
- THREADING
- DURABILITY
- LINNAEUS
- HUNKERVILLE
- IMMUNITIES
- UNDERSIGNED
- LAUSANNE
- VEVEY
- PROHIBITIONS
- GAIZDORRA
- CAPGAROUPE
- GOURD
- WICKER
- ADHERED
- UNSEALED
- EXECUTIONER'S
- SERJEANT
- WAIF
- TALBOT
- INSCRIPTION
- EFFRONTERY
- CHRISTINA
- ASSASSINATIONS
- POISONS
- DEDUCTIONS
- COUNTRYSIDE
- QUARLES'S
- MADHOUSE
- GENTLEMEN'S
- TANTALUS
- SIDEBOARD
- UNDERESTIMATE
- SUBVERT
- CONSECUTIVE
- JUT
- COUNTERACTING
- LYNDON
- PENT
- WINNEBAGO
- BOWLDERS
- HULK
- FERRYMAN'S
- EJACULATIONS
- ORGIN
- PEDALS
- NECESSITATING
- CROONING
- DAMPER
- DAMES
- SAVANNAS
- RETRACING
- ARMFUL
- PARALLELOGRAM
- DAILIES
- JOURNALS
- PLUTOCRATIC
- CLIQUE
- UNMAKE
- ENTHUSIASMS
- DISABILITIES
- ICONOCLASTS
- DISPARATE
- DISJOINTED
- MOUTHPIECE
- STIFLE
- METTLE
- LOUDEST
- HEEDING
- RECONCILE
- COLERIDGE
- GODLIKE
- UNPRACTISED
- GENDARMES
- IMPORTUNATE
- SMACKED
- ALIENATE
- DEADLIEST
- VOCIFERATED
- PALERMO
- STRIPLINGS
- INMATES
- STUPEFACTION
- BRAVO
- SACRA
- DECESARIS
- BLUNDERBUSSES
- PARISIAN
- PAMPINARA
- BORGO
- FRIENDSHIPS
- DIAVOLACCIO
- PORTRAITURE
- CHARACTERIZATION
- DEPICTS
- EFFECTIVELY
- MOLIERE'S
- SATIRIST
- DISPARAGE
- REIN
- SEASHORE
- LEVERS
- CLAVICHORD
- VIRGINALS
- ELIZABETHAN
- IMMORTALIZED
- BYRD
- SCARLATTI
- SONATAS
- MODULATIONS
- SOULLESS
- PROGRESSIONS
- CHROMATIC
- EXPLOITATION
- CONSECRATE
- CONFIDENCES
- ELEVATING
- ZWYNY
- HARKENED
- GOSSAMER
- NUANCES
- STACCATO
- INVALUABLE
- FLUCTUATION
- REGULATE
- AUDIENCES
- REPLACING
- GRATIFIERS
- RAILWAYS
- FRANCHISES
- UNEARNED
- INCREMENT
- ALGONKIN
- UNIFORMITY
- SYNONYMOUS
- CONEY
- HOLTS
- BUCKBOARD
- ASPENS
- SUPPOSIN
- SORENSON
- LICENSES
- LIZ'BETH
- QUEEREST
- SPIKE
- JERRINE
- RUMBLING
- GROS
- VENTRE
- TRIANGLE
- MURRY'S
- ACCORDION
- MURRY
- ALDENHAM'S
- SAMUEL
- FINANCIALLY
- BULGER
- DESGAS
- BRANDED
- GAGGED
- BLOODLESS
- JEW'S
- SOUNDEST
- MEDDLESOME
- BLAKENEY
- SECTS
- CREDULITY
- FRUGALITY
- ADHERE
- MONASTERIES
- ACQUISITIONS
- AUTHENTICITY
- LUTHERANS
- SECONDING
- BLANDEST
- FREAKS
- ANCESTOR
- BIAS
- INGLORIOUS
- RIGIDLY
- INSULATED
- SUSCEPTIBILITY
- STYLES
- DETECTING
- FERVID
- ELEVATE
- CHRYSALIS
- BU
- CUB'S
- GOODWIFE
- SCORPIONS
- RENAL
- CLINGS
- ABHORRED
- CLERVAL
- ERNEST
- ADVERSITY
- KNEEL
- GIBE
- WALTON
- CENTERVILLE
- BENCHLEY
- DARREL
- CLENCHING
- RAVAGED
- VEHEMENT
- SEER
- HERETICAL
- CYCLE
- OMAR
- ABOMINATION
- JUSTINIAN'S
- ABBOT
- EXPEDIENTS
- NORTHERNMOST
- KHAN
- KHAZARS
- KHAZAR
- BOSPHORUS
- ICONOCLASTIC
- CONTROVERSIES
- CONSTANTINE
- GRADES
- BERRY
- FILLETS
- RAZORS
- COMBS
- LEGGINGS
- KILT
- CLUSIUM
- ETRURIANS
- LARTIUS
- HERMINIUS
- SCHRINER
- BREADS
- BAKERIES
- PATRIOTICALLY
- RATIONING
- BRISTLES
- CHANTS
- PAMPERED
- REARS
- TINNEKONK
- INFLATED
- MIGHTINESSES
- TWANGING
- COMMODITIES
- FESTIVITY
- NEDERLANDS
- MOSQUITOS
- LOMBARDS
- MANEUVER
- MILLSTONES
- UNHESITATINGLY
- CELESTIN
- DIGGER
- MENDICANT
- LITTERED
- JETSAM
- GULP
- ELECTRODES
- VISUALIZE
- SEAWEEDS
- LOOMING
- DERELICT
- PROCURING
- RUPTURE
- INSATIABLE
- COKE
- PHILIPS
- SEYMOUR
- WENTWORTH
- PREPOSTEROUSLY
- ENCROACHMENTS
- CHARLES'S
- EXTANT
- BUCKINGHAM'S
- PREROGATIVES
- SEDITIOUS
- CONCERTED
- SELECTING
- GARFIELD
- CUYAHOGA
- CANTON
- GREYS
- PODINA
- EQUATOR
- PLATTER
- DONKS
- TOMATO
- POOREST
- ALCHEMY
- CYMBALS
- DRYADS
- MANHEIM
- SIGUNA
- NARI
- LOKI'S
- PORCINE
- PENNANT
- CATERPILLARS
- OVERSHADOW
- OAK'S
- KIMBOLTON
- APPRENTICES
- SKIRMISHES
- ASCRIBE
- DIGBY
- CIRCUMSCRIBED
- LITURGY
- INFUSION
- INGREDIENT
- CONSUMMATE
- REFINEDLY
- DUELLI
- LEX
- SCRIPTA
- PERUSED
- SE
- HERMANN'S
- TENOR
- MEDIEVAL
- REVENANT
- SPECTER
- MAGNANIMITY
- HELVOETSLUYS
- BRAZIER
- VENTRILOQUIST'S
- EMPORIUM
- CRETONNE
- NETHER
- SKIMPY
- CONTUSION
- SHERLOCK
- ABRASIONS
- DRAPER
- REVEALS
- HOWE
- HOOPDRIVER'S
- VICUNA
- YESSIR
- UNDERFOOT
- GODDAUGHTER
- FLORINA'S
- ENCHANTER
- BURROWS
- WEASELS
- BACHELOR'S
- SCORPIONFISH
- CONTOURS
- SEAFLOOR
- ELECTRICIANS
- KILOMETERS
- CAKED
- AVENGER
- WARTY
- SPRAWL
- BOTHERED
- TRIPTYCH
- BROWNS
- UNREAL
- INFUSED
- FOGGED
- UNENDURABLE
- FLEES
- MINISTRATIONS
- TORTOISES
- TAPPAN'S
- SAVANNAH
- SHOPTON
- DAMON'S
- REMISS
- INDUBITABLY
- AMABEL'S
- WEBB'S
- ZABELS
- MONICA'S
- PATRICIUS'S
- AMPHITHEATRE
- BEFRIEND
- MANICHEANS
- REIGNETH
- DEGENERATES
- CONSTELLATION
- SIGNET
- COMPREHENDS
- THINK'ST
- FESOLE
- FLORENTINES
- GILROY
- PHILOSOPHIC
- BERYL'S
- ABE
- STANTON
- ARTEMUS
- SEWARD
- CHANCELLORSVILLE
- KOREA
- HORSESHOE
- UPSO
- FENG
- WANG
- SQUATS
- CAUCASIAN
- REJUVENESCENT
- IMITATOR
- RETHUMB
- INCARNATE
- GRIMY
- FEES
- Y'ERE
- OOP
- YOONG
- MONTGOMERY'S
- OUTED
- FAWCETT
- CANTAB
- SAYIN
- WITHERIN
- TYKE
- FLINCHING
- CA
- HITTING
- MAUN
- ABOOT
- MAISTER
- ICELANDER
- OTTO
- TEMPESTS
- MASTODONS
- PALAEONTOLOGICAL
- JAWBONE
- GENUINENESS
- PLEIOCENE
- JOHANNAEUM
- TERTIARY
- ASSENTING
- INFANTILISM
- DIFFERENTIATE
- BANAL
- OUTLIVED
- ABSTENTION
- EGOISTICAL
- RESTRICTION
- ELIGIBILITY
- APPOINTING
- CONSTITUENTS
- MUTABILITY
- COINCIDE
- PROHIBIT
- DUALISTIC
- CONTEXT
- THESIS
- DENTS
- CONTEXTS
- ABORIGINALLY
- PROJECTIONS
- AFFECTIVE
- VISCERAL
- EARDROPS
- FERRETED
- WHYS
- WHEREFORES
- WARP
- QUILTS
- HOUSEKEEPERS
- GAUNTLET
- PINKY
- MORRISON
- BLAIR'S
- SHYEST
- CUTHBERTS
- CUTHBERT'S
- SOCIABLY
- RUTTED
- RUTHER
- BACKYARD
- LOMBARDIES
- OVERBRIMMING
- SMARTLY
- GREENED
- UNMYSTERIOUS
- DISSIMILARITY
- INDICATIVE
- UNDERSTANDINGLY
- JAUNTING
- UNACCOUNTABLY
- UNSUPPOSABLE
- JOLT
- DISAPPROVINGLY
- CANNERIES
- SOUNDER
- RISKY
- MERCY'S
- STRYCHNINE
- ORPHAN'S
- FARMSTEADS
- BALSAMY
- GRAYNESS
- SIDLED
- RIGIDITY
- JAUNTILY
- GARBED
- DISCERNING
- LUDICROUSLY
- KNACK
- LACY
- MIGHTN'T
- DONATED
- BLOOMIEST
- CHARLOTTETOWN
- PITY'S
- SIDLING
- GOBBLE
- LUCIEN
- RIDDING
- BIPEDS
- PLATO
- FOURTHS
- DEBRAY'S
- INSINUATIONS
- INFLICTS
- PARTICIPATE
- UNDERLINGS
- ANNOYS
- JESTING
- VINDICTIVELY
- FORGERY
- CORSICAN
- COMPIEGNE
- GRUDGING
- FALTERINGLY
- MAUVE
- SILLIES
- DARLING'S
- INTERRUPTS
- REDSKINS
- MERMAID'S
- LAGOON
- NIGHTGOWN
- DIFFIDENTLY
- NIGHTY
- MISINTERPRET
- ORGANISMS
- UNSHRINKING
- ADVENTURESS
- ADVENTURESSES
- NIECE'S
- RESTRICT
- DETESTS
- STACKPOLE'S
- CARAVANSARY
- CHAMBERMAID
- TENUE
- SURVIVAL
- FEUDALISM
- ELAPSE
- GOODWOOD'S
- SIMPLIFYING
- HENRIETTA'S
- OWNERSHIP
- INCONSTANT
- POSTMARK
- ERECTS
- JUDGEMENTS
- UNDEMONSTRABLE
- ACCRETIONS
- SIGNORINO
- REMO
- FLORID
- COARSELY
- STRUMMED
- INCOME'S
- REFLEXION
- ADMIRATIONS
- FIFE
- PREDILECTIONS
- SENTIMENTALLY
- BREAKWATER
- CANDIDLY
- DERIVING
- HONOURABLY
- SITTINGS
- PERSUASIVENESS
- FRIENDLIER
- JOACHIMI
- DESPATCHING
- REGICIDES
- MERGE
- NETHERLANDERS
- DEADLOCK
- EXPULSION
- PARLEYING
- IMPORTATION
- ALIENS
- WITHDRAWAL
- MAGNUS
- INTERCURSUS
- MARQUE
- RAKING
- EVENTUALITIES
- AGGRESSORS
- PAUW
- PAUW'S
- LEANINGS
- STUARTS
- RUPERT
- CRUISES
- ROUTES
- FIRESHIPS
- SUSPEND
- ORANGIST
- CORNELISZ
- WITTE
- DESERTION
- DESERTERS
- CONVOYING
- MANOEUVRING
- REPLENISHMENT
- SUBORDINATES
- EVERTSEN
- FLORISZOON
- SEAMANSHIP
- INITIATE
- DISSOLUTION
- DICTATORIAL
- CONVOYS
- GABBARD
- ADMIRALS
- POLITICO
- RESPUBLICA
- NEGOTIATORS
- REINFORCE
- MAAS
- KATWIJK
- MANOEUVRED
- SCHEVENINGEN
- MORTALLY
- REORGANISING
- PROTECTOR'S
- FORMULATED
- AMBOINA
- STIPULATED
- RESUMPTION
- PASSPORTS
- CLANDESTINE
- STADHOLDER
- PROPRIO
- MOTU
- COGNISANCE
- ADJOURNED
- PERSUASIVE
- DILATORY
- CLAUSE
- PRINCIPALS
- BURGOMASTERS
- ADVOCACY
- ILLEGALITY
- OVERRULED
- BEVERNINGH
- NIEUWPOORT
- UNDELIVERED
- FAIT
- ACCOMPLI
- ACIDULATED
- L'ALLEMANDE
- TARRAGON
- HORSERADISH
- PARBOILED
- FULFILMENTS
- RIPENS
- UNCONTAINABLE
- FORELOOKING
- ANTICIPATES
- PLEDGES
- ENHANCES
- HEYDAY
- PEDANTRY
- STOICISM
- FORSAKES
- ENLARGES
- WARMS
- DEFACED
- SINCEREST
- COMPUNCTIONS
- EMBITTER
- CLEAVES
- USURPS
- FASTENS
- TEASES
- SATCHEL
- COLDEST
- UNSAY
- TREASONABLE
- OUTLASTS
- REVISING
- WITCHCRAFT
- PLUTARCH
- ENAMELLED
- WHERE'ER
- SYMPATHIZES
- DILATES
- AKIMBO
- SOLILOQUIZES
- ACCOSTS
- WETS
- APOLOGIES
- MONOPOLIZING
- DESPOT
- BLACKMAILERS
- ORACULARLY
- URBANE
- DONATIONS
- PINCE
- NEZ
- CROOKEDLY
- CREASED
- TYPED
- HUMP
- EXPECTIN
- MOROCCO
- DURAZZO
- ALBANIAN
- SPLUTTERING
- CANDLEMAS
- PEZARA
- UNOFFENDING
- JOVIALLY
- COMINGS
- ACRIMONY
- SHARPNESS
- SHOCKINGLY
- ANTE
- DANCING'S
- WALTZES
- FIDDLING
- CRUDELY
- DISBURDENED
- SUBTERFUGE
- ELDER'S
- FALLACY
- SOPHISTICAL
- ABATEMENTS
- HARBOURING
- REFLEXIONS
- CONTEMPLATIVE
- EXECUTES
- PASTIME
- BYWAYS
- SANDED
- DESULTORY
- EXTRANEOUS
- LURIDITY
- POSTIK
- HOUARN'S
- FREES
- WIZARDS
- AVEN
- BOATMAN
- GROAC'H
- COCKCHAFERS
- RACKS
- CHURNS
- MORLAIX
- TRAITRESS
- DREAMINGS
- HUMANLY
- OPTIMISTIC
- LUMPED
- MANNERISMS
- GUIDEPOSTS
- REQUIREMENT
- SIDETRACKED
- OUTLAST
- SPOUSE'S
- STAGING
- CRAVES
- SWADDLE
- CHANGEABLENESS
- FORGES
- TOPHEAVY
- SUPERSTRUCTURE
- CHAMELEON
- BEING'S
- NAGGING
- STAGNATES
- ENGROSSMENT
- OVERCLOSE
- OVERCONCENTRATION
- UNDERLIES
- SPECTACULAR
- STAKING
- MELODRAMA
- SETTINGS
- FLOP
- INTERLUDES
- UNCOUTHNESS
- VEXES
- TERRIFIES
- NARROWING
- WINS
- PRIDES
- BABYING
- OVERSENSITIVE
- WIFELY
- STIMULATIVE
- UNADAPTED
- UNGUESSED
- SATISFACTIONS
- BACULUS
- BACKBITER
- CONVERSATIONALIST
- SHAVES
- MONDAYS
- SCRIMMAGE
- SAVERS
- BENEDICTINE
- BIGAMY
- BILLIOUSNESS
- EATABLES
- BIOGRAPH
- STEREOPTICON
- BIRDIE
- ERYTHEMA
- CALORIFIC
- AETEOLOGIZED
- PERCEPTIVENESS
- FACIAL
- CAPILLARIES
- PRAECORDIA
- BOODLE
- BRACE
- BRACER
- BRUM
- BELUM
- MEDULLA
- OBLONGATA
- BUNCO
- STANDER
- ABSENTEE
- SUBSTANTIALLY
- CROPPING
- CRANBERRY
- HARRINGTON'S
- UNBUCKLED
- BESMIRCHED
- SEARCHERS
- CONSTABULARY
- BESTIRRED
- TOWNSHIPS
- PLACATE
- ABNORMALLY
- EARTHWARD
- MARIA'S
- ACCUSER
- DISARRANGEMENT
- BANKNOTE
- BAIL
- BONDSMAN
- PUFFIER
- IDLED
- SATANIC
- INTAKE
- PROPHYLACTIC
- EXPULSIONS
- DUMBNESS
- INTERMENT
- KATHERINE'S
- OBLITERATING
- DRAPINGS
- FORECASTED
- HARTLEY
- HOWELLS
- BLACKBURN'S
- CATALOGUED
- MISERLY
- CONNED
- FOOLING
- ASSEVERATION
- VARIANT
- SKATE
- UNTOOTHSOME
- AVERT
- TUMORS
- ABSCESSES
- GOITRE
- CATARRH
- RHEUM
- ECZEMA
- MIXER'S
- SYRUP
- INSURES
- STRONG'S
- SUNBURN
- CHAPPING
- COLLAPSIBLE
- PREPAID
- COUGHS
- COLDS
- BALSAM
- CATHARTIC
- CAPSULES
- GILL'S
- SUPPOSITORIES
- GRUBE'S
- ERADICATION
- BUNIONS
- CALLOUSES
- REMOVER
- ERASER
- DEBILITATION
- BOWEL
- INSOMNIA
- SLEEPLESSNESS
- ASSIMILATION
- CATARRHAL
- KELLOGG'S
- ORDWAY
- PLASTERS
- BACKACHE
- LUMBAGO
- BRONCHITIS
- STRENGTHENING
- NICKEL
- PLATED
- DETROIT
- AUDITORIUM
- PHARMACY
- IMPAIRS
- TONING
- REJUVENATING
- DENTRIFICE
- ROSSMAN'S
- HEMORRHOIDS
- KIDNEYS
- STOMACHIC
- CLEANS
- POLISHES
- DAMPENED
- BLUED
- ALCOHOL
- TURPENTINE
- BENZINE
- PARAFFINE
- MOOSULMAUN
- WERT
- SUPPLICATED
- CALAMITOUS
- AFFLICTING
- RESOUND
- KINSMAN
- BAIRAM
- REDOUBLING
- MALLET
- METAMORPHOSES
- APPEAREST
- DELIVERER
- YUNAUN
- SYRIAC
- CEREMONIALS
- BETIMES
- ENVIOUS
- EXECUTING
- FANATICALLY
- SWEATER
- BRONCOS
- BUYER
- SLED
- UNIMAGINATIVE
- UNCARED
- STOWBODY
- PENSIONS
- BERT
- TYBEE
- LYMAN
- CASS
- GASSED
- HAMLETS
- MILLERS
- BOOSTING
- HUSTLER
- LIL
- WIFEY
- DOC
- IMPRIMATUR
- ERIK
- ROUGHNECK
- CHUCKED
- SNUBBED
- MENUS
- INJUDICIOUSLY
- WHITEFISH
- FILLET
- PEP
- AGITATORS
- KNUTE
- AMERICANISM
- BOOSTER
- BELCHING
- NORTHWESTLAND
- BURGS
- SNOBS
- ZOB
- YAHOOVILLE
- MINNESOTA'S
- BURG
- BLOOMIN
- BOOSTERS
- REORGANIZED
- GLORIES
- MIDDLEWEST
- POWERED
- HON
- BOOKLET
- PLOVER
- BEAUTEOUS
- GAMEY
- RESIDENCES
- EGOMANIAC
- NEIGHBORLINESS
- ARMISTICE
- CLIQUES
- SCANDALS
- BANGKOK
- PUTATIVE
- TWISTY
- MURDERESS
- MOORS
- DEADENED
- KREISLER
- KINDLIER
- RITE
- MIGNONETTE
- BUTLERS
- LIMOUSINES
- FICTIONAL
- SHERWIN
- COLIC
- SCALLOPED
- DULLNESS
- CONGRESSMAN
- MAJORS
- GEOGRAPHERS
- FISCAL
- ADDRESSER
- MOBBED
- PICNICKING
- PERSISTENCE
- ELFISH
- LOFTS
- IMPRACTICAL
- THEORISTS
- CLARKS
- VILLAGER'S
- ORGANIZERS
- TRAGICALLY
- INTELLECTUALITY
- HOUSEMATE
- MIDDLEWESTERN
- SCABBED
- BLAINE
- TUMOR
- PIANISTS
- LECTURERS
- SCRAWLS
- UNIONS
- TOUCHILY
- DYER
- MARMOT
- CHUCK'S
- WOODCHUCKS
- JUMPER
- WEASEL
- UNDERTAKERS
- PROCUREUR'S
- STUN
- MORCERFS
- MITIGATES
- MOULDINGS
- HAITIAN
- CROESUS
- ILLUMINED
- DEPUTE
- CHECKS
- BOVILLE
- CREDITS
- ROTHSCHILD
- LAFITTE
- RETIRES
- ROTHSCHILD'S
- LAFITTE'S
- IRREPROACHABLE
- ENCLOSING
- CRISTO'S
- LIFEWARD
- GREYING
- ASSERTIVE
- SHEENS
- PERVERSELY
- SENSUOUSNESS
- SWARM
- STRIDENT
- OILSKIN
- QUERULOUS
- BACKGROUNDS
- STUFFINESS
- VILELY
- DIRTIER
- LOATHSOME
- FILTH
- UNCHANGEABLE
- UNMORAL
- DELMONICO'S
- BUS
- ANSWERER
- INGENUOUSNESS
- UNSOPHISTICATED
- GIRLHOOD
- WETNESS
- BEATRICE'S
- INFLUENZA
- BRAINIER
- UNIVEE
- BAYNE
- FAYNE
- SAYNE
- ALEC
- ISABELLE
- SOUTHPAW
- OUTFIELD
- HITTER
- HUMBIRD'S
- ALUMINUM
- SCREWS
- LENTICULAR
- MOLES
- BURNER
- CARBONIZED
- MAHOMET'S
- SPHEROID
- GUARD'S
- COUCHES
- DONKEYS
- REIMBURSE
- TICKING
- CORRESPONDINGLY
- HARVESTED
- CONFRONT
- VERIFIES
- JOURNALISM
- OPALS
- CORSET
- TOBOGGAN
- VAPORY
- LOCOMOTIVE
- QUADRUPEDS
- DRAYAGE
- HEATHENISH
- PRESCRIBES
- LAG
- AFFLUENCE
- DISBELIEVED
- AXIOMATIC
- AMBASSADORSHIP
- GLOBULES
- COHERENTLY
- JAWS'S
- CAMPSITE
- SPALLS
- BALLED
- BLOB
- BOLE
- SURVIVOR'S
- FOREARMS
- IRIS
- CORNEA
- HEARTENED
- WORM'S
- SKYWARD
- RAMP
- PRONG
- CUBS
- WORLDER
- QUICKSAND
- LOOSEN
- SINGED
- DISPASSIONATELY
- TRUCULENT
- CREDENTIALS
- GUIENNE
- SCOTCHMEN
- BLENAN
- ANTOINE
- INTENDANT'S
- SCHEVELING
- CONYNGHAME
- FACTOTUM
- UNSHACKLED
- INSTALLMENTS
- WOOED
- RATCLIFFE
- PABLO
- HOUSEFUL
- GIPSIES
- PHOEBE
- STANDPOINTS
- MISUSED
- HYPNOTIST'S
- PRESUPPOSITIONS
- MESMERISM
- PARANOIA
- INSANITIES
- INTERPRETS
- MISINTERPRETATIONS
- ARGUMENTATION
- INTERPLAY
- SUGGESTIBLE
- INTERDICT
- DRUNKENNESS
- COCAINE
- UNDERMINED
- PERVERSITIES
- WRONGDOER
- NEGATES
- DEPRIVES
- INVENTS
- ANOMALOUS
- PATIENT'S
- PSYCHOTHERAPISTS
- PRESUPPOSITION
- TREATMENTS
- MORPHINIST
- ALKALOIDS
- INJECTION
- SANITARIUM
- HYPODERMIC
- REDUCTIONS
- TABLETS
- INHIBIT
- TRUEST
- AMATEURS
- EXPERIMENT'S
- DEVASTATES
- EARMARKS
- BARBARISM
- ACCREDITED
- EDDY'S
- CULTURAL
- PHILOSOPHICAL
- IMPLICATIONS
- REENFORCES
- PSEUDOPHILOSOPHY
- SAPS
- FRUCTIFIED
- DEMONSTRATES
- RESHAPING
- PATHOLOGICAL
- INTEMPERATE
- CRIMINOLOGISTS
- PERSPECTIVES
- REENFORCE
- ABNORMALITIES
- INHIBITIONS
- CRIMINOLOGICAL
- SUPPLANT
- RECKONS
- OPOSSUM
- POUCHED
- OPOSSUMS
- BUSTER
- CROTCH
- HANDIEST
- DEADEST
- REINTEGRATION
- INTERWEAVING
- STIRRINGS
- DREARINESS
- CONFLUENT
- PATTERING
- BENEFITED
- INSECURITY
- GREYLY
- DISTENDED
- BLADDER
- AROMA
- INTERMEDIATION
- VEINED
- ULTRAMARINE
- UNDEVIATING
- DRONING
- ALERTLY
- BOSCASTLE
- UNSTAINED
- DRUNKARD
- COLOURINGS
- REITERATION
- COERCIVE
- SOPHISTRIES
- STIMULATING
- EVINCES
- EXCRUCIATING
- DAMAGING
- ADVENTITIOUS
- FLOUTED
- UNPRACTICAL
- EFFECTIVENESS
- TUNEFUL
- VITALS
- UNALLOYED
- SLEDGEHAMMER
- DIRECTNESS
- DISHEVELED
- INEXHAUSTIBLY
- RESONANCE
- LISP
- UNDISCOURAGED
- FLAGGING
- IMPERILED
- COMMENDABLE
- TERSE
- IMPERISHABLE
- MORALIZES
- FLASHY
- PENETRATIVENESS
- NAUSEATING
- INEFFABLY
- UNPICTURESQUE
- INSATIABLY
- DOGMATISM
- ENSLAVES
- STIGMATIZED
- PLATITUDE
- NOTORIOUSLY
- PRACTISES
- COPIOUSNESS
- RATIONALLY
- PALLIATIVES
- MITIGATIONS
- QUIXOTIC
- OVERWEENING
- INSUPERABLE
- PROMISCUOUSLY
- PALPABLY
- PARADING
- PATENTLY
- INIMICAL
- COMPROMISES
- CATCHWORDS
- EXEMPLIFIED
- GLOOMIEST
- PORTENTOUS
- MOUSTACHES
- TROUP
- ISCHIA
- CAPRI
- SCORIA
- CALLOW
- VESUVE
- ERUPTION
- VESUVIUS
- FS
- LEONARDI
- MILANO
- ITALIA
- POSING
- TITLED
- DULLARD
- BERATING
- KIDNAPPED
- CARETAKER
- ENDEARING
- HOLDINGS
- BATTLEGROUND
- INDUSTRIALISM
- SHERLEY
- WITTED
- GIGGLED
- STRADER
- NEANDER
- NECTAR
- CRATER
- CONGEALED
- NASMYTH
- INHABITABLE
- PROPITIOUS
- ANATOMICALLY
- ORGANIZING
- ROTARY
- INSOLUBLE
- PRIMORDIAL
- RESPIRABLE
- UNINHABITABLE
- UNDERGOES
- HAZARDOUS
- FANTASTICAL
- TANGIBILITY
- INCUBATION
- PUNNING
- SHOEBLACK
- BLUNTNESS
- WAVELET
- GAPED
- BUBBLES
- ENTRAPPING
- JESUIT
- UNSUPPORTABLE
- CUNEGONDE
- CADIS
- EFFENDIS
- LETHARGIC
- SQUANDERED
- PAQUETTE
- PHILOSOPHISING
- MEDDLEST
- VIZIERS
- SHERBET
- KAIMAK
- CANDIED
- MOCHA
- BATAVIA
- EGLON
- MOAB
- EHUD
- ABSALOM
- NADAB
- BAASA
- ATHALIAH
- JEHOIAKIM
- JECONIAH
- ZEDEKIAH
- DIONYSIUS
- PYRRHUS
- PERSEUS
- HANNIBAL
- JUGURTHA
- ARIOVISTUS
- OTHO
- VITELLIUS
- DOMITIAN
- UT
- OPERARETUR
- EUM
- JOINER
- CONCATENATION
- PROMPTER
- MUMMERY
- EXECRATING
- VAPORS
- IMPURITY
- CHIPPEWAS
- RECREANT
- FIGURATIVE
- BOASTFUL
- VEILING
- NOISELESS
- PROFITABLY
- CANADAS
- BEAVERS
- AMITY
- MAISE
- EMPHATICALLY
- MOCCASINS
- BAUBLES
- DONOR
- OBDURACY
- DIPLOMATIST
- PREJUDICIAL
- ANNUNCIATION
- VENERATE
- LONGUE
- CARABINE
- UNGUARDEDLY
- INJUDICIOUS
- DELIBERATIVE
- ARMLETS
- CINCTURES
- TUTELAR
- TAMENAY
- SQUASHES
- GARNERED
- INEXPENSIVE
- HANDCART
- CREPE
- THOAP
- BRANDS
- COMPANY'S
- VENDER'S
- IMMERSE
- ELIJAH
- ELISHA
- RIGMAROLE
- RO
- COSM
- SIMPSON'S
- RUSTLY
- NOKOMIS
- APOSTROPHIZED
- DREST
- UNSTOPPING
- UNGLUING
- HUSKING
- FATS
- JOYLESS
- CAPSIZING
- UNBUSINESSLIKE
- REMINISCENCE
- ELERGANT
- DRAB
- COMFORTINGLY
- READINGS
- OUTGROW
- MANSIONS
- BEECHER
- STOWE
- HOME'PATH
- GAB
- MAKIN
- INTEMPERANCE
- YOUS
- PERSPIRED
- EXCUSING
- PAINSTAKINGLY
- REWROTE
- DEARBORN'S
- FONDLES
- WASHES
- SMELLIE'S
- SMELLIE
- PRAYIN
- SWEARIN
- MINNIE'S
- BEATIN'EST
- SAWYERS
- IMPROVIN
- ABNER
- SWAPPIN
- MERCIES
- ACCOUNTIN
- STACKED
- KEROSENE
- WICKS
- CRANBERRIES
- DIGNIFYING
- PIANOLA
- BEIN
- TUBBS
- RAPTUROUS
- GOIN'S
- UNCONVINCED
- TENEMENTS
- LADDS
- MAYFLOWER
- COCHERE
- WAGONETTE
- URSULA'S
- POSTILIONS
- YOUSE
- EQUESTRIENNES
- SOUVENIR
- JELLINGS
- CHINS
- WHERE'D
- LAPS
- MAM'SELLE
- NURSEMAID
- DISAPPOINTEDLY
- COOMSDALE
- FEELINGLY
- PORTENTOUSLY
- INTERSTICES
- PASTED
- TOBACCOEY
- RUMMISH
- PATTON
- HISSOP
- DARINGLY
- SQUEALED
- HARTWELL
- PAJAMAS
- UNKINDLY
- WYATT
- RELOCKED
- BANISTERS
- GLADDEN'S
- COUNTRYWOMEN
- EXCULPATE
- RESPELL
- DISPLEASE
- AVAILING
- NOURON
- NIHAR
- MINIONS
- IMPUTE
- UNBOUND
- OBJETS
- FORGIVES
- DISPELLED
- DEFER
- AFRICANS
- KOOLLOOB'S
- CAUZEE
- HISTORIOGRAPHER
- GROSBEAKS
- BROADWING
- FUSSED
- CONTORT
- OGRESS
- ENGRAFTED
- GENDARME
- HANGMAN
- COQUETRY
- CARTERS
- SWINDLER
- LILLE
- ENTAILS
- SUTLERS
- SUTLER
- RECALLS
- SEMINARY
- ORTHOGRAPHICAL
- SLOTHFUL
- GIANTESS
- BANKRUPTCIES
- MAMMIFEROUS
- MATERNITY
- BROWSE
- HITCHED
- POACHER
- APHORISMS
- TRUSS
- EXHORTATION
- BESEEMETH
- CONSTRAINETH
- NIHIL
- GAINSAID
- REASONINGS
- CONCISELY
- PLAINWARD
- WEND
- TORRENT'S
- GRANDFATHERS
- FIRSTLINGS
- MULTIFARIOUSNESS
- LABYRINTHS
- STELLAR
- MORALITIES
- SUBLIMER
- VOLTAIREAN
- FREETHINKER
- LITANIES
- GOODY
- FLAUBERT
- ASTUTENESS
- MEDIOCRITY
- SPIRITUALISES
- SPIRITUALISING
- DESINTERESSE
- DISINTERESTEDLY
- SACRIFICER
- RARER
- MORALISTIC
- BONHOMME
- EXHORTED
- OVERSHADOWING
- UGLIFYING
- DOCUMENTARILY
- SUITING
- BAROCCO
- MORIBUS
- ARTIBUS
- ARISTOPHANIC
- PARODISTS
- CULTURES
- SUPERIMPOSED
- ESPRIT
- VASTE
- DETERMINES
- UNFAVOURABLY
- TRUCKLING
- ATHENIAN
- SHAKESPEARE'S
- CHIAJA
- GOLDENNESS
- GODSENDS
- GLORIFICATIONS
- HEDONISM
- UTILITARIANISM
- EUDAEMONISM
- MISFORTUNED
- HEREDITARILY
- INVENTIVENESS
- ANNEALED
- CAPTIOUS
- DISENGAGE
- HONESTY'
- DEVILRY
- TEDIOUSNESS
- PHILOSOPHIZING
- UTILITARIANS
- RESPECTABLY
- HOMERIC
- BENTHAM
- SENATEUR
- POCOCURANTE
- MORALISTS
- MORALIST
- AUTHORITATIVE
- BIFURCATION
- MAIRE
- STATENLAND
- SPOKESMAN
- PATAGONIANS
- FREISCHUTZ
- MIMICS
- SQUINT
- MIMICRY
- CAFFRES
- AUSTRALIANS
- HOSTAGES
- JEOPARDY
- MATTHEWS
- AQUATIC
- DIRTIED
- SKYLARK
- LANDSMAN
- UNTRIMMED
- OURANGOUTANG
- THRIVING
- BETULOIDES
- SOLANDER
- GUANACOS
- GLOOMINESS
- BARNEVELTS
- ANCHORS
- GREENSTONE
- HAYCOCK
- SOLSTICE
- WOLLASTON
- OTTER
- BAITED
- FUNGI
- CANNIBALS
- CONCURRENT
- DOGGIES
- FIRESIDES
- LIMPET
- CORDILLERA
- CHILE
- CALIBAN
- SETEBOS
- FREAKISH
- VAGARIES
- BLEND
- HALED
- ELF
- PORTENTS
- STUFFILY
- UNBLENCHING
- TOMAHAWKED
- IDIOSYNCRASY
- PITILESSLY
- HAZELS
- PHANTASMS
- IMPORTANCES
- ILLUMINATI
- CONSPIRACIES
- TROW
- NEIGHBOUR'S
- FRIENDLIEST
- PORKER
- CULPRIT
- OLYMPIAN
- SADDENING
- ARCADIA
- AWAKENINGS
- BLUEST
- GERMINATING
- KINDLE
- PRIMROSE
- CRAZE
- STAUNCHLY
- FAD
- WAYFARERS
- COS
- MISPLACED
- GROWLINGS
- UNRECORDED
- HEROISMS
- EFFLUENCE
- SQUELCHING
- SPLASHED
- SKYWARDS
- UNRHYTHMIC
- COMPANIONABLY
- EXPRESSIONLESS
- TRICKSTER
- BLUSTER
- MISRULE
- SHEERED
- THWARTWISE
- CHAINLESS
- TOMFOOLERY
- KINDLINESS
- GYRATING
- LARCENY
- PLUMMET
- PLAYBILLS
- CHAFFINCH
- HEDGEHOG
- DECADENT
- JACKETED
- CANCELLED
- NIBBLED
- EARTHBOUND
- SLUNK
- NOTHINGNESS
- SHACKLES
- TETHERED
- STERNE
- OUTPUT
- COVETING
- DUCKWEED
- BEDABBLED
- FRITZ
- SHIPOWNERS
- BHAERS
- LEIPZIG
- BONS
- SWEETEN
- JOSIE'S
- NAN
- DOSED
- DAISY'S
- BESS'S
- MEG
- PLUMFIELD
- STUDIOUS
- TEDDY'S
- OCTOO
- SNOWDROPS
- DUSTING
- EMIL
- CASABLANCA
- FIDDLED
- HEIMWEH
- BEARABLE
- HERR
- LINDENS
- MINNA
- NAT'S
- STEADFASTNESS
- CADDIS
- BROOKSIDE
- CONFIDINGLY
- THROATED
- FLYCATCHERS
- PEWEE'S
- CRESTY'S
- BLUEBIRD
- TIT
- WOODCOCK
- TUSSOCK
- LONGBILL'S
- FREEZES
- SNIPE
- FUNNIEST
- TEETERED
- SANDPIPER
- TULES
- OBSIDIAN
- GRANITIC
- TOTEM
- MYTHOLOGICAL
- QUAVERING
- UNSOUGHT
- SCURRYING
- CRUSTS
- RETRACE
- FIERCER
- SNAKELIKE
- MONOPOLIZED
- UNTRODDEN
- INCONGRUITY
- PALPABLE
- ROUNDING
- SOAKS
- SYMBOLIZES
- INEXORABLENESS
- INVERTED
- JUTS
- CONSTRUCTS
- SOLOMON'S
- MATING
- PULSELESS
- BENDINGS
- ENCHANTS
- LIMBLESS
- UNRESTFUL
- FULFILS
- WAVY
- SHAMBLED
- CANTER
- UPGRADES
- SURENESS
- DOWNGRADE
- ASKIN
- Y
- LEARNIN
- REJOINDER
- OVERHEARING
- BUNKING
- GIMME
- MA'S
- CUSSING
- CHOW
- SQUATTERS
- LEASTWAYS
- STRAIGHTER
- MIMICKED
- WONDERIN
- NOTHIN'
- JOSIAH
- EDDICATED
- BUCKER
- FIGGERS
- LYIN
- AW
- POLO
- OFF'N
- FLOPS
- OUT'N
- B'LIEVE
- RAISIN
- PERFORMIN
- BUCKIN
- TWISTIN
- DRAWIN
- S'LONG
- ROOMING
- THUMPS
- IDLING
- UNPRESSED
- MODELED
- CHISELING
- HOOKED
- ERECTNESS
- CLUMPED
- SOURLY
- SKUNK
- HARRY'S
- TRYST
- UNEVENLY
- RONICKY'S
- ABDUCT
- MERRYMAKING
- RELIGHT
- ADIEUS
- LAGREE'S
- PERSECUTOR
- TIRING
- MELODIOUSLY
- SYREN
- RAINBOW'S
- BRAVES
- POUCH
- UNHURT
- BOOKSELLER
- DORMOUSE
- CANARY
- PREFERRING
- BEGGAR'S
- SHIRKED
- PROFLIGACY
- PLUMPNESS
- BEETLE
- SUFFERANCE
- WILTSHIRE
- LISTLESSNESS
- UNCANDID
- SARAH'S
- HEROINE'S
- OVERRATED
- DOUBLING
- TREBLING
- LEGACIED
- OPENNESS
- VOUCHERS
- WEAKENING
- RECONCILIATION
- RHODOMONTADE
- OVERTURE
- RELATOR
- DEVOLVE
- READER'S
- RETRACTION
- INTERSTICE
- FORESTALL
- LUDLOWS
- WONDERMENTS
- LUDLOW'S
- OSMOND
- INHERITING
- NEPHEWS
- EUSTON
- BANTLING
- CANDOUR
- CEASES
- ACROPOLIS
- SALAMIS
- HARLINGS
- SKATED
- BONFIRES
- CHOPPY
- HARLING
- IMMOBILITY
- FIELD'S
- BOOTH
- BARRETT
- PAPERY
- KENTUCKY
- BUXOM
- LAUNDRESS
- D'ARNAULTS
- FIDGETS
- MARTHA'S
- LILACS
- PICKANINNY
- HOLLYHOCK
- STRADDLED
- MASTIFF
- MASTIFF'S
- PRESENCES
- VITALIZED
- SPECT
- TRANSOM
- TONY
- SODERBALL
- RESOURCEFUL
- USURY
- HATRACK
- POULTICING
- HANDBAG
- AVOUCHED
- HORSELIKE
- CUTTER'S
- EXACTIONS
- PROSAIC
- ELEMENTAL
- PAIRFECT
- MEERACLE
- OVERVALUED
- BATTENED
- DEVASTATION
- TAMIL
- SMUGGLE
- UNWEARIED
- PYJAMAS
- FRONDS
- THOROUGHFARE
- GARLANDED
- ISLETS
- UNDEFACED
- SEVERER
- UNSUBSTANTIAL
- RENEGADE
- BISMARCK'S
- BRUTALISED
- HATCHWAYS
- CISTERN
- CRANNIES
- COASTING
- PRAUS
- CAMPONGS
- FULGOR
- STAGNANT
- PATNA
- AMIDSHIPS
- SMOULDERING
- SPELLBOUND
- PRESIDING
- IMPASSIBLE
- ASSESSORS
- LOGGED
- FOREHOLD
- BULKHEAD'LL
- MALEVOLENT
- SERRIED
- PITH
- PEONS
- WAYFARER
- VERANDAH
- UNRUFFLED
- MARLOW'S
- REKINDLED
- INDEFINITENESS
- DISENCHANTMENT
- IMPRECATION
- PROD
- PLACATED
- INTERJECTED
- NAUSEOUS
- HAIR'S
- INCISIVELY
- CONFOUNDEDLY
- THROVE
- EXCELLENTLY
- DWINDLE
- DILUTED
- DISCOURAGE
- HUNGERING
- BURGE'S
- GREYSTONE
- INTERPRETERS
- BLASPHEMOUS
- REJOICES
- EVIL'S
- SORROW'S
- CRUDER
- WOODLESS
- OVERARCHING
- WEBLIKE
- UNSCREENED
- OVERSTARTLED
- MONITIONS
- CUMIN
- PEPPERCORNS
- SPAWNING
- OPPIAN
- STARFISH
- WIDEN'D
- PRAWN
- PINTS
- TOMATOES
- VERMICELLI
- SHRIMP
- STUPIFIED
- WHITESIDE
- LYNE
- SMIRKING
- EXTRADITION
- COMPLICITY
- UNLOCK
- MILBURGH'S
- PROVIDENTIALLY
- YONKERS
- UNRELIEVED
- POCK
- UNCHALLENGED
- OVERRIDE
- MOTORCYCLE
- SCHEDULE
- HAB
- OHDAHD
- DARKY
- BRUNG
- POWFUL
- TELEPHOME
- AST
- AFTAH
- HONGRY
- DRAT
- LIAH
- DOLLAH
- CAIN'T
- GROGAN
- FELDERSON'S
- PARALLELING
- SYNAGOGUES
- SOMEDAY
- CRAFTIEST
- AFRITES
- JINNS
- WORKADAY
- CORANTO
- BUSKIRK'S
- IMPERSONATION
- IMPERSONATED
- CHANCING
- ABJECTLY
- INORDINATE
- THEREON
- JANET'S
- JUGS
- BASKETFUL
- FOREGATHERED
- TELLERS
- MASTERFULLY
- FINGERTIPS
- MILESTONE
- DAFFODIL
- SUNSETS
- SKEERED
- EERIE
- ALEC'S
- HEN'S
- VISITANT
- EYEHOLES
- WOEFULLY
- REEKING
- SAUCERFUL
- RESERVEDNESS
- ENSURED
- PROMONTORIES
- THISTLES
- INTERNALLY
- CENSURED
- FERRARS
- DASHWOODS
- CAREYS
- WHITAKERS
- EDGAR'S
- UNFORGIVINGNESS
- AMENABLE
- TILNEYS
- ADMITTANCE
- BEDFORD
- CLEANEST
- RECEIV'D
- EXPRESS'D
- HYRCANIA
- DETERMIN'D
- PROCUR'D
- ACKNOWLEDG'D
- AMPHITHEATRICAL
- COMPLEATLY
- MAGI
- PERPLEXT
- TOURNAMENTS
- CONTINU'D
- WATCH'D
- ALLOW'D
- COVER'D
- VAIL
- INDULG'D
- ABOVEMENTION'D
- MARTIAL
- INSCRIB'D
- CADOR
- COMPLEAT
- ENRICH'D
- AMPHITHEATRES
- ACQUIR'D
- HOVER'D
- FLATTER'D
- ENAMELL'D
- RIBBANDS
- ITOBAD'S
- HEAV'N
- PITCH'D
- BABYLONISH
- SCEPTER
- CRUPPER
- QUIV'RING
- ITOBAD
- DISDAINING
- UNHORS'D
- CONQUER'D
- VANQUISH'D
- GAIN'D
- MIXT
- PALPITATION
- VOLTA'S
- WISH'D
- BUTTOCKS
- GRASP'D
- JUMP'D
- WHEEL'D
- INCENS'D
- OFFER'D
- ADVANC'D
- CLOS'D
- DISARM'D
- DESTIN'D
- RECONDUCTED
- PRESCRIB'D
- ORDER'D
- FATIGU'D
- ZADIG'S
- REPAIR'D
- OVERWHELM'D
- OBLIG'D
- HISS'D
- RIVAL'S
- RECTIFY
- PLUNG'D
- IMPROPITIOUS
- BALEFUL
- IRRETRIEVABLE
- COQUET
- PROV'D
- DISPENSATIONS
- GOVERN'D
- FAIL'D
- CROWN'D
- FRANTICK
- INDOLENT
- RAGOUT
- ABUSING
- CHEAPSIDE
- LOO
- DESPISES
- DERBYSHIRE
- INATTENTION
- CAPTIVATION
- DESPICABLE
- SOLACED
- DUETS
- TRESPASS
- ESTIMABLE
- STUDIER
- NEIGHBOURHOODS
- GARDINER'S
- FORSTER
- TAROSS
- PERCY'S
- ESKE
- LIDDEL
- JULY'S
- UNREFRESHED
- UNTASTED
- PEYTON'S
- AEOLIAN
- MASSA
- VISITOR'S
- SUBALTERNS
- BENIGNANT
- PLAGUES
- NAVIES
- OCEAN'S
- WHELMS
- LIGHTSOME
- UNGENIAL
- IMBIBE
- SUPERABUNDANT
- UNVEILED
- THRONED
- COMEST
- SHORELESS
- SWEEPEST
- VIEWLESS
- VALLIES
- HOLDEST
- GOVERNANCE
- ROARINGS
- DESPOILED
- WRENCH
- MEREST
- WIELDERS
- INTEGRAL
- DOMESTICATE
- VISAGED
- DISQUISITION
- HUNDREDTH
- IMMEDICABLE
- GULLIVER
- BROBDIGNAGIANS
- ERUPTIONS
- RIFE
- QUITO
- PARTIZANS
- HINDOSTAN
- EMPOISONED
- INHALES
- UNINFECTED
- UNCULTIVATED
- DESTROYERS
- CLIMES
- CELT
- UNCOMMUNICATED
- INNOXIOUS
- SPICY
- CIRCASSIA
- ENCREASED
- EQUALIZATION
- PROTECTORATE
- TWELVEMONTHS
- UNLAMENTED
- QUAILING
- UNERASEABLE
- REVULSIVE
- SPECIE
- UNPAID
- NURSLINGS
- PENURY
- PARSIMONIOUSLY
- IMPORTS
- ANTLERED
- PROTEGES
- OFFCASTS
- PLEADINGS
- AGRICULTURIST
- INDULGENCIES
- PALANQUINS
- SECEDE
- PARTERRES
- EMIGRATION
- CONCOMITANTS
- LUDOVICO'S
- CARLO'S
- CARLO
- ROBERTO
- RAMPART
- UDOLPHO
- SIGNORA
- LIVONA
- CONJURED
- SIGNOR'S
- RANSOMS
- ANIMATE
- RETRACTING
- NIGHT'
- DETER
- MONTONI'S
- ACQUAINTANCESHIP
- BROKERS
- ROUNDERS
- DROUET'S
- STUCCO
- STEWARDSHIP
- SOLITAIRE
- ENGRAVING
- INFORMALITY
- FREQUENTING
- TACTFUL
- BARKEEPER
- EVANS
- MILWAUKEE
- BAREST
- MODIFYING
- SELTZER
- PACER
- SCHEMERS
- AUGUR
- GAINSAY
- INFESTED
- SPIRITUALIST
- THINNING
- ALARUM
- LIKINGS
- BIDED
- PIKESTAFF
- BOUTS
- LONGBOW
- BLAND'S
- APLENTY
- HALFSCORE
- COWHIDES
- TANNER'S
- CLOVEN
- BROADSWORD
- WOODCRAFT
- VARLET
- CRAFTSMAN
- BACKTALK
- SOVEREIGN'S
- TALKEST
- CALF'S
- ANCASTER
- FORFEND
- WINDOWPANE
- BELIKE
- MURRAIN
- CUDGELED
- NOTTINGHAMSHIRE
- ELY
- JOCK
- SCATHELOCK
- BELABORED
- COWSKIN
- TWANG
- BOWSTRING
- EPISODES
- AUBURN
- BEHOLDERS
- COMPLYING
- GENTLEFOLK
- BEEHIVES
- OSTENTATION
- COYNESS
- PERJURED
- UNLACE
- BEFOOLED
- SLIGHTED
- ABHOR
- CRIER
- HERDSMAN
- FASTED
- BATHES
- DEWDROPS
- LIMPING
- KNOCKERS
- HYACINTHS
- LOVEABLE
- WARRANTY
- SCABBARDS
- VIEUVILLE
- CARMES
- DESCHAUX
- CAHUSAC
- EDICTS
- SWORDSMEN
- HILTS
- LUNGE
- EJECT
- REPRIMAND
- ADJURE
- EMINENCE'S
- VILLE
- REREAD
- REKISSED
- MUSKETOON
- ENTERTAINS
- EXPATIATED
- WICKETS
- EXEMPLARY
- TINE
- RECOMPENSED
- DELAYING
- SWORDSMAN
- DOMICILE
- EXPLICATIVE
- SAVORS
- PARDIEU
- COINING
- GARDES
- MOONLIGHTED
- SOLITARILY
- MOVELESS
- UNDULATIONS
- SADDENED
- CAEN
- STEMS
- GLADES
- SYLVAN
- PILLARED
- FRONTED
- LICHEN
- HAGAR'S
- QUINCE
- SWEDENBORGIANS
- RUSK'S
- INACCURATE
- SUPERSEDING
- GOGGLED
- UNDERCHAMBERMAID
- IMPOUNDED
- ROUGIERRE
- JUSTER
- ORDAINED
- GRIMACING
- RUTHYN
- GOUT
- AV
- SWEDENBORGIAN
- DURE
- SPITEFULLY
- HOITY
- TOITY
- ELICIT
- FRENCHWOMAN
- OAKLEY
- QUAITE
- APPY
- VOUS
- SAVEZ
- MALADES
- TARTNESS
- PARFOIS
- SHAMMING
- TRANSITIONS
- TESSELLATION
- DETESTABLY
- AUSTIN
- IRREVERENT
- VIALS
- COURTING
- REPUGNANT
- UPROARIOUSLY
- COURSED
- IMPERSONATING
- BLISTER
- MALVOLIO
- WOOING
- HUG
- FAKIR
- SUPPOSITIOUS
- MARTHIEU'S
- BELIEVER
- NUMB
- UNCONSCIOUSNESS
- DEVOTEES
- O'SHANTER
- WARTHIN
- WOTAN
- SIGMUND
- VALHALLA
- CLAIRVOYANCE
- HEALERS
- PROFESSIONALS
- PROSTITUTED
- COCKE'S
- QUINCEY'S
- PROFUNDIS
- MESMER'S
- PROLONGING
- ZOLA'S
- CRUSADES
- POET'S
- DEMONIACAL
- DERVISHES
- FAKIRS
- IMPERSONATIONS
- SUMNER
- CUSHMAN
- COOKE
- PELS
- BUTTS
- MACKWORTH'S
- BLURRING
- JOAN
- FISHIEST
- GRIMNESS
- SCORCHING
- CLOAKING
- MINSTREL
- GROSSER
- SLUSHY
- SNOWBALLS
- MOAT
- SKATES
- GROAT
- JACKDAWS
- YULE
- SUET
- CALDRONS
- MALMSEY
- BROACHED
- DAFFODILS
- FARAWAY
- FALWORTH'S
- BAREHANDED
- BAREARMED
- OUTFIELDERS
- DICCON
- DEVLEN
- SHINNED
- CLEMATIS
- YEW
- GREENNESS
- SUNSTROKE
- TRADE'LL
- THANKY
- BRINGIN
- YE'VE
- THORT
- AN'
- ARRYSTOCRACY
- UNCOILING
- FURLED
- WHARFMEN
- TRADE'S
- TRYIN
- HANKERCHER
- GYPSY
- DERE
- FREND
- THEVES
- WIL
- PARDNERS
- ILE
- CUMS
- ENNY
- FELER
- TRISE
- THATS
- THERES
- YURE
- PERVIDED
- FUST
- RISTOCRAT
- MINNIT
- HARRISON'S
- DUBIOUSNESS
- DORINCOURT'S
- OFTTIMES
- GIRDERS
- HYDROGRAPHIC
- SHIPMASTERS
- VENTURESOME
- COLLATED
- TABULATED
- BASER
- SUBLIMATION
- FILTRATION
- CRYSTALLIZATION
- ANTIMONY
- PHOSPHORUS
- OILS
- REDMAN
- GENOESE
- NAVIGATOR
- LIVINGSTONE
- EVANGEL
- UNIFIED
- PURGING
- DROSS
- TRIVIALITY
- PIVOT
- SWINGS
- DEPRIVATION
- TRUER
- RESURRECTION
- SHOWINGS
- EVOLVING
- COMMENTATORS
- DISHEARTEN
- UNSTRAINED
- FLUIDS
- LODESTONE
- CHARCOT
- METALOGY
- PSYCHIATRICALLY
- ORIENTED
- OBSTETRICIAN
- GYNECOLOGIST
- CATALYZED
- VISUAL
- IMAGERY
- SEMANTIC
- HYPERACUTE
- INTERPERSONAL
- CRUX
- INHIBITED
- ASIANS
- JANET
- WOLBERG
- PSYCHOANALYST
- THEORETICAL
- RETROGRESSIVE
- UNCOVER
- TRAUMATIC
- DUPLICATING
- SCHNECK
- CLINICAL
- EQUATED
- IMMOBILIZATION
- EQUATE
- ANALOGY
- DISSOCIATION
- DREDGED
- AUTOMATICISM
- NEGATE
- OPERATIONAL
- THERAPIST
- THERAPIST'S
- CONNOTATES
- VERNAL
- DONALD
- DAVIS
- SELFLESSNESS
- WITHERS
- COMPLAININGS
- INERTIA
- GLORIFY
- FUNLESS
- PITTSBURGH
- REHEARSING
- INSTITUTIONALIZED
- UNDIVIDED
- STAINLESS
- BRINDISI
- GARTH
- OCULIST
- DALMAIN
- CURT
- PENCE
- APENNY
- TEARLESS
- STEAMED
- CHARING
- STANDSTILL
- CHILDHOOD'S
- DUCHESS'S
- PILOTED
- BROUGHAM
- TRAFALGAR
- HOMICIDAL
- MONGERS
- UNSIGHTLY
- UNCORDIAL
- MEDIATOR
- ACCOSTING
- INCLINING
- PERFIDIOUS
- UNCLOSING
- UNCIVIL
- WILSONS
- MAMMA'S
- LATISH
- FATHERLY
- OFFICIOUSLY
- AFFIRMATORY
- IRATE
- BREWED
- AUDITOR
- SLAMMING
- AT'TI
- ATTILA'S
- TIEW
- THOR'IS
- MOND
- THEODORIC
- THORISMOND
- AETIUS
- HIPPO
- VALENTINIAN
- ALARIC
- JO'RI
- PLUNDERING
- LEO'S
- VANDAL
- TOWING
- JUSTINUS
- STEAMSHIPS
- OSTROGOTHS
- VIT'I
- GES
- RAVENNA
- OSTROGOTH
- TRIB
- O'NI
- HAM'ME
- MUS'SUL
- MANS
- AMIN
- WORSHIPED
- HIRA
- DI'NA
- MOHAMMED'S
- HEJ'I
- RA
- MUSSULMAN
- GIVETH
- KA'A
- SULTANS
- WHIMSICALLY
- SCULPTED
- SINGHALESE
- REENTERING
- REAPS
- INDUSTRIALIZED
- PERCIVAL
- KAFFIR
- WORKDAY
- NOW'S
- CRESPO
- ANDAMAN
- FANTASIZING
- ENVISIONING
- PELVIC
- OFFHAND
- PERENNIAL
- LEAFED
- GLEEFUL
- NATURALIST
- INSINUATING
- OYSTERBANK
- OCCUPATIONAL
- PHOSPHATE
- CARBONATE
- GELATIN
- FESTERING
- SECRETION
- MOLLUSCA
- ACEPHALA
- ABALONE
- TURBO
- SALTWATER
- SCALLOPS
- MOLLUSK
- MELEAGRINA
- MARGARITIFERA
- GLOBULAR
- EMBEDDED
- NUCLEUS
- CONCENTRIC
- YELPED
- PLIERS
- SORTERS
- MEATY
- DAPPLED
- PARAGONS
- MOLLUSK'S
- FARMED
- FIANCEE
- WOW
- AMMONIA
- REWARDING
- SWALLOWING
- GULPS
- HARPOONER
- SWIVEL
- UNDERBODY
- UGLIEST
- GIRTH
- DEGRADEMENT
- BLEARY
- BITCHES
- BITCH
- TRIMMER
- MILLINERY
- EXPECTANTLY
- SICKENS
- DENTIST
- DISPATCHER
- INSTALLMENT
- EXALT
- OPPRESSING
- HOMOLOGY
- INFAMIES
- TURKS
- OUTRAGES
- MEDDLED
- ARBITRATION
- DISBAND
- INTENTIONED
- WRONGING
- OUTSIDERS
- DISARM
- IMMUNITY
- SUDANESE
- INTOLERANCE
- SONNETEER
- VANTAGE
- DISARMAMENT
- ARBITRATE
- VEGETARIANISM
- VACCINATION
- ILLS
- REPUDIATED
- CLAMORS
- BUNDEFJORD
- JOERGEN
- STUBBERUD
- LINDSTROEM'S
- INSULATION
- CELLULOSE
- OBLIQUE
- ROOFING
- LINOLEUM
- LUX
- VENTILATING
- ROENNE
- SEWED
- OUTFITTERS
- CLAMPS
- SHOEING
- EBONITE
- NARRATIVES
- PALSGAARD
- JUTLAND
- ALUMINIUM
- HANDIER
- WINTERED
- FRAM
- LAPPS
- NETCHELLI
- ANORAKS
- BRANDT
- FURRIER
- BURBERRY
- ANORAK
- ROOMY
- WINDPROOF
- FRAMHEIM
- TASMANIA
- WOOLLEN
- TRANSITIONAL
- CRAMPONS
- BOOTMAKER'S
- SHOEMAKER'S
- COOKERS
- COOKER
- UTILIZES
- TANDEM
- UTILIZING
- SEXTANTS
- DILUTE
- INCONVENIENCED
- BINOCULARS
- ZEISS
- GOERTZ
- GOGGLES
- SCHANZ
- CAMERAS
- ANEROIDS
- HYPSOMETERS
- HYPSOMETER
- VENDOR'S
- JAEDEREN
- HANTS
- BASINGSTOKE
- POPHAM
- REJECTS
- HENLEY
- PARSONAGES
- WHITEWASH
- QUICKSET
- SYCAMORES
- RECTORIES
- ASSESSED
- BUILDERS
- SONNING
- PUBLICATIONS
- HANCOCK
- GUILLOTINE
- INCIVISM
- ARABLE
- DETENUS
- BUONAPARTE'S
- TOURNELLE'S
- PROLOGUES
- EPILOGUES
- REVIEWER
- BERTRAM
- UNSUSPICIOUS
- TINGES
- UNNAMED
- CANDLESTICKS
- FURMITY
- TANSEY
- NOVELTIES
- TENANT'S
- CARPETING
- INVALIDS
- BOOKCASES
- COURTESIES
- PALMY
- GRANDISON
- CLOWNISH
- MINUETS
- LAPPET
- IMMACULATELY
- COTILLONS
- REELS
- RECONDITE
- COOKERY
- SCANTILY
- PERFORMING
- PANTRIES
- UNPACKED
- LANTHORN
- CLIPPING
- GAMEKEEPERS
- STUD
- GRANDSONS
- INFORMS
- PATTENS
- TRIVIA
- ASCRIBES
- GALOSHE
- EPIGRAM
- PERROT
- FOOTE
- PLAITING
- GADDING
- HANDMAIDS
- MYTHOLOGY
- OBLIVIOUS
- LUDDY'S
- DIATRIBES
- INDORSEMENT
- DOPE
- TELEGRAPHING
- INDIVIDUALISE
- ELITE
- PROFITLESS
- UNREHEARSED
- EXHILARATION
- PREMONITION
- PRONOUNCEMENT
- BURTON
- FLUKE
- INDULGENTLY
- DISCONCERTING
- FASHIONABLY
- MOYNE'S
- GRITTED
- LAVISHNESS
- CASHIER'S
- MONOTONE
- MOYNES
- CIRCUMSPECT
- MONOTONOUSLY
- SPLUTTERED
- TELLTALE
- PRUSSIC
- HYDROCYANIC
- ADHESIVE
- TOPMOST
- GRILL
- VISUALLY
- SERO
- ACCENTUATED
- PLATINUM
- ENTWINED
- MYSTICS
- CONISTON
- ACKNOWLEDGEMENT
- HALSEY
- INTERSTELLAR
- CONISTON'S
- PSEUDOMAIL
- SPINDLY
- CLANKING
- THROATY
- LINGUISTS
- SEAR
- PLANETARA'S
- HANDLER
- CALCULATORS
- LOCATING
- CHARTING
- MIKO'S
- STARLIT
- PORTHOLE
- NEARBY
- HOLA
- SHEATHED
- PLANETARA
- LISTED
- ENTERTAINER
- OILER
- HALSEY'S
- MASQUERADERS
- PARRIED
- SOMETHING'S
- GAMBLERS
- ARDLEY
- STACK
- CUTE
- MICROPHONE
- DRAPE
- PURSER'S
- SHAC'S
- PROJECTOR
- MOA
- NIGHTROBE
- RASPED
- AUDIPHONED
- TAMPERING
- MARAUDER
- UNSEAL
- ELECTRONIZED
- CIRCUITED
- NUMBED
- SPOTLIGHT
- CARBIDE
- PENNICOTE
- MAIDENHOOD
- GASCOIGNE'S
- WITCHING
- UNARGUMENTATIVELY
- FULFILLMENT
- SUED
- IRREVOCABLY
- HUMDRUM
- PROMOTIONS
- SYLPH
- ARROWPOINTS
- LIMPNESS
- KNOWABLE
- IMPRESSES
- LAUNDRESS'S
- HARLETH
- CRISPEST
- DECAMPMENT
- UNUSUALNESS
- UNHESITATING
- CREASES
- DREGS
- COMPUNCTIOUS
- IRIDESCENCE
- MACBETH'S
- BASELY
- DIAL'S
- HENLEIGH
- MALLINGER
- STILLER
- VACILLATING
- SPANIEL
- FOREPAWS
- MALTESE
- UNIMPASSIONED
- INTERRUPTEDLY
- RELIGHTING
- CUSHATS
- DRAWL
- KLESMER
- ORATORY
- RUMPUS
- MARKEDLY
- INSPECTING
- HANDINESS
- GRANDCOURT'S
- CONFIDANT
- RELISHING
- MARCUS
- AURELIUS
- HINDERING
- VITIATION
- DERONDA'S
- DOOMS
- FULFILLS
- HUGO
- DIPLOW
- LEUBRONN
- INAUGURAL
- LECTURESHIP
- UPTON
- IMPLICITS
- UNDERLIE
- KEMPIS
- TRANCES
- AUTISTIC
- CULTS
- EDUCATIONISTS
- CORPORATE
- UNFAILING
- OLDNESS
- DESCANT
- FIGARO
- COMELINESS
- UNDECIDED
- AN'T
- BROODINGLY
- MESSES
- KEENNESS
- CHANGEABLE
- UNHEALTHINESS
- REPINED
- HUME
- HOULDSWORTH
- TRAMMELS
- SENSUOUS
- OVERBALANCED
- STANSFIELDS
- SOUTHAMPTON
- SHOPPY
- SEVERALLY
- WEEDED
- THOMSON'S
- HAYLEY'S
- WARDS
- BERESFORDS
- RUTLANDSHIRE
- BAFFLE
- WITNESSING
- ARBUTUS
- CRAMPTON
- CRITICISING
- KNICK
- KNACKS
- SHRINKAGE
- COHERENT
- BLAMELESS
- OSTENSIBLY
- OSTENSIBLE
- CONSUMMATELY
- PONDERABLE
- MAUD'S
- PERFORMER'S
- NOTATION
- PREPONDERANTLY
- MERTON
- SPECTATORSHIP
- AMPLIFIED
- SHARERS
- CONTRACTILE
- INNOCUOUS
- PACIFIED
- PARAGRAPHED
- THEALE'S
- WEARERS
- EXPERTNESS
- UNMISTAKEABLY
- SAMPLED
- CHALKED
- BRITON
- BRITON'S
- FORMULATION
- DENSHER'S
- THEALE
- SEASON'S
- GREGARIOUS
- SCRIBBLING
- JOURNALISING
- BOOMABLE
- PREDICATED
- TWADDLE
- KATE'S
- OVERTLY
- DISCLAIMER
- CHARMER
- PLEASANTRIES
- CROYS
- SUSIE
- CROY
- MANNINGHAM
- WAIVING
- UNOBJECTIONABLE
- MATCHLOCKS
- NUMBNESS
- BLEEDS
- DORCHESTER
- SHERLOCK'S
- FEAR'ST
- CLIMBS
- SLANTS
- LANDER
- DOT
- FAUGH
- REBUKING
- ELZEVIR'S
- FRESHER
- PICKABACK
- FRESHENING
- GUILLEMOTS
- BUDGE
- ZOUNDS
- SUNPATH
- SPANGLED
- MACKEREL'S
- BRASHY
- BUTTRESSED
- PRYING
- THIMBLEFUL
- SOUND'
- BUTTED
- THEREABOUT
- POWDERY
- NUMERATION
- BLUNDERBUSS
- TOPP
- BUZZARD
- TEMPTER
- FOWLING
- WESTERING
- ANVIL
- ASKEW
- QUARRYMEN
- WINCHES
- MANDRIVE
- HEWED
- OVERGREW
- LOTH
- SUBORNING
- JARRINGS
- GARBOILS
- VERIEST
- LOUT
- INNOCENTEST
- EXHORT
- GAOL
- INARIME
- TYPHEUS
- DISPERST
- SHRINKETH
- GODERAN
- GLOSSES
- DOTE
- SUBORNERS
- CHASES
- RACKETS
- GUTS
- FRESNO
- TENAYA'S
- CAPITAN
- POHONO
- BOLING'S
- MONO
- MONOS
- TOURIST'S
- MANN
- HITE
- NEAL
- CUNNINGHAM
- RECESSION
- INTIMATES
- SEASICKNESS
- OVERSEAS
- THIRTIES
- TERRIFYINGLY
- MONOLOGUE
- CONNOISSEUR
- TATTOOED
- DEPLIS
- PINCINI'S
- PACKLETIDE'S
- SHOEMAKER
- ALIAKHIN
- EMPLOYER'S
- IKON
- LARDER
- SNEEZE
- BELABOURED
- VODKA
- CUCUMBERS
- GUZZLE
- FLOG
- FEDYA
- UNDERHERDSMAN
- SHEAT
- SNOWDRIFT
- QUADRILLE
- ALIONA
- TEGOR
- TROIKAS
- CANDIA
- CLERKENWELL
- EXTENUATIONS
- KNAVERY
- COLLUSION
- WHITECHAPPEL
- RUMOURED
- TURNPIKES
- CONFUTED
- PROVIDENCES
- COMPLEXLY
- LISBON
- SNARE
- FOWLER
- BUCKINGHAMSHIRE
- BEDFORDSHIRE
- VILLAINIES
- LEVITIES
- DESOLATING
- ABSTRACTLY
- INTERMITTED
- UNENCUMBERED
- WAPPING
- RATCLIFF
- STEPNEY
- PREFERMENTS
- CONFLUX
- CONDOLE
- ORDEALS
- CANNIBAL
- ORPHANED
- WANTONLY
- CURDLING
- CAIN
- FALLON'S
- DIEM
- RESCUERS
- KESEBERG'S
- CACHE
- WINNERS
- RANCHERIA
- MOULDERS
- ELITHA
- LEANNA
- COON'S
- SHIPMENT
- BROTHS
- APPETIZERS
- METE
- NIMBLER
- LAGGARDS
- PROVOKINGLY
- COYOTES
- OFFICERED
- STONEMAN
- DARKEY
- LAWD
- TOLE
- MUDDER
- SPRINKLING
- HOLLYHOCKS
- PANTALETS
- PREARRANGED
- MUS
- YOS
- CAISSON
- CHAPEAU
- GAUNTLETS
- FANNIE
- COUCHED
- LAMB'S
- JOSEPHINE
- BUGLERS
- POODLE
- SNUGGLED
- EMPRESS'S
- CRIB
- FLEECY
- FITCH
- LEESE
- HEMMING
- VALLEJOS
- HOED
- LUXURIANTLY
- GEORGIA'S
- STINGING
- WELTS
- WABBLING
- PALING
- DROVERS
- BRUNNERS
- LEVYING
- DOLE
- WHIGS
- CORONATION
- PARABLES
- BOSWORTH
- SCOT
- BARONS
- LOCKE'S
- DISSEMBLED
- INVEIGLE
- CLANSMEN
- CLANS
- CAMPBELL
- ATHOL
- EVERARD
- DYKVELT
- CUTTHROATS
- BUTCHERY
- ANNOYING
- BATAVIAN
- UTRECHT
- COMMONWEALTHS
- NASSAU
- FUNCTIONARIES
- BEVIL
- ZUYDER
- ZEE
- STADTHOUSE
- MISINFORMATION
- TEXEL
- TELESCOPES
- UNWISELY
- ARGYLESHIRE
- CAMPBELLS
- HERDSMEN
- CALLUM
- ARMAMENT
- CAMPBELLTOWN
- KINTYRE
- SCURRILITY
- ELIACHIM
- MANASSES
- NABUCHODONOSOR
- ANGE
- PILLAGED
- THARSIS
- MAMBRE
- DESTROYETH
- SOBAL
- LIBYA
- APAMEA
- ESDRELON
- FASTINGS
- PROFANED
- DWELLETH
- CHARAN
- HETHITES
- HEVITES
- AMORRHITES
- HESEBON
- FORSAKEST
- BESIEGETH
- LOOKETH
- DOTHAIN
- BELMA
- CHELMON
- SCOTIA'S
- DISCREDITED
- MONITOR
- SURFACED
- EXCERPT
- POPULATING
- ICHTHYOLOGICAL
- CETACEANS
- BASICALLY
- CATALOGED
- FIVEFOLD
- SHANNON
- PERFORATE
- CENTIMETERS
- FRIGATES
- RAUCOUSLY
- MILLENNIA
- FANTASIES
- FABLED
- DAUNTING
- LLOYD'S
- FRANCE'S
- PACKETBOAT
- WAGS
- TAMPICO
- SHANGHAI
- BREATHER
- BUNKERS
- CREWMAN
- STOKE
- UNFORGIVABLE
- HOBSON'S
- GOVERNMENT'S
- PUNCTILIOUS
- HARDWORKING
- BIOLOGICAL
- ACROBATIC
- SUBCLASSES
- SUBGENERA
- SUITCASE
- CONGO
- AILMENTS
- UNDERHANDED
- ARCHAEOTHERIUM
- HYRACOTHERIUM
- OREODONTS
- CHEIROPOTAMUS
- EXPERTLY
- MEZZANINE
- VOMITING
- COMPLEMENTED
- WHELK
- MOORINGS
- RELAYED
- ACTIVATED
- PISTONS
- STEAMBOATS
- LINERS
- MIZZEN
- LIGHTSHIP
- SKILLS
- AUTHORIZED
- HARPOONED
- CABO
- LAS
- VIRGENES
- HOMETOWN
- HOORN
- NYCTALOPIC
- PERCENT
- STERNRAIL
- COWLS
- COMPANIONWAYS
- CETACEAN'S
- SOUTHERNMOST
- PERIMETER
- CHIDED
- NOTHING'S
- SPOTTING
- NARWHALE'S
- VIOLATE
- BOSUN
- SHIPBOARD
- CROSSTREES
- WHALES
- VEERING
- REVERSING
- BEACHES
- MESSROOM
- FARRAGUT'S
- STUBBORNNESS
- CREW'S
- FUNCTIONED
- LONGBOATS
- LEEWARD
- STEMPOST
- ROOSTING
- CASSON'S
- CORROBORATING
- HUP
- HIN
- PIG'S
- BUGLE
- MIKE
- HOLDSWORTH'S
- LOAMSNIRE
- SHOULDNA
- TENANTRY
- ALLAYS
- MISBEHAVE
- RICK
- HETTY'S
- CURTSY
- TOTTY
- CHISELLED
- SUGARY
- BOARDING'S
- COVETOUS
- SMELL'S
- UNROLLING
- BLANKNESS
- SPEARING
- WORRET
- OVERWORK
- DINGALL
- MEASLES
- WELLY
- BETHELL
- GENTLEFOLKS'S
- GELLS
- SHANNA
- HOPPIN
- TUMBLES
- ON'Y
- MAGGOT
- SPINNIN
- UNDERHAND
- ISNA
- PORRIDGE
- WAGGONER
- GANDER
- PONY'S
- UNSPEARING
- THEE'ST
- DRIBBLE
- LEAKY
- TH
- KNOW'ST
- WORRETING
- UNFASTENING
- APPORTION
- FORESAW
- ENTAIL
- DONNITHORNE'S
- POYSERS
- METHODISTS
- PRINCIPLED
- PERSUADER'S
- COMMANDMENTS
- IRREVOCABLE
- GRENETTE
- DANDIES
- SLOVENS
- APATHETICALLY
- SUPERVENED
- VS
- DEMISE
- BART
- SUPERSCRIBED
- ESQUIRE
- LEGITIMACY
- SLEETY
- SLOPPY
- IGNITING
- MILADY
- UNGLOSSED
- DETERMINATELY
- FALSENESS
- UNBORN
- DIVORCED
- INFANT'S
- OUTRAGEOUSLY
- STRATAGEMS
- SEVERN'S
- VORACITY
- DAUNTLESSNESS
- CAPSIZE
- WISTING'S
- HANSSEN'S
- RECOMMENCED
- SAHARA
- PROMISINGLY
- ADAMANTINE
- OLAV
- OVERLYING
- DOOMSDAY
- ICEWAVE
- HUMMOCKS
- TACKING
- HASSEL
- CIRCUMSPECTION
- CIRCUITS
- MERGING
- SNATCHES
- LULLABIES
- CROONS
- MEMORY'S
- MEW
- OFFSHOOTS
- LULLA
- COLE
- PERRAULT'S
- RIQUET
- CINDERELLA
- HURD
- HOUGHTON'S
- TRANSCRIPT
- REPRINT
- BARCLAY'S
- UNSUPPORTED
- CROWNINSHIELD
- ANTIQUARIAN
- BOSTON'S
- CROON
- GOOSEFOOT
- RONALD
- BARCLAY
- WORTHINGTON
- MELODIOUS
- MEAGER
- NEWBURY
- QUOTA
- OYE
- BAUM
- EWES
- SHEPHERDESSES
- LAMBKINS
- TIPPETS
- BEWAIL
- PUSSYCATS
- THUNDEROUS
- COUNCILLORS
- TYRANNISED
- HYPNOTISE
- ORGANISATIONS
- FLAUNTING
- PAGEANTRY
- AMPHITHEATRAL
- CONGESTED
- BRANDISHING
- THEATRICALLY
- CHINAMEN
- CLAMBERED
- UPLANDS
- REPLACEMENT
- VIADUCT
- OVERARCHED
- STARRED
- CARCASSES
- POTTERS
- OVERLOOKERS
- VIGOURS
- PHYSIQUE
- MANAGERS
- FOREWOMEN
- DRAY
- PRODUCERS
- MINDER
- CHESTED
- DRUDGE
- EUTHANASY
- AGGREGATED
- SPARSELY
- FILIGREE
- GROTESQUES
- GEOMETRICAL
- MOTIF
- APOLOGISED
- CRUDITIES
- FLOURY
- LIFTS
- ARMATURE
- TANNING
- REEK
- BREWERY
- MASSIVENESS
- BRICKWORK
- ANEMIC
- HOOTING
- OSTROG'S
- TIERS
- IMPASSABLE
- CHAPLETS
- PERVERSION
- AMBROSIAL
- PALLAS
- ROUSES
- PROVOKES
- CONTUMACIOUS
- VENUS'S
- SUSPENDING
- DEPLORED
- BEHOLDER
- NECTAREOUS
- UNGRATIFIED
- INHABITS
- SHEAVES
- RAKES
- REAPERS
- ALLAY
- PROPITIATE
- STOREHOUSE
- VETCHES
- GODDESSES
- CARRIER
- STYGIAN
- AMBROSIA
- ALLEGORICAL
- PURIFIED
- COMUS
- UNSPOTTED
- FANCY'S
- GROTTO'S
- TRACERY
- SPARS
- NEVERMORE
- APULEIUS
- PHOEBE'S
- REGIONED
- VESPER
- EPICUREAN
- TIPPLE
- SHEEP'S
- FETA
- MIXER
- CELLARED
- VINTAGES
- TASTERS
- PALATES
- NARY
- BURGUNDY
- COTES
- BAUNE
- BURGUNDIAN
- SALUT
- MARGAUX
- NOTABLES
- SAINTE
- MAURE
- VENDOME
- LOIRE
- VOUVRAY
- CHABLIS
- CLARET
- PROVOLONE
- CHIANTI
- NEUFCHATEL
- UNIQUELY
- BRINZAS
- TOKAY
- OLOROSO
- SHERRY
- LEICESTER
- AMONTILLADO
- APPETIZER
- STILTON
- CHEDDARS
- FRAUDULENT
- COUNTERFEIT
- GOLDWASSER
- LIQUEUR
- SEEDED
- SAUCER
- JUNIPER
- FROMAGE
- HOMEMADE
- REDUNDANT
- BUTTERMILK
- MUTABLE
- CARTIER
- QUEBEC
- EXTERMINATED
- CAROLINAS
- SOUTHEASTERN
- ERIES
- CHAMPLAIN
- HARVESTS
- VERMONT
- PROLIFIC
- MICMACS
- PAPINACHOIS
- BERSIAMITES
- TADOUSSAC
- MUSTERING
- CANNIBALISM
- ATTICAMEGUES
- MARAUDERS
- OTTAWA
- UBIQUITOUS
- NIPISSINGS
- SAGUENAY
- CATHOLICITY
- INSENSATE
- POPULATIONS
- HEALTHILY
- DISSIMULATION
- UNDID
- STIMULATES
- DISMEMBERED
- SCANTLINGS
- SUAVITY
- DEFINITENESS
- HYPOCRITICAL
- HARK
- JODO
- HAKATA
- SHO
- CHIKU
- BAI
- PLUMFLOWER
- MYSTICALLY
- WEIRDER
- ASSAULTS
- LIKEN
- ESOTERIC
- TEXTS
- WINDLASS
- MOXA
- KAJIWARA
- UMEGAE'S
- CHOZUBACHI
- TATAITE
- KOKUSHI
- HERMITAGES
- PRIEST'S
- TOMYO
- HEADMAN'S
- UNHARMED
- RECLUSE
- IMPIETY
- REBORN
- KUNSHI
- AYAYUKI
- NI
- CHIKAYORAZU
- DAIMYO
- REESTABLISH
- KARMA
- UNBARRING
- TOPKNOT
- REUNITED
- SHAMELESSLY
- NAPE
- LEVERANCE
- DETACHES
- ISOGAI
- HEIDAZAEMON
- TAKETSURA
- MIYAGE
- KOROMO
- GOBLIN'S
- TOMBSTONE
- SEGAKI
- SKINNERS
- DEPREDATIONS
- COMMISERATION
- TRITE
- LOWLAND
- BESPOKE
- DEERSKIN
- CHINKING
- ATTAINMENTS
- PHIALS
- APPLICANT
- PREDICARY
- SECUNDEM
- ARTEM
- INDISPOSITION
- DOCTORED
- YARBS
- UNLETTERED
- RIG'LARS
- RIG'LAR
- COUNSELORS
- BEDRIDDEN
- CONDEMNABLE
- FEBRILE
- HOARFROST
- PHLEBOTOMY
- CONSTITUTIONS
- MASON
- DEBILITATED
- COGENT
- WELLMERE'S
- WELLMERE
- CLINTON
- LEECH
- SINGLETON'S
- FORBEARING
- CONSANGUINITY
- AFFIXED
- IMPROBABLY
- UNIFORMLY
- VARIABILITY
- GAZER
- SOLILOQUY
- SHAMBLING
- QUALIFY
- FATNESS
- MANDATE
- CONVALESCENT
- FORBORE
- UNTOWARD
- YANKEES
- INCLEMENCIES
- VANQUISH
- TRENCHANT
- SPIKES
- BLASPHEMY
- BOBBINS
- PINCHBECK
- CABALLERO
- CENSORIOUS
- MARAVEDIS
- DISTRIBUTES
- ORDAINS
- UNREACHED
- FALTER
- KNICKKNACK
- APOLLONIA
- FOCILE
- BENE
- QUIDEM
- IMPEDE
- DEFRAUDING
- ERRANT'S
- SHATTER
- PANZAS
- HOOPS
- CASK
- REVOKED
- SAMSON'S
- ALFORJAS
- DIEGO'S
- ASSES
- GIBBERISH
- PAR
- PEDIGREES
- SOLDER
- FLAWS
- UNGRUDGINGLY
- THROWER
- WRESTLER
- BOWLS
- EWE
- GAZES
- GRAVELLING
- TANNERIES
- MAJALAHONDA
- CANON
- SALAMANCA
- SWORDSMANSHIP
- LICENTIATE'S
- CASSOCK
- WRESTLE
- FENCERS
- FLUTES
- PSALTERIES
- GAMBOLLING
- PHOEBUS
- SLEEPEST
- UNSTINTING
- WORSHIP'S
- STEWPOTS
- WHITEST
- CAULDRONS
- SEWN
- SKIM
- SKIMMINGS
- ARCADE
- JABBERED
- VOUCHSAFED
- EGGSHELL
- APE'S
- DENIZEN
- RESCUER
- ELICITING
- ZEALOUSLY
- UNSCATHED
- HOLSTER
- CASING
- BOLTING
- BREASTPLATE
- MAUDLIN
- WEAKLING
- DEFAMER
- CLINCH
- KOVA'S
- SINUOUS
- INTENTNESS
- BOMBARDED
- JIBBERING
- LOOTING
- NEGOTIATED
- IMPREGNABLE
- JEDDAK'S
- SALVERS
- PADLOCK
- ENSHROUDING
- LONGSWORD
- ZODANGAN
- SHAMBLES
- FIGHTERS
- KAN'S
- ZODANGANS
- BARSOOM
- DEPOSED'
- BUNGLING
- DUFF
- HORNPIPE
- VEHEMENCE
- GUM
- NECK'S
- JANGLE
- CLOVE
- TOON
- CUR'OSITY
- DEPPOSED
- OVERTURN
- MERRY'S
- CRUTCH
- TEETOTUM
- HAFT
- MALARIA
- DIAGONAL
- LIVESEY
- GIGS
- INLET
- ANCHORAGE
- GIG
- DERELICTION
- QUADRILATERALS
- AMASSING
- SCUTTLED
- ME'LL
- ABSENCES
- WISPS
- SLIGHTS
- UNWEARYING
- CONNIVED
- CHEAPLY
- SMIT
- RIGGED
- SEAFARING
- DEMONSTRATING
- GEOMETERS
- COMPENSE
- REFUTE
- AGITATE
- DIGRESSION
- PERTAINS
- REDUCES
- HARDENS
- TRANSMUTATION
- VEGETATIVE
- ANNEXED
- VENTRICLES
- CAVA
- ARTERIOSA
- ARTERIA
- VENOSA
- WINDPIPE
- VENTRICLE
- REFLUX
- ORIFICE
- AURICLES
- VERISIMILITUDES
- COUNTERWEIGHTS
- STRAITNESS
- COPIOUSLY
- COVERINGS
- COMPRESS
- EVINCE
- RAREFIES
- REPASSING
- SENSUS
- COMMUNIS
- AUTOMATA
- FABRICATED
- INCOMPARABLY
- VOCABLES
- APPOSITELY
- EDUCED
- EDITED
- INTRODUCTORY
- CONFORMABLY
- HEADINGS
- MASTIFFS
- THACKERAY
- AYALA'S
- WORTLE'S
- FROHMANN
- BLACKWOOD
- HARTING
- GARRULITY
- DIRECTOR
- CHANCERY
- JULIANS
- LIVERED
- CURS
- CINCINNATI
- SUBMITS
- TUITION
- DRACO
- THRASHINGS
- BI
- COLUMNED
- SUPERVEILLANCE
- PARIAH
- STOPPAGE
- DEFALCATION
- APPURTENANCES
- BAILIFF'S
- SIZAR
- ACERBITIES
- ADOLPHUS
- DENOMINATIONS
- SUBDIVISIONS
- UNFLAGGING
- ALPHABET
- LEXICON
- GRADUS
- HOOKHAM'S
- ORLEY
- EXTRAS
- FERULE
- FERULES
- SCOURGINGS
- PROSE
- PETYA
- TROITSA
- DISBELIEVING
- TAPES
- REPLAITING
- UNPLAITED
- REPLAITED
- MAMONOV'S
- SONYA'S
- FLEXIBLE
- ORDERLIES
- BOLSTER
- WRY
- MORTIFYING
- CHARRED
- SEMIDARKNESS
- COLLAPSING
- COMPASSIONATELY
- DAMMING
- SQUIRREL'S
- MISSPELLED
- SPELLED
- CLOTEL
- TRADER'S
- COUNTEFIT
- GENEWINE
- ARTEKIL
- HEARTRENDING
- GENTMAN
- GIB
- SHINEY
- DUS
- FIREMEN
- VALVE
- BOILERS
- SCALDED
- REDEEMING
- STEAMBOAT'S
- BOWIE
- NECKCLOTH
- WALKER'S
- TRAFFICKER
- AFRIC'S
- DEPLORE
- SUFFERER'S
- PIRATE'S
- HAVOC
- PROTECTS
- PRESUMES
- SCAN
- PROTEUS
- UNROLL
- MANTLED
- ASSAIL
- DISSIMULATION'S
- ILYIN
- OVERRESIST
- UNCONCIOUSLY
- ROSTOV'S
- FATTENING
- COMMUNE
- CONSCRIPTED
- WRATHFUL
- UNMEANINGLY
- BICKERING
- YAKOV
- PROPRIETOR'S
- BAST
- DICTIONARIES
- OBTRUDE
- YANKOVO
- BLUSHINGLY
- DUNYASHA
- HEIRESSES
- ROSTOPCHIN'S
- TRADESMEN'S
- VYAZMA
- WITTGENSTEIN
- MUTINOUS
- BONAPARTE'S
- MALEVOLENTLY
- LOCKUP
- CAPTURES
- CAJOLERY
- RELEASING
- FRENCHMAN'S
- PELISSES
- PUCKERED
- LUBYANKA
- LOBNOE
- EVSTAFEY
- PERKHUSHKOVO
- CAISSONS
- IRRATIONALLY
- ACCUSTOM
- MILORADOVICH
- VOYNA
- SEMENOVSK
- GRIDNEVA
- PONIATOWSKI'S
- UVAROV'S
- FELICITATIONS
- PERSISTS
- HERTFORDSHIRE
- GEORGIANA
- INTERMARRIAGE
- BOURGH
- WILFULLY
- DESPONDING
- BENNETS
- LUCAS'S
- SLYNESS
- PATRONESS
- PLURAL
- SPACED
- NORMALLY
- SUBJECTIVELY
- ACCENTING
- CONSTITUENT
- CLASSIFIES
- SPECIFYING
- SEXTUPLE
- TRIPLET
- SEPTUPLE
- STEWARDESS
- BERTHA
- BRAID
- COY
- STRINGED
- RISIBLES
- MAL
- MER
- LIQUIDS
- GOONIES
- ALEUTIAN
- UNIMAK
- GREENS
- SUPPOSEDLY
- TUNDRA
- MISCREANTS
- GORGES
- GULLIED
- ALASKAN
- FLOES
- SCHOONERS
- ELMORE
- EPISODE
- HEARTFUL
- NEUKLUK
- PROSPECTING
- AGEETUK
- SALADS
- MEATS
- DIETING
- DAWSON
- OASIS
- FROGEITY
- MALADJUSTMENT
- TOULOUSE
- PONTUS
- ARAGO
- RAINFALL
- INDISTINGUISHABLES
- EJECTAMENTA
- TAHITI
- HOMOGENEITY
- EXCLUSIONIST
- IDENTIFIES
- SYMONS
- ACCELERATIVE
- ANCESTRALLY
- EXCLUSIONISTS
- ABEDARE
- EXCLUSIONISM
- PUBLISHES
- APOLOGIZES
- PAILFULS
- AXIALLY
- TANGENTIALLY
- THINKABLE
- CARMATHON
- GRIFFITH
- LEIRUS
- GASTEROSTEUS
- BIFURCATE
- FIFESHIRE
- SHIRE
- SERIAL
- MAGNET'S
- DISLODGMENT
- REPULSIONS
- CONVENTIONALISTS
- CONCEPT
- HAYSTACKS
- FERREL
- CALCUTTA
- CUBIT
- FUTTEPOOR
- CUTTERCOATS
- SUNDERLAND
- DIGESTIONS
- UNASSIMILABLE
- BOVINA
- HAILSTORM
- PRESBYTERIANISM
- HAILSTORMS
- INCREDIBILITIES
- POUNCING
- DIVERGENCE
- WAFTING
- HAILSTONES
- UNACCEPTABLE
- INOPERATIVE
- DODOES
- MOAS
- PTERODACTYLS
- CARBONIFEROUS
- DISINTEGRATE
- MUDS
- TROVES
- PALAEONTOLOGISTS
- CYCLONES
- OMNIPRESENCE
- HETEROGENEITY
- TENTATIVELY
- PROVISIONALLY
- LARVAL
- STATIONARINESS
- REDRUTH
- FABULOUS
- RUSTICS
- SHOVELED
- DAS
- PADERBORN
- SIGNIFICANCES
- SEGREGATIONS
- LOCALIZED
- SEGREGATE
- GRADATION
- SELECTIVENESS
- HIBERNATING
- INFINITUDE
- ENORMOUSNESS
- MIGRATORY
- MIGRATE
- FLUTTERINGS
- STRASBOURG
- SAVOY
- ADS
- CONDEMNS
- FRACTIONS
- ASTRONOMY
- LOCATES
- DECIMAL
- THERE'RE
- LOADER
- KANKAKEE
- HAMMERLESS
- HOUND'S
- JAG
- FOOTERS
- TIERCES
- STOREROOMS
- TIMEKEEPER
- TIMEKEEPERS
- EFFECTING
- TYPEWRITERS
- SICKING
- TYPEWRITING
- JIM'S
- CONSUMER
- STEERS
- REPRIEVE
- GAUGED
- FERTILIZER
- ELABORATING
- GUNNING
- OBSEQUIES
- SHORTS
- WORKINGMAN'S
- SYRUPY
- HEIFER
- PAVILION
- DIVIDENDS
- REFINERY
- CUCUMBER
- SUNBEAM
- DRAGS
- CATFISH
- CLYTEMNESTRA
- VOODOO
- SNIFTY
- JINKS
- STRINGING
- AMAZONS
- MEMORIAL
- SLAT
- CLIP
- TRIGGER'S
- SUBSPACE
- MANTELISH'S
- SPECIALTIES
- CORRELATED
- GALACTICS
- BASIC
- BEASTIES
- PLANETARY
- SUBPLANETARY
- UMPTEEN
- LORDY
- BIOLOGIST
- MACCADON
- CUBICLE
- FIDGETED
- NOSY
- INDUCES
- MODIFIES
- FAYLE'S
- FEDS
- SPILLED
- REFLECTIVELY
- SHEINBERG
- HORNSBY
- CYN
- THIA
- TILDEN
- BURRITT
- BROOKLINE
- CAMILLA
- DANIELS
- MONTCLAIR
- SHANE
- KENTFIELD
- DECKER
- GENEVIEVE
- MANITOU
- BARNES
- MARTINETTE
- WILEY
- EASTON
- CONSECRATION
- MULLOWNY
- ADVISEMENT
- SUMMARILY
- HOUSING
- ILLEGALLY
- MALONE
- DISPENSARY
- MATILDA
- MARINES
- QUANTICO
- RIVALED
- BURN'S
- SMUGGLED
- SARDINE
- HANDCUFFED
- MORALE
- STRIKERS
- INTERROGATED
- INDICTED
- TUMULTY'S
- INTERCEDE
- TUMULTY
- INADEQUATELY
- DANA
- COUNSELOR
- LIBERTARIAN
- NARRATING
- TERRORIZE
- O'BRIEN'S
- WADDILL
- MOOT
- JURISTS
- HAVEN
- RECRUIT
- REDRIFF
- SLUR
- NARROWNESS
- MISMANAGING
- ENGROSS
- DISTASTED
- LANDLADY'S
- SHEPTON
- BED'S
- IMMODEST
- RAVISHER
- BUNTING
- BULLION
- DESPERADO
- MUNDANE
- IMPROPERLY
- THOROUGHBRED
- FORBEARANCE
- BOSWELL
- ASSURES
- JAWED
- MARINERS
- SCRAPES
- DISQUIETED
- RECREATIONS
- DISPORTED
- YUCATAN
- TRIBUTOR
- CHRONICLER
- TRUCULENTLY
- CHRONICLES
- DRAPES
- POESY
- TROPE
- BLENDINGS
- INTERTWININGS
- IMPAIR
- LUCINDA
- UNDULATES
- CRADLED
- FRIVOLITIES
- REFASHION
- INCONTESTABLE
- BECKONS
- PROGENY
- THIRSTS
- WOMANLINESS
- SEPARATIONS
- WANTONNESS
- ANTITHESIS
- FRIVOLITY
- MUTUALITY
- CONSTRAINING
- DESECRATES
- SANCTITIES
- CLARITY
- FUSE
- ILLUMINATES
- GERMINATED
- DOUBLES
- WILHELMINA
- FATHOMLESS
- DISSONANCES
- WALKERS
- SICKLINESS
- SUNDERED
- IRRADIATED
- IDOLIZED
- GODLINESS
- REASSURANCE
- MOCK'D
- AGRIVAINE
- BRANDILES
- SAGRAMOUR
- DESIRUS
- DODYNAS
- SAUVAGE
- OZANNA
- LADYNAS
- PERSANT
- INDE
- IRONSIDE
- PELLEAS
- MAYED
- LASHED
- PRIVILY
- LOVETH
- LAMBETH
- DITCHES
- CUMBERED
- OFFAL
- VORACIOUSLY
- CARTED
- MISREPRESENTATION
- LORDES
- FAIRE
- GERE
- SAWEST
- SWOONED
- DESIREST
- PELENORE
- PATER
- NOSTER
- FLASKS
- BOAR'S
- STOMACHER
- BAWBLE
- ORACULAR
- URIEN
- FOREBORE
- SANGREAL
- HERMIT'S
- SURNAMED
- UNADVISEDLY
- UNCOURTEOUSLY
- THEREAT
- UNGENTLE
- UPRIGHTNESS
- HANDMAIDENS
- PRAYS
- RENEW
- SINNER'S
- ASKETH
- MEEKNESS
- TRAVILLA
- HEALTHIER
- CARRINGTONS
- FLATLY
- WEEPER
- CROSSEST
- BANISHMENT
- UNLADYLIKE
- PENITENT
- PLEADINGLY
- DOLL'S
- FLORA'S
- GLUE
- UNTRUTHFUL
- FLATTERETH
- SPREADETH
- ARNOTT
- LESLIE'S
- LORA
- IMMENSITIES
- RAYED
- POPPIED
- UNBELIEVING
- CAROLING
- WARBLERS
- AMBERING
- CORSELETS
- BATTLECRIES
- MILLING
- MART
- TUNNEL'S
- OUTFLANKED
- WOLFLIKE
- DYNAMITE
- WHINED
- LACQUERED
- CLUBBED
- AWESOME
- RELAXING
- PENNONED
- LUCENT
- SPEARSMEN
- PIKEMEN
- AUTOMATONS
- VIBRANT
- WAILINGS
- DRAGONED
- COLUMN'S
- IMMOBILE
- UNLEASHED
- BOXER
- SPIKED
- GLADIATORS
- PREENING
- GAPS
- JAVELINED
- HUDDLING
- DISINTEGRATED
- SCAMPERING
- FLAILING
- PRONGS
- TRIDENT
- TEETERING
- MADDENED
- SCYTHE
- FROTH
- JE
- LAMMERGEIERS
- SCAVENGERS
- LINGERINGLY
- SATED
- SCANTINESS
- OLDSTER
- RIMMING
- CINCTURE
- CHIMINGS
- VASTNESSES
- THRUMMING
- EFFORTLESS
- STRIDING
- TAPS
- DRUMMING
- ENIGMATICALLY
- ETERNITIES
- CRESCENDO
- FACETS
- AIRLESS
- WATERLESS
- SUNLESS
- SMARTER
- REUTER
- CURSORILY
- PLAT
- PENDULE
- MANTELPIECE
- CONSOLES
- CHIFFONNIERE
- TENDRILS
- DEMESNE
- PARTERRE
- ENGLISHWOMAN
- RESTE
- STEEPS
- FURRINERS
- RHODODENDRON
- COVERT
- SLOUCHED
- HOODED
- UNDERMIST
- FOOTFALLS
- DRIP
- TINKLED
- LIMBER
- GILLS
- HOWDYE
- HAIN'T
- AFFIRMATIVELY
- HIT'S
- SHET
- PURTY
- NONCHALANT
- HUMOUROUS
- RUMBLED
- HYEH
- TWUSN'T
- GITTIN
- DEBATING
- UNAVAILINGLY
- SPECTACLED
- NAIVE
- THAR'S
- REMONSTRATING
- KINDLIEST
- UNJOINTED
- CRAWFISH
- CHINKED
- GOAD
- SPURNED
- PELLETS
- NAAS
- VIRTUAL
- TRIMLY
- ELECTRICIAN
- OPTIMIST
- NATIONALIST
- REMONSTRATIVE
- VILLONA'S
- CONFUSE
- ELATES
- CONTINENTALS
- GONGS
- TRAM
- GAZERS
- EQUATION
- UNPURCHASEABLE
- SUPPED
- VOLUBLY
- MADRIGAL
- INGENUOUSLY
- MECHANICIANS
- LUTES
- SHEPHERDED
- TORPID
- NOISIEST
- ROUSSEL
- HOHE
- FORM'S
- HUNGARY
- VOLUNTARIES
- OBSCURELY
- LOSERS
- COMMITTEES
- CONCEALMENTS
- ENNOBLED
- DROLLERY
- PUB
- DOCS
- CONGRESSMEN
- ERASING
- HOPPERSON'S
- SANCTIMONIOUS
- COMMITTEEMEN
- COMMINGLING
- PERSIFLAGE
- FRANKING
- FACETIOUS
- OFFICIO
- ENNOBLE
- BRIBERY
- LEONI
- SPOLETO
- HANGINGS
- GIRALAMO
- USURPER
- COMMANDANTE
- GIOVANNI
- ORGY
- FERRARA
- NOVICE
- PRONUNCIATION
- ABSTINENCE
- PENANCES
- REVERENTIAL
- LUTHER'S
- THEOLOGIAN
- DEATHLIKE
- VANITIES
- ADULTEROUS
- OBSCENE
- FRESCOBALDI
- MONK'S
- PIERO
- CONSTITUTIONALLY
- ENERVATED
- KILLER
- NOISED
- ARTA
- BREECH
- WINDWARD
- SCALPED
- GROVER
- FLEETNESS
- CENTERS
- HAYES
- AVOCATION
- RAFFLE
- CONTESTANTS
- BONHAM
- POKY
- THOROUGHBREDS
- WILCOX'S
- ALEXIS
- SHERIDAN'S
- RUCKER
- LEONARD
- INFORMAL
- OVERTON
- BUNTLINE'S
- BENNETT'S
- CODY
- BELMONT'S
- LIEDERKRANZ
- INSPIRITING
- NIBLO'S
- JARRETT
- PALMER
- MAEDER
- DRAMATIZED
- STUDLEY
- FRELEIGH
- GUSS
- S'DEATH
- PANTAGRUEL
- ENNASIN
- UNTWISTED
- JOINTURE
- ENTER'D
- CAWL
- SUCK'D
- PICK'D
- CHEW'D
- PEEL'D
- DIGESTED
- LAWYER'S
- EXSUDATIONS
- BACKSIDE
- DIGESTING
- INTRENCH
- CIRCUMVALLATIONS
- TRANSGRESSION
- DEVOUREDST
- AMBLE
- WEEDER
- DISPUTATIONS
- CIRCUMVENTION
- PASS'D
- PROWESSE
- BEGUINE
- BEGUINES
- FUSTY
- BAGGAGES
- NEGLECTEST
- ENTERPRIZE
- OFTNER
- WEAVES
- FACETIOUSNESS
- ENTEREST
- RESOLV'D
- FAG
- VENTED
- CONJUGALLY
- CALL'D
- SENSIBILITIES
- CHRYSTAL
- MOTE
- ABETTING
- THEREUNTO
- TRAGICOMICAL
- REVERENCES
- RAMALLIE
- CURL'D
- ASSIMILATED
- SOVEREIGNLY
- TAFFETA
- METALLICK
- TUCK'D
- PUFF'D
- CLIPP'D
- VAPOURINGLY
- BEGUILED
- STEDFASTLY
- DRAGG'D
- SYLLOGISMS
- NULLIFIED
- PREMISS
- LANDOR
- COMPENSATING
- PHIPPS'S
- STUMPY
- AUREOLE
- CZERLASKI'S
- MISDEMEANOURS
- UNIMPEACHED
- QUADRAGENARIAN
- MATHEMATICIAN
- PACHYDERMS
- BRIDMAIN'S
- DEBATABLE
- COMMONALTY
- PAROCHIAL
- INITIATED
- PAS
- QUADRILLES
- PROBLEMATIC
- COUSINSHIP
- BLEMISHES
- DISQUALIFICATIONS
- SEVEREST
- UNDENIABLY
- PAUCITY
- CONGREGATIONS
- WEDNESDAYS
- SHUNS
- GORGON
- SOPHOCLES
- ABSORBINGLY
- NEE
- UNSKILLFUL
- IDUMEA
- GITTHA
- DAPHNE
- ANTIOCH
- FOREBODED
- IRRUPTION
- FOREFRONT
- CANA
- HINDERMOST
- PAPPUS'S
- ARISTOBULUS
- MARIAMNE
- PHOENICIA
- AMBUSHES
- SOSIUS'S
- CENTURIONS
- PROPORTIONABLY
- CALUMNIATED
- MALICHUS
- PARTHIANS
- APAMIA
- PACIFY
- PARTHIA
- MINISTRANT
- NATURA
- INSTILLANT
- HOWSOEVER
- PRIMOGENITURE
- DEPENDANCE
- DICTATES
- MEUM
- TUUM
- OVERALL
- INTERMIT
- SAKES
- PESTILENTIAL
- EXALTS
- FILIATION
- SOUGHTEST
- FOUNDEST
- JOYFULNESS
- DISEASEFUL
- MALIGN
- PESTILENT
- PASSETH
- FILLEST
- SACRAMENTS
- DISLOYAL
- HYPOCRITES
- INFECTIONS
- BEDCHAMBER
- RELUCTATION
- VIPERS
- CATECHISED
- RECEVEUR
- DIPLOMA
- NAIVETE
- PAYMENTS
- D'ANTIN
- SUBSTITUTING
- HYGIENIC
- CHAMPS
- ELYSEES
- GAMING
- NANINE
- STEWED
- ARNOULD'S
- LAMARTINE
- SCUDO
- EFFACE
- EMBODY
- PRUDENCE'S
- PROFESSEDLY
- UNSUSPECTINGLY
- GAINER
- SERMONIZED
- GAUTIER
- DUPRAT
- BOUQUETS
- USELESSNESS
- SHIRKING
- BOMBAST
- PRISTINE
- BATES'S
- GENERIC
- TAUNTER
- BARKED
- BLOODTHIRSTY
- UNHEEDING
- STERNWAY
- SPLINTERED
- BRIG'S
- CHESHIRE'S
- TWITCH
- BEGOTTEN
- UNRISEN
- GABBETT
- NAB
- TROUGHS
- LEVIATHAN
- WALLOW
- CREAMED
- BANDAGING
- VILLAINY
- BEACHED
- LAGGARD
- HUMOROUSLY
- UNBUCKLING
- HELMSMAN
- BISECTED
- VETCH'S
- BLUNT'S
- IMPENITENTLY
- SYDNEY
- STUMPED
- CHARTS
- KILN
- PURFOY'S
- WAXING
- HOYSTERIN
- CANARIES
- NOTION'S
- MADAM'S
- EAGLEHAWK
- WIND'S
- ISTHMUS
- LAMBENT
- VULCAN'S
- SMITHY
- DEMIGOD
- ROUSTABOUT
- DICKERING
- MELVILLE
- BARSTON
- SWERVED
- HOMESITE
- LAZIEST
- SIGHTSEEING
- HIGHWATER
- CALDRON
- RAINIER
- WINTHROP
- PUYALLUP
- TOWNSEND
- SHACKS
- RASCALLY
- DEBAUCH
- BUFFETING
- MILESTONES
- SLOUCH
- VENTILATION
- PIONEERS
- BLOOMER
- STUMPER
- HARDTACK
- PROVENDER
- BIGHT
- YAKIMA
- TETHERING
- ROUSING
- HOBBLES
- NATCHESS
- WHEELMEN
- OBSTRUCTION
- OBTAINABLE
- NISQUALLY
- UNEVENTFUL
- BACCALAUREATE
- NATHANIEL
- AUGUSTUS
- FENN
- VAILL
- HAZZARD
- MISSOURI
- BERKELEY
- BRIDGEPORT
- GLAZIER
- HOPSON
- GARWOOD
- MERWIN
- REUNIONS
- REGIMENT'S
- ARLINGTON
- NAUGATUCK
- UPRISINGS
- ANCESTRY
- AMPHIOXUS
- DINOSAURS
- BARBELLION
- DISGUSTEDLY
- VAUDEVILLE
- MONKEYISH
- GREATGRANDSONS
- WRYLY
- COMPETING
- HEADSHIP
- COSMOS
- LEMURS
- AESTHETICALLY
- UNEMPLOYED
- EUGENICS
- DARWIN
- REPRESSIVELY
- HYPER
- RUCK
- PLAUSIBLY
- TRUSTFUL
- EXPLOITABLE
- PARASITIC
- GROSSNESS
- STUPORS
- COSMOPOLITANS
- TOLERANCE
- OUTLAWS
- TERRORIZED
- SLINK
- URBANELY
- PARVENUS
- PLUCKS
- SWISHES
- FLANKS
- UNLEARNED
- ATROPHIES
- QUENCHLESS
- SLOGS
- CONFORM
- ENERGIZED
- PERPETUATE
- ITCH
- MANTLES
- GROTAUT
- HOSTAQUA
- HUMBLER
- GAMBIE
- EDELANO
- VERGED
- SUPERNAL
- CABOOSA
- OATHCAQUA
- CALOS
- FASTNESS
- THIRNAGOAS
- BARRENS
- CONTORTIONS
- SATOURIONA
- MALTREATED
- RIBAUT
- MISERIE
- FINDE
- BEATE
- EFTSOONES
- BEGANNE
- NEERE
- SKINNE
- SOULDIERS
- SKINNES
- THOROW
- PARTES
- EXORBITANT
- TOOKE
- VILLAINES
- ANSWERE
- CHURLISHLY
- MARCHANDISE
- VASSEUR'S
- BRIGANTINE
- ASTINA
- VASSEUR
- DEEPELY
- QUICKE
- PARED
- SCRIVENER
- STATIONER'S
- LENDER'S
- GAUDS
- VAPID
- FLAVOURLESS
- INTIMATING
- CHANCELLOR'S
- DISESTABLISHMENT
- BARSETSHIRE
- DOWNING
- BALDOCK
- BRIBED
- ELECTOR
- DREADS
- RETRICKED
- LOUGHLINTER
- COMPLAINS
- EMMA'S
- UNEXCEPTIONABLE
- TENDERER
- VALETUDINARIAN
- LIEU
- MITCHELL'S
- YORKSHIRE
- SURPRIZED
- ENSCOMBE
- COMPETENCE
- PORTIONLESS
- TYRANNIC
- WESTON'S
- SURPRIZE
- WESTONS
- ELEGANCIES
- QUICKSIGHTED
- SIXPENCES
- CONTRIVING
- PROSINGS
- GODDARD'S
- SHEWING
- ARTLESSLY
- BETWEENS
- CONSTRAIN
- LITIGATION
- SLUMS
- LOREEN
- DISPENSER
- FURNISHES
- ORCHESTRAS
- CONVERSIONS
- SPECIFICATIONS
- TOYNBEE
- SLUM
- DISCIPLESHIP
- CHRISTIANIZING
- CRANK
- MEMBERSHIP
- RIGOR
- TRANSFORMING
- REDEMPTIVE
- CESSPOOL
- WARINESS
- PERAMBULATOR
- SUNDAYFIED
- HOW'D
- FELLA
- AFFIRMATION
- DESOLATELY
- SIBYL'S
- THOROUGHGOING
- LAMHORN
- ROSCOE
- MISTED
- SHAMEFACEDNESS
- STENOGRAPHERS
- STENOGRAPHER
- FASCINATE
- SACRILEGE
- FLICKING
- ARGUMENTATIVELY
- DECADENCE
- WITENAGEMOT
- PHEW
- BYWORD
- BLAZES
- CHECKERED
- REASSERTION
- EXPROPRIATE
- EXPROPRIATION
- UNDERCUTT
- PARENTAGE
- GORFINKEL
- EUREKA
- KOSSUTH
- DAVIDSON'S
- LOVELORN
- DILL'S
- AFY'S
- REPAYMENT
- LABORER'S
- PREMISED
- EQUANIMITY
- DELIBERATED
- BROADCUT
- BETHEL'S
- ARCHIE
- FERRET
- FLURRIED
- GRAVER
- GIBBETED
- DESERVINGS
- LYNNE'S
- TAGRAG'S
- POKES
- BUCKETFULS
- RAGTAG
- SHIVERY
- TAGRAG
- RIMS
- INQUISITIVELY
- TATTERS
- HISSES
- UNWASHED
- JIG
- PINNER'S
- PLOWMAN
- LATIMER
- DUCKING
- GOVERNESSES
- IRASCIBLY
- ANNUITY
- LYNNEBOROUGH
- LIGHTENING
- ELLIPSE
- GUZERAT
- GOLCONDA
- DISUSED
- DEPOSITORY
- PAPYRUS
- FINIS
- NIB
- ILLUMINATIONS
- FARIA'S
- FENESTRELLE
- UNHEMMED
- IMPUTING
- EVAPORATED
- LECLERE
- NUPTUAL
- D'IF
- RADCLIFFE'S
- IMITATORS
- RHUBARB
- SPECKS
- LENIENT
- MEMENTO
- CIRCUMSTANCED
- COCOA
- HOUSEMAIDS
- WAIVE
- INCONSTANCY
- DEFUNCT
- ANYONE'S
- UNDERVALUE
- ABHORRENT
- COIL
- TRANSMITTER
- HURTLED
- BLURTED
- CONNECTS
- BLASTS
- OILERS
- LIEUTENANT'S
- SUMMING
- TELEGRAPHS
- WAR'S
- COUNSELED
- SLIM'S
- CONVALESCING
- PERISCOPE
- WINCE
- MACHINIST
- FOAMY
- CORRALLED
- WIRY
- AMBUSHED
- OUTNUMBERED
- GRENADE
- STARK
- RECUPERATING
- DETONATIONS
- BATTLESHIPS
- FLASHLIGHTS
- ROCKET
- BREASTWORKS
- REPORTING
- FORMATIONS
- CARNAGE
- CLOCKWORK
- REPULSES
- PUNCTURED
- MOBILIZATION
- AEROPLANES
- BOCHE
- MANNING
- ONCOMING
- ARMADA
- OBSERVER'S
- PARACHUTES
- VERBIAGE
- ENTOMBED
- NEUTRALITY
- WAGERS
- SINKINGS
- BILLIARDS
- MARION'S
- CALLERS
- PUNGENT
- HIGHBALLS
- CIGARET
- TOOTS
- ARBITRARILY
- RANKLING
- INDISCREET
- CALCULATING
- PREEMPTED
- PERCENTAGE
- DIVINELY
- CUSTOMARILY
- CLAYTON'S
- CLAYTON
- BARONIAL
- INTERSECTED
- SQUARES
- SUBTERRANEOUS
- PURCHASERS
- SARVED
- KNAVES
- BOTTOMED
- SNICKS
- SCREW'S
- SNICKEY
- SCRAG
- WINDER
- TREACLE
- BABBY
- VEAL
- IMPERENCE
- HUCKSTER'S
- TRAFFORD'S
- WURNO
- SHODDY
- FAULTERING
- SHUTTLE
- PREPOSSESSING
- DETRACT
- RECOLLECTING
- CHARTIST
- DELEGATE
- FORETHOUGHT
- UNRESERVEDLY
- PSEUDO
- REFINE
- LAWFULLY
- MEMORANDA
- BUMPO'S
- COCOANUTS
- MAMMOTHS
- JIP
- GOLLY
- PROSECUTOR
- GUILLOTINED
- GRATES
- REFILLED
- DARNAY'S
- DAGGERS
- OVERLADEN
- THEOPHILE
- GABELLE
- ALEXANDRE
- CITIZENESS
- MANETTE'S
- ABBAYE
- JURYMEN
- REFERABLE
- PREDOMINATING
- ACQUITTAL
- STREW
- FORASMUCH
- COMPENSATE
- CONCOURSE
- DYE
- BRAWLING
- PROSS
- UNKINDNESS
- GLOUCESTERSHIRE
- ENTREATING
- GUESSWORK
- SHORTAGE
- EMOTIONLESSNESS
- THINNEST
- NED'S
- DETACHING
- CHUNK
- WIELDING
- ATMOSPHERES
- FORECASTS
- SOLIDIFY
- DAMPENING
- WARDING
- SOLIDIFIES
- DOGGEDLY
- SATURATING
- NOXIOUS
- POTASSIUM
- HYDROXIDE
- MINER'S
- UNDERBELLY
- CONGEALING
- SOLIDIFICATION
- BREATHABLE
- CENTIGRADE
- MONITORED
- OPERATION'S
- INSPECTIONS
- DISLOCATED
- ENKINDLING
- DAZE
- WINDED
- EXPERIENCING
- VIBRATIONS
- CONVULSION
- CANNONBALL
- VACUUM
- EXPEL
- RIVETS
- WHIFFS
- HEEDLESSLY
- LUNGFUL
- AHHH
- INHALATIONS
- CIRCULATING
- THROATFULS
- BANALITY
- COASTLINE
- SARMIENTO
- SHALE
- CRUSTACEANS
- STEWS
- ALGAE
- FUCUS
- GOBY
- GUDGEON
- MEDUSAS
- SEMISPHERIC
- PARASOLS
- QUIVERINGS
- LEAFLIKE
- TENTACLES
- EVAPORATING
- CAPRICORN
- FRIO
- BRAZIL'S
- DIZZYING
- PERTURBED
- VRONSKY
- UPSETS
- FERRETS
- UNLUCKILY
- BREATHINGS
- FLEECED
- WRONGFULLY
- VULCAN
- REPROBATES
- SUITABLY
- SCHOOLMASTER
- COMPLAISANTLY
- CUDS
- SICKLE
- CLANGOR
- BELLOW
- PROTRUDED
- STABBING
- LOPPING
- SURVIVOR
- CONCEIT
- IOLCHOS
- TAMING
- REFUSES
- SULPHUROUS
- MEDEA'S
- GAPE
- DUSKINESS
- DISPORTING
- LYNCEUS
- PERPENDICULARLY
- DEDICATE
- FURL
- IMPERIALISTIC
- EDITORIALLY
- VOTING
- ENFRANCHISING
- SUBMARINES
- PERTINENT
- INSCRIBED
- ENHANCED
- BALFOUR
- VIVIANI
- RUMBLINGS
- ANDREAS
- SPARGO
- HINSHAW
- AUTHORIZATION
- ALABAMA
- UNEX
- PECTED
- FEDERATIONS
- SUF
- FRAGE
- MENT
- UNIVER
- ANDS
- MICHIGAN
- CONTROVERSIAL
- CONSCRIPTION
- ESPIONAGE
- NATIONALLY
- CZARIST
- AMERICA'S
- IMPOVERISHED
- AMERI
- PRESI
- TIONAL
- ENFRANCHISEMENT
- HOODLUMS
- ROUGHS
- EMBODYING
- CONFEREES
- ARNEIL
- LAVINIA
- DECLAIMING
- VACATE
- SUFFRAGETTES
- DISSEMINATION
- EXHORTS
- IMPRISONMENTS
- MYER
- ROWDIES
- GEORGINA
- STURGIS
- EMBEDDING
- FLATHER
- STONE'S
- TESTIMONIALS
- EVANGELICAL
- CITE
- JAMISON
- PANELLED
- RICHE
- DICKENS'S
- TODDLING
- BLOOMLESS
- UNOBTRUSIVELY
- BESTRIDDEN
- CLARION
- AUSTERELY
- SCARVES
- FURRY
- FEZ
- SIRDARS
- SATURNINE
- RADIANTLY
- AFRICAN
- SUPERCILIOUS
- CONJUROR
- EXTERNALLY
- ACROBAT
- PREDATORY
- MAGISTERIAL
- BLACKING
- BONFIRE
- SNIVELLING
- MORALISING
- BEARD'S
- SOCIALISM
- HARLEQUINADE'S
- TOWEL
- POLICEMAN'S
- THIGH
- FLORIAN'S
- COSTUMIER
- PHONE
- PHONES
- BOSH
- FLAMED
- TAMENESS
- BENEFACTORS
- CHANDELIERS
- WIGGS'S
- DOTAGE
- WIGGS
- MEGRIMS
- CURTSEY
- UNDECIDEDLY
- AMU
- SMALLNOSE
- BOWMAN
- MERRIWIG'S
- LANDMARK
- BAYED
- H'R'M
- APOLOGISE
- ABDICATED
- FONS
- ORIGO
- SMALLNOSE'S
- FLAGSTAFF
- EAVESDROPPERS
- COUN
- SUITORS
- BARODIANS
- SWINEHERD
- TREGONG
- LEONORA
- VESTAL
- GOWERS
- ALBEIT
- UNENTHUSIASTIC
- OBSTREPEROUS
- COLLECTEDLY
- ANTIMACASSAR
- EULOGIES
- ABASEMENT
- CEREMONIOUS
- EMBARRASSINGLY
- UNGIRLISH
- MINERVA
- UNADORNED
- EULOGIZING
- UNBIASED
- SAFEST
- DUGALD'S
- GUILELESSLY
- DOWNPORT
- SUCCUMBED
- CONVERSATIONALLY
- PAMELA'S
- PAM
- CROCHET
- RESET
- DESPERATENESS
- EFFUSIVE
- PAM'S
- ELDERBERRY
- SACQUE
- INTERJECTION
- SPAS
- REDLY
- HENCHMAN
- LANGDALE
- INTENTIONAL
- COLLEAGUE'S
- EMANCIPATED
- GALTON
- EVIDENTIAL
- TRANSFERS
- HORNBY'S
- BESTA
- INA
- DESERTO
- ILLUSTRISSIMO
- VETTURIO
- BELLISSIMA
- CONFRERES
- SIGNORINI
- BELLA
- VETTURE
- DILAPIDATION
- NAGS
- MISNOMER
- APOSTROPHIZING
- SALERNO
- OVERHANGS
- HEADFOREMOST
- WEDGED
- POSITANO
- ARBORED
- FORMER'S
- ANNULLING
- BEQUESTS
- PATRICIA'S
- SUPERCEDES
- INHERITS
- MERRICKS
- BOOKKEEPING
- VIOLET'S
- BRACKET
- SQUANDERING
- SMOKES
- BAKERY
- SEEDY
- EGRESS
- SHIRTSLEEVES
- CHEF
- SECURITIES
- UPTOWN
- DOWNTOWN
- WHEEDLED
- TUCKING
- BRONZY
- SUNNED
- AFTERMATHS
- DECORATE
- CLUTTER
- ADDLEPATED
- UPPISH
- DRYAD'S
- SWEETINGS
- GERTIE
- WARTS
- SLOANE'S
- BOULTER
- SASSED
- MATTIE
- CROSSOVER
- WILSON'S
- TUMBLERFUL
- BRAGS
- PROB'LY
- ROSEBUSH
- PITCHERFUL
- NUN
- CLOISTERED
- STORMIER
- FIDDLESTICKS
- STRICTER
- GLASSFULS
- SUPPLIANT
- PREPENSE
- TRIBULATIONS
- MINDING
- BUSYBODY
- TIMOTHY
- LIBERALLY
- INSTILL
- ITALICS
- SHINGLING
- MOPING
- DAVY'S
- UNSOCIABLE
- BANDBOX
- HEARTBREAKING
- UNMERCIFUL
- IMPROVER
- CONTRARIES
- ANIMADVERTED
- SASSIETY
- SHINGLED
- LOTTIE
- STELLA
- BECKET
- TYNDALE
- MARJORY
- BOSSED
- NAUGHTIEST
- OLL
- FROZ
- WOUDENT
- CHARE
- ORGINALITY
- LESSUNS
- HALLOW
- NEWBRIDGE
- AFECKSIONATE
- FESSED
- BLOTS
- WHACKED
- INAGINARY
- MERMAIDS
- KELPIES
- RAINBOWY
- MUSSEL
- CLARICE
- ALMIRA'S
- FIRECRACKERS
- PINWHEELS
- DONNELL'S
- ROGERSON
- PAILFUL
- PRILLIE'S
- TEACHER'S
- ODORIFEROUS
- F'S
- IRVING'S
- UNASHAMED
- NIPPED
- HEARTEN
- IMPISH
- OUTFLASHING
- CULMINATION
- VOLCANOES
- ERUPT
- PREDICTIONS
- FORKING
- OPPRESSIVENESS
- THUNDERCLAP
- INGOT
- FOUNDRY
- TUNS
- INCOMPLETION
- BARMAIDS
- TYCK
- APOLLINARIS
- KATHARINE
- WASHINGTON'S
- SQUASHED
- BISHOP'S
- GLOAMING
- LORNA
- DOONE'S
- EVENSONG
- SCHUYLER
- ABBREVIATE
- KITTY'S
- OUTGREW
- AMHERST
- ONSARTIN
- GOSLIN'S
- BUDDIN
- NOHOW
- GRAFTED
- CHOICEST
- PUCKERY
- FRONTISPIECE
- COPAL
- KICKIN
- PASTERN
- EXTINGUISHING
- CLOCKMAKER'S
- ROGUE
- YALLER
- MAMA
- CRITTERS
- ARTER
- NATUR
- CRITTUR
- T'OTHER
- SNEEZER
- CHARLESTOWN
- AMAZIN
- HANDSOM
- GALS
- RECIPROCATE
- CLIPPER
- CLAPPER
- SPARKLIN
- TWINKLIN
- TEETOTALLY
- DEFLESHED
- WALKIN
- DYIN
- SHAKIN
- GRIT
- LEFTENANT
- OBY
- FOSTERING
- IMPALPABLE
- EMIT
- ALLURE
- TROWSERS
- UNCHANGEABLY
- UNFITTING
- DRUMMOND
- WOOLSACK
- GREGORY
- INEFFICIENT
- RAMSDEN
- FITZGIBBONS
- MACPHERSONS
- MACPHERSON
- BUNGAY
- CARLTON
- MILDMAY
- CONSOLIDATING
- RETICENT
- STRUGGLERS
- COALITIONS
- CREDITING
- UTILISED
- SPEAKER'S
- SECRETARYSHIP
- DEMONSTRATIVE
- ERLE
- SYMPATHISE
- READJUSTED
- RAMPANT
- BLITHELY
- THRUSHES
- WRENS
- UNINITIATED
- MADDEST
- BATTING
- GRUFFEST
- SQUABBLING
- PROXY
- UMPIRE
- FUSSING
- KITTERIDGE
- DEFRAUDED
- PYROTECHNICS
- TOGGERY
- WAGGLED
- CUTTIN
- SMITHERS
- BERRYVILLE
- MANES
- CONGLOMERATION
- GIRAFFE
- ELEPHANT'S
- ZEBRA
- LUNCHING
- CROCODILES
- CRICKY
- DODGES
- MONEY'S
- DOUBTER
- GROWLS
- SISSY
- SANCH'S
- BAB'S
- MISSIS
- CHICKERBERRY
- LOZENGERS
- CANES
- GRIG
- EXCITEMENTS
- BAKER'S
- MUNCHING
- COOKIE
- MOOING
- BECKON
- PLAYFROCK
- CHIRRUP
- PLAYFULLY
- TICKLISH
- MUSK
- SPATTING
- POISONED
- SHASTA'S
- SNOWBOUND
- AFTERGLOW
- PEDESTRIAN
- INTERROGATIONS
- SLIDES
- SISSON'S
- FROSTS
- LOOSER
- MEALY
- LOATH
- STEEPNESS
- STEEPEST
- MAPLIKE
- PATHWAYS
- EXULTINGLY
- ROSIN
- FIERCEST
- SNOWSHOES
- SIFTED
- WEARILESS
- WOODPILE
- LENS
- TRACHYTE
- BUMBLEBEE
- ZIGZAGGED
- RHETT
- KLAMATH
- DISKS
- MODOC
- SISKIYOU
- CLOUDLAND
- DUSTINGS
- SHOWERING
- PROMPTINGS
- FIBROUS
- WEBS
- CONVOLUTIONS
- SPRAYS
- PRECIPITOUS
- INCONCEIVABLY
- WHITNEY
- EVINCING
- FORDING
- OVERSWEEPING
- FLOWERING
- GLINTING
- STUPEFY
- SUBSIDENCE
- CRUSTY
- AUGMENTING
- ACRID
- INCRUSTATIONS
- SUBLIMED
- VENTS
- SMOULDERS
- PRECLUDES
- RESINOUS
- CAMPFIRES
- STARGAZING
- SUMMIT'S
- ACCELERATING
- BOLES
- ULTIMO
- BENEFACTIONS
- OBSCURING
- ORNATE
- REVELING
- UNGARISH
- LINEAR
- OVERCOMBING
- ECLIPSING
- STREAMERS
- ONRUSH
- REFINING
- IMPURITIES
- UNFOLD
- PLUSHY
- INTERBLENDED
- REDWINGS
- MALVA
- ABRONIA
- CACTUS
- GLENS
- GEOLOGIC
- RIFTS
- CRATERS
- NORTHMOST
- WEEDY
- LOFTINESS
- ROUGHENED
- FLORAL
- COMPOSITAE
- LEGUMINOSAE
- COROLLAS
- GRANDIFLORUM
- PUDICA
- MOUNTAINSIDE
- NESTLIKE
- RECURVED
- LILIUM
- SUPERBUM
- PANICLE
- CARNIVOROUS
- DARLINGTONIA
- SECLUSIONS
- LINNETS
- ARROWY
- ZIGADENAS
- ALLIUMS
- CALOCHORTUS
- DESTITUTION
- DROUTH
- SEGO
- GUNNISON
- STROLLS
- LUMBERMEN
- SAPWOOD
- HEARTWOOD
- ROAMER
- SINEWY
- ROVERS
- THICKENS
- TRANSCENDENTALISTS
- RANCHES
- OPENER
- ELLIOTT
- BROADWAYS
- FOUNDRIES
- BABEL
- GUMMY
- EXULTING
- NEBULOUS
- INFLAMMABLE
- PITCHY
- BOATING
- SNOQUALMIE
- SPIRAEA
- MEANDERING
- PICKERS
- SQUAK
- RADIATING
- GRAVELLY
- DEFORESTED
- KINNIKINIC
- SURPASSINGLY
- HEREBY
- WOEFUL
- REMOVES
- DOLEFULEST
- ROASTING
- DISMALNESS
- BESPEAKING
- SAVAGENESS
- BRUTISHNESS
- MARLBOROUGH'S
- MOSELY
- INHUMANE
- STINK
- QUINNAPIN
- SAGAMORE'S
- AFFLICTIONS
- MEDFIELD
- IMPORTUNITY
- STOUTEST
- DECREPIT
- PAPOOSES
- SORENESS
- SPOONFULS
- SINNERS
- PANCAKE
- THURSTON
- WITHALL
- INSOLENCY
- HEATHENS
- SAMPSON
- WIST
- BARTHOLEMY'S
- RUMMAGINGS
- TRACKED
- PRIVATION
- OSIERS
- MANGROVE
- CRUISED
- DEFENCELESS
- HATCHES
- BARING
- FOILED
- THUDDING
- REVERBERATIONS
- BATTLEFUL
- SKIRMISHERS
- WRANGLE
- RAPPAHANNOCK
- ARMY'S
- MEBBE
- CANED
- DUMFOUNDED
- BRIGADIER
- REG'MENT
- YESTIRDAY
- LUNKHEAD
- CONCILIATING
- ILLUMINATING
- PEALINGS
- CRACKLE
- SHELLING
- SLASHED
- GAWD
- CUSSED
- INTERPOSITION
- JAWIN
- SECH
- GABBLING
- CRASHES
- BURR
- ARVID
- YOUR'S
- HOMAN
- CONTROLLER'S
- UNSUITABLE
- PETITIONER
- JOINERS
- TAILOR'S
- NYSTROEM
- PAYABLE
- HORATIO
- OPHELIA
- POLONIUS'S
- JACQUETTE
- HARBINGER
- PALED
- ARAIGNEE
- FIANCE
- SPEECHLESSLY
- SELLEN'S
- ACADEMICIAN
- GENRE
- PAWNBROKER'S
- CHATTERERS
- OLLE
- ACIDS
- UPSALA
- MISCREANT
- RAIN'S
- RAPS
- INSTALMENTS
- PROMISSORY
- INKSTAND
- GUARANTEEING
- DEPRECIATE
- SLEEPER'S
- STRUVE'S
- STRUVE
- FANGLED
- MANY'S
- BASEST
- OUTBRAVES
- SOUREST
- FESTER
- DISPRAISE
- BLESSES
- LOV'D
- BARENESS
- WIDOW'D
- WOMBS
- UNFATHER'D
- DREADING
- PIED
- DRESS'D
- VERMILION
- DY'D
- STOL'N
- ANNEX'D
- FORGET'ST
- SPEND'ST
- ESTEEM'D
- DEEM'D
- ADULTERATE
- FRAILTIES
- FRAILER
- BEVEL
- BADNESS
- MISS'D
- TALLIES
- FOIST
- REGISTERS
- THRALLED
- LEASES
- NUMBER'D
- HUGELY
- DROWNS
- WERE'T
- EXTERN
- HONOURING
- DWELLERS
- FORGOING
- THRIVERS
- OBLATION
- MIX'D
- SUBORNED
- INFORMER
- IMPEACH'D
- TIME'S
- WRACK
- GOEST
- SLANDER'D
- FAIRING
- ART'S
- PROFAN'D
- SLAND'RING
- ENJOY'D
- SWALLOW'D
- TAKER
- DAMASK'D
- MOSSLIKE
- OUTCROPPINGS
- CARICATURES
- SNOWIEST
- FASTENINGS
- REPLICA
- NAILLESS
- NOISELESSNESS
- FAUNA
- MAMMAL
- HOOFED
- HARMED
- HURDLING
- RESPITE
- RADIUM
- THEORETIC
- FINDERS
- SIGHTERS
- FIREARM
- UNCLASPED
- COCKING
- TENANTED
- RUDIMENTS
- EMBARKING
- AVIATION
- LONGEVITY
- EVIDENCED
- SCINTILLATED
- INGENIOUSLY
- FURNISHINGS
- DENOTING
- BLANCH
- INCITANTS
- SKIPPING
- BRUISING
- BOORISHNESS
- FELLED
- WONDERMENT
- SAKKED
- FRONTING
- MURAL
- MOSAICS
- SHETLAND
- LOCALITIES
- UNDUE
- SICILIAN
- DUCA
- TATO
- TAORMINA
- MAFIA
- DOYLE'S
- SNOOZING
- EYELASH
- CHUNES
- CHIFUL
- IMITHATION
- MESELF
- PRISENCE
- AMPUTATING
- STUDIOUSLY
- UNCONVENTIONAL
- DISCOMFIT
- D'S
- EUROPE'LL
- RELIABLY
- CENTHRAL
- EUROPE'S
- WHINEVER
- BROGUE
- MOPPED
- DARLIN
- STAYIN
- REMOINDS
- DESTHROY
- RECLAIM
- DUFFER
- CHILDISHNESS
- UNHERALDED
- DRESSER'S
- GRAF
- CLOVERTON
- WIDOWED
- INTRIGUED
- ENSNARE
- UNINFLUENCED
- COSIEST
- BRUSQUE
- NOTIFY
- MILKMAN
- RILED
- GLOWER
- REG'LAR
- DISREGARDS
- PROPRIETIES
- HUMBUGGERY
- STEALER
- GODCHILDREN
- MOUSE'S
- FISHPOND
- PLANED
- SOOTHSAYER
- COW'S
- FITTINGLY
- MIGRATED
- SIMCOE
- THANKLESS
- EMOLUMENT
- POSTMEN
- CONDUCTORS
- TRINIDAD
- TURCO
- TRANSCONTINENTAL
- LAPSES
- ECONOMIST
- DISCLAIM
- ECCLESIASTIC
- BAGSHAW
- RESUSCITATION
- TOWELS
- RUMS
- ESSENCES
- REVIVERS
- RENOVATORS
- FLOWERED
- SULTAN'S
- THEODOLITE
- PRETENTIOUS
- LEGEND
- SPELT
- FLOURISHES
- GERANIUMS
- SCULL
- ELECTROCUTION
- MUG
- PARTITIONED
- SHAMPOO
- MASSAGE
- DROWSE
- SPECIALTY
- ACQUIREMENT
- SQUIRL
- KEESAR
- HAUL
- TRACTION
- CONSOLIDATED
- GEOLOGY
- ROBERTSON'S
- MULLINS'S
- LOAFED
- KHAKI
- TAMAGAMI
- PETE
- GLOVER
- TEMISKAMING
- MACARTNEY
- DAMPERS
- INTERIM
- STABLEKEEPER
- MATTAWA
- PROSPECTUSES
- SLAM
- BOUGHTEN
- JOHNSON'S
- BARKEEPERS
- LAGER
- CAFF
- JABBING
- CONNECTING
- PLUGS
- CLEGHORN
- PORTIA
- HACIENDAS
- MACHETES
- OUTLANDISHNESS
- ENDERS
- RESURRECTOS
- CORELLI
- PARENTHESES
- CUBEY
- AHOLD
- PORFORIO
- GOMEZ
- MAXIMO
- MOREZ
- INCURABLES
- SHOP'S
- EMBRYO
- DISENCUMBERED
- TRUSSES
- MORTUARY
- SPARTAN
- JEANNE
- JOLY
- CORRECTIVE
- ERRATUM
- HARMODIUS
- ARISTOGITON
- CHEREAS
- CORDAY
- SURPASSES
- PROUVAIRE'S
- TRANSLATORS
- GEORGICS
- COURNAND
- MALFILATRE
- DIATRIBE
- MAEVIUS
- VIOLATOR
- RUBICON
- EMANATED
- SEETHING
- FAUBOURG
- VANISHES
- PRATTLE
- TRACTABLE
- BABBLES
- PRATTLES
- CHATTERS
- MASTIC
- NECKER
- DISSECTING
- EGOISTS
- EGOIST
- SHIPWRECKS
- SOMNAMBULISM
- BEHOLDS
- THERMOPYLAE
- ANACHARSIS
- CLOOTS
- POLARIZATION
- COMBEFERRE'S
- BROADENING
- DEFINITIVE
- QUADRIGA
- HATREDS
- WORKSHOP
- TALONS
- PROMETHEAN
- CHIMAERA
- GRIFFIN
- GESTATION
- ABDICATION
- CONCEDES
- APTITUDES
- OBLIGATORY
- TYRANNIES
- DYNASTY
- RUFFIANISM
- DESOLATIONS
- LACONIC
- ENVELOPS
- LIGATURES
- MARTINGALE
- LISETTE
- UNDERPINNING
- PROWLERS
- FANCHONS
- DISENTANGLE
- GUELEMER
- DECAMP
- VENTRILOQUIST
- PARDINE
- NINNIES
- PANTIN
- ESPLANADE
- INVALIDES
- RIVOLI
- LAITER
- DENSITY
- ARBRE
- SEC
- ASSEMBLAGES
- SMOCK
- FROCKS
- BLOUSES
- CADAVEROUS
- ROULE
- BIVOUACKING
- PATROLS
- BETHISY
- POTERIE
- GULLIES
- CONTRAT
- ITINERARY
- INDENTATIONS
- FOGGY
- BATTALIONS
- SHOCKS
- CORINTHE
- BIVOUAC
- PRESENTIMENT
- MARENGO
- FRIEDLAND
- AFLAME
- RECTIFICATION
- LIBERATOR
- MARCEL
- ARNOULD
- BLANKENHEIM
- MARNIX
- PELAGIUS
- REGAINING
- ELECTRIFIES
- DIDEROT
- DANTON
- HANDFULS
- PARCELLED
- RECTILINEAR
- CONFISCATION
- COMBATED
- EQUITY
- ANNIHILATES
- UBIQUITY
- CABUC
- NETTLESOME
- TEASINGLY
- BISBEE
- LOON'S
- INFERRING
- BELLMAN
- RANCHERS
- SWAPPING
- MIS
- FELLAR
- SPECS
- FARLEY'S
- BUSTIN
- BRONCS
- ADOBES
- SHUNTED
- EXPOUNDER
- MALTHUS
- RICARDO
- DISPARAGEMENT
- SYNTHETIC
- SYLLOGISTIC
- SANCTIFYING
- INCONSEQUENCE
- IOTA
- INTERPOLATING
- FORMULAS
- INFERENTIAL
- INDUCTIONS
- OPUS
- MAGNUM
- FORTIFYING
- BAIN
- UPROOTING
- INCONCEIVABILITY
- GROTE
- INEXPEDIENT
- FARRAGO
- INHERE
- REDUCTIO
- ABSURDISSIMUM
- AUGUSTE
- COMTE
- CONTEMNING
- LIBEL
- INFUSES
- ENUNCIATION
- POLITY
- UNEXAMPLED
- RETAINING
- DEPENDENTS
- APPEASING
- COEXISTENCE
- UNFITTED
- PASSIVENESS
- DESPOTS
- FEUDATORIES
- EMANCIPATE
- SUBORDINATION
- PERTURBATION
- GROVEL
- AGONISING
- PIQUANCY
- FILTHINESS
- AUSTERLITZ
- OBSCURANTISTS
- AMNESTY
- COMO
- SYETOTCHKIN
- PERSPIRING
- EQUABLE
- GROVELLED
- SWAGGERED
- EPAULETTES
- JESTINGLY
- TIER
- TWIRLING
- CORPULENT
- RUSSIANISED
- DERIDING
- WORSHIPPERS
- ZVERKOV'S
- CONCEITEDLY
- FLUNKEY
- TACTLESS
- JIBES
- DISPROPORTIONATE
- MOROSELY
- NAUSEA
- RAKISHNESS
- EXAGGERATING
- SNIGGER
- CURRY
- ABJECTNESS
- UNLITERARY
- PITCHFORKED
- SPIRITLESS
- ACUTEST
- ROUBLE
- VIOLONCELLO
- MANORS
- URNS
- PATRICIAN
- MARCHES
- SKYEY
- MASSACHUSET
- SHIMMERS
- OBSERVATORY
- ILLUSIVENESS
- BUOYANT
- HUTCHINSON
- DECORATIVE
- TORY
- BELCHER
- RIMMER
- VOSE
- BOSTONIAN
- FOOTPATHS
- UNDERLAY
- SCIENTIST'S
- GEOLOGIST
- ROTCH
- COASTLAND
- AGAMENTICUS
- GRANDE
- MALDEN
- ANDOVER
- GEORGETOWN
- HEADLANDS
- GLOUCESTER
- LIGHTHOUSES
- THATCHER'S
- CHICKATAWBUT
- NANTASKET
- MINOT'S
- MANOMET
- DUXBURY
- STANDISH
- MONADNOCK
- JAFFREY
- READVILLE
- UNCANOONUC
- VASTER
- STEVENSON'S
- KINDERGARTEN
- SCHOOLBOYS
- CHURCHILL'S
- MONITORS
- TONNEAU
- SPENDS
- JOURNEYINGS
- PLUTOCRAT
- TRUNCATED
- PYRAMIDAL
- ARCHITECTURALLY
- STEEPLE
- BELLRINGER
- BACKLESS
- FORMALISM
- PUE
- PUNISHABLE
- HOURGLASS
- PSALMS
- INCORRECTLY
- WHOLESOMELY
- SPLITTINGS
- EXAMINATIONS
- PINCERS
- BULFINCH
- FLAWLESS
- HANDRAILS
- BACHELORS
- ROWE
- INCALCULABLY
- CRANFORD
- CAPPED
- DATING
- HAZLITT
- ESSAYIST
- SHUTE
- PANELED
- EBENEZER
- PASTORATE
- WILLARD'S
- ALBION
- HERSEY
- GRAVEYARDS
- TYPIFIES
- QUAINTEST
- DAPPLING
- TINCTURED
- ECCLESIASTICISM
- SALTY
- CRUISERS
- CAMOUFLAGE
- REPEATS
- BIBLICAL
- PILGRIM'S
- CHANTICLEER
- GOSLINGS
- BANTAMS
- CHANTY'S
- GRASSHOPPERING
- CLUCKING
- TWITTED
- TOPPLED
- CLUCKINGS
- GOBBLING
- STABLEMAN
- PARTLET'S
- PIP
- CHICKWEED
- BIDDY'S
- CACKLING
- ROOST
- MEWING
- PELICANS
- GOBBLED
- LIONESSES
- LIONESS
- RESTLESSLY
- LEOPARDS
- PANTHERS
- CHIMPANZEE
- CAPITALLY
- PERCHES
- BOUNCE
- BOA'S
- REPTILE
- ZEBRAS
- PADDOCK
- KANGAROOS
- HOPPING
- ANTELOPES
- AVIARY
- FLOUNDER
- FLIPPERS
- BINNACLE
- HOLLOAED
- RUDEST
- REASSERTED
- CREAMING
- CASCADING
- PANNIKIN
- HOLLANDS
- SEAMAN'S
- SOAKING
- CASED
- KEEL
- ARTISTICALLY
- SCUPPERS
- CABOOSE
- STARLESS
- BLUENESS
- BERG
- HANDSPIKE
- PRISED
- BUFFERS
- VICTUAL
- EASTWARDS
- FLATTENING
- UNSETTLING
- CRAFT'S
- HEAVE
- WHALER
- SOUTHSEAMAN
- SEABOARD
- CRUSHINGLY
- EMPEARLED
- LAGOONS
- ALBATROSS
- UPRAISED
- TREMORLESS
- SPLITTING
- HAV'N'T
- ROSTOPCHIN
- KOSTROMA
- DRUBETSKOYS
- VOZDVIZHENKA
- KARATAEV
- UNEXPECTEDNESS
- INTERROGATIVE
- LAOCOON'S
- CLENCH
- COMMENDING
- SERF
- BIRCHWOOD
- BOLKONSKIS
- THIERS
- LANFREY
- SCHLOSSER
- STEIN
- METTERNICH
- FICHTE
- DECOMPOSES
- PROGENITORS
- UNDEFINED
- NEXUS
- CONTEMPORANEOUSLY
- HANDICRAFT
- AGRICULTURISTS
- TINGLE
- ATTENDS
- UNCOMMONLY
- SEVENFOLD
- KIRK
- ALLOWAY
- HOOPED
- SWOONING
- UNFREQUENTED
- DEACON
- AVOWS
- ELIAS
- CASSIAR
- PURSUES
- WESTERLY
- ENHANCING
- WAFTS
- SWATHS
- GENTIANS
- CIRQUES
- LEAVED
- UNCLEARABLE
- COTTONWOODS
- CONTORTA
- CONTRASTING
- CONIFERS
- TAMARAC
- PICEA
- ALBA
- NORTHEASTWARD
- DISINTEGRATING
- FOOTHOLDS
- STEADIED
- DIVINES
- GANGPLANK
- IRREVERENCE
- LIGAMENTS
- MILLBAY
- MUDDIER
- SKINNY
- RETRIEVER
- FAWNED
- ROSINESS
- CLAMMY
- LODGER
- UPPERCLIFF'S
- EXHAUSTING
- SWEARS
- WAVERLEY
- TRIMLEY
- DEEN
- PREFATORY
- REPELLENT
- LURID
- INTERFUSIONS
- CAPRICIOUSLY
- TRANSFORMATIONS
- MISBEHAVED
- PREVARICATION
- KYLAM
- TURNER
- GUASIMAS
- DISEMBARKED
- SEARCHLIGHTS
- IMPEDING
- SHOREWARD
- PONTOONS
- SLEDS
- CHUTE
- LAUNCHES
- SIGNALIZED
- SHAMING
- HAVERSACKS
- CARBINES
- REGULARS
- FLANKERS
- HALTS
- DEPLOYING
- DEPLOYED
- BRIGADE
- GREENWAY
- BEARER
- BUCKY
- O'NEILL
- LUNA'S
- BURROW
- TAMPA
- GREWSOME
- TOURNIQUETS
- MEXICANS
- CHURCH'S
- SURGEON'S
- TAWDRY
- ANTONIO
- UNCONCERNEDLY
- JORGENSEN
- RETALIATE
- UNSELFISHLY
- AGUARDIENTE
- ALIGNMENT
- TOLERANTLY
- TRAPPERS
- CHAMPNEYS
- BRODIE
- VOLLEYS
- INVIGORATING
- INTRENCHED
- SMOKELESS
- SWORD'S
- FOGS
- GUNNBIORN'S
- SCARFS
- PASTURED
- FIORDS
- SKALDS
- FAROES
- ASGARD'S
- HAIRFAIR
- BELTED
- CLASPS
- THRALL
- GREENLANDER
- THRALLS
- TYRKER'S
- NORSEMEN
- GRAPEVINES
- SHIPLOAD
- GUNWALE
- CRAWLS
- SMACKING
- FIORD
- HENDERSON
- STUIVERS
- PELLMELL
- KODAKS
- FOCUSSED
- RECKLESSLY
- LITTLEST
- WAISTS
- POLLY'S
- FATTEST
- PHRONSIE'S
- TEACHABLE
- HIRING
- INTERESTEDLY
- CHARLEY'S
- MEHITABLE
- ISAAC'S
- RATEPAYERS
- GENTLEMANLY
- EBBING
- IMPERVIOUSNESS
- IRRELEVANCE
- SNAPSHOTS
- ALF
- FEUDS
- AMOURS
- JOIN'D
- THICKSET
- ARBOUR
- MILITANCY
- ILION
- COLON
- PRODIGAL'S
- BRIDGET
- MANTELET
- FIX'D
- FASTEN'D
- GENERALSHIP
- BREASTWORK
- TOISES
- BEERSHEBA
- REACH'D
- INDENTINGS
- HER'S
- SKILL'D
- PUSHINGS
- PROTRUSIONS
- COMPRESSIONS
- LOOKER
- BOUCHAIN
- SNUFFY
- AUTHENTICATED
- RELICK
- STIGMATA
- RELICKS
- RADAGUNDA
- FESSE
- CLUNY
- SUPPRESS'D
- EXEMPT
- AEGINA
- MEGARA
- DEMOLITION
- LAUGH'D
- CRY'D
- TALK'D
- HAPPENEST
- SQUAT
- UNFORCED
- FOREFINGERS
- INSENSIBLY
- SQUEEZ'D
- FLATUS
- TOUCH'D
- FRAY'D
- LAY'D
- MORALIZE
- DOOM'S
- ENGENDER'D
- HALFPENNY
- CHUSEST
- PILGRIMAGES
- OLYMPIADS
- URBECONDITAS
- EPOCHAS
- CONTESTATION
- ALMANACK'
- ORMOND
- FAGEL
- DROPT
- SHOULD'ST
- TELLEST
- KNAPSACK
- FURBISH
- REGIMENTALS
- WOULD'ST
- FORDABLE
- DEFILES
- ACCLIVITIES
- MAES
- SKELLENBURG
- DANUBE
- CROSS'D
- LECH
- BLENHEIM
- HOCHSTET
- RENVERSING
- AERA
- CONTROVERTING
- PEDRO
- LEON
- SIEGES
- BARBARY
- LIARS
- BARBAROUSLY
- SYNONIMAS
- BOHEMIA'S
- LUSATIA
- FRANCONIA
- BAVARIA
- PROPELL'D
- UNFEIGN'D
- HONOUR'S
- FLOWINGS
- WYNDHAM
- LUMLEY
- GALWAY
- NEERSPEEKEN
- LUXEMBOURG'S
- SCARFE
- GALWAY'S
- CONTI
- RECALL'D
- TALMASH
- GROIN
- UNCOCKED
- CIRCASSIAN
- NOONTIME
- SMEARED
- RUNLETS
- GURGLED
- IMPRINT
- FESTOONED
- PHEASANT
- EROSHKA
- SCHEMED
- ABREKS
- NIZHNI
- PROTOTSK
- CHECHENS
- SUUK
- SU
- OUTPOSTS
- GAVRILOV
- GAVRILOVS
- REGALE
- COSSACK'S
- GODSON'S
- BARGAINING
- CAMPAIGNING
- STOREYS
- TROTTERS
- KISYAK
- GALLOPS
- KUNAKS
- CHURNING
- SHOPMEN
- RHYTHMICAL
- STUBS
- HARPOONS
- HOARDS
- KEEPSAKES
- DEVOLVING
- ENTRIES
- BANDANAS
- SANDSEND
- BETTER'
- FORGIVED
- DARLEY
- FRA
- SPECKSIONEER
- UNMENDED
- REDD
- DITTY
- BEFITTING
- WORSTED
- YO'VE
- KITH
- ROBSON'S
- HISSELF
- HORNY
- KINRAID'S
- MATTERIMONY
- WE'N
- OUD
- UNBOLTED
- LAMER
- LATENESS
- UNREAD
- QUAKER
- SPITALFIELDS
- PHILANTHROPIC
- DICKINSON'S
- PLEDGING
- DICKINSON
- HARTLEPOOL
- INACCESSIBILITY
- ROSE'S
- IVERY
- TEASPOON
- MODIFICATION
- LIEFER
- YO'LL
- NEET
- FORTNEET
- GRAVESEND
- GEARIN
- BRUNTON
- PENITENTLY
- SOPHISTRY
- LASSES
- TELLED
- FEYTHER
- OAT
- UNREGARDED
- HERODS
- CENTURION
- CRUCIFY
- SAVIOUR'S
- RISINGS
- COMPASSED
- RABBIS
- TERRACED
- AFTHER
- DESARVING
- SLEUTH
- REPAYS
- THROUBLE
- WORSHIPING
- PALLIATING
- DISBURSE
- ENSHRINED
- DOTING
- CONTESTING
- FARM'S
- PIQUANT
- INCONSEQUENT
- SHALLOWNESS
- PRIMED
- DOIN'S
- TABULATE
- PINED
- UNTANGLE
- NUTT
- FARIN
- DAN'S
- MOUTH'S
- BALKY
- BALKS
- ORNERY
- THOMPSONS
- EXQUISITENESS
- GALLOONS
- CELEBRATIONS
- FUSION
- BANNS
- MAYORALTY
- SWATHE
- REPAVING
- TUESDAYS
- SERGEANTS
- FILES
- EMBLAZONED
- SEYMOUR'S
- GRANDMOTHERS
- COLUMBINES
- PARALYZE
- PARISIANS
- DANDIFIED
- GRIMACER
- PEDESTRIANS
- THESPIS
- HACKNEY
- VADE
- PARODIED
- ALLOT
- TOURNOIS
- MASCARADES
- VOCIFERATE
- JOVIALITY
- TURPITUDE
- OPPROBRIUM
- CARYATIDS
- PROSTITUTION
- SHAMES
- DISAGGREGATE
- POPULACES
- BUFFOONS
- ROQUELAURE
- LIGHTERMAN
- CALASH
- MASKER
- ACCOST
- CROWD'S
- REPERTORY
- FISHMARKETS
- BOTHERS
- NABBED
- CADRAN
- BLEU
- AMALGAMATING
- ELEGANCES
- BARRAS
- DEMAGOGICAL
- YESSES
- NESTLING
- IRRECOVERABLE
- SUBLIMATED
- APOTHEOSES
- ASCENSION
- BODICE
- VIED
- VENETIAN
- SCONCES
- FAIENCE
- SILVERSMITH'S
- PAINING
- SUBMERSION
- ESTELLE
- NEMORIN
- ENCHANT
- CARATS
- QUIBBLING
- QUIRKING
- DEMAGOGUE
- PATCHOULI
- GEWGAW
- CELIMENE
- ALCESTIS
- METHUSALEM
- DAPHNIS
- CHLOE
- IDOLIZE
- PREEN
- CRUCIBLE
- COROLLARY
- AFFECTIONES
- SIMILITUDE
- CLARENCE
- STEDMAN
- BEAUX
- ARCADIAN
- OVID
- LEANDER
- SLYER
- HYMEN'S
- JANUARY'S
- MAX
- MARETZEK
- JULIEN
- GRANDEES
- PLY
- CANDIES
- GASCONADE
- MATANZAS
- ROSEBUDS
- GARNETS
- SAPPHIRES
- JEEMS
- FLYER
- DEMENTED
- KOHINOOR
- HUBBUB'S
- DILETTANTI
- TARLETAN
- CHANCEL
- REVERENDS
- CHASTENED
- GARIBALDI
- BEFITS
- SEVER
- WADES
- STYX
- CHIFFONNIERS
- CULBERTSON
- NUTTING
- DISCONTENTEDLY
- BES
- LEMME
- PUTT
- BAID
- CHILLEN
- WUSSER
- SETTIN'
- BIZNESS
- BRE'KFUS
- OFFEN
- ROAS'IN'
- CAWN
- DARNSE
- SHUK
- EV'Y
- FO'TH
- POW'FUL
- RATTLES
- MONST'OUS
- HAIVY
- KEEPIN
- HAN
- AIN
- GWINE
- NUFF
- NEX
- COMED
- SOT
- WROP
- BLINKIT
- HAID
- BLEWED
- SPEC
- WO'M
- KYOUNTRY
- YONNER
- HUNTIN
- WUNNER
- HATTER
- BLINK
- WAN'T
- SASSIFIED
- LIGHTWOOD
- ENACTING
- SPUTTER
- JES
- OOMAN
- FOTCHED
- LAN
- MUSSIFUL
- WIMMINS
- TURR'BLE
- TWA'N'T
- NUTTIN
- SETTIN
- BLINKIN
- EZ
- FLEWED
- NAW
- DIDN
- UVER
- SENCE
- NUVER
- TWEL
- TECK
- SOL
- STOPPAGES
- LINSEY
- WOOLSEY
- BRIMMED
- CAPTAING
- RIZ
- AWAR
- FOOEL
- TISN'T
- COTTONWOOD
- WOODING
- ANTIES
- BULLITS
- FACE'S
- PILOTS
- PREBEND
- WINDFALLS
- SUPERFLUITIES
- SATURDAYS
- ATONING
- REMITTED
- DOOM
- RETALIATORY
- FORREST
- REMIT
- SONSY
- EMPLOYMENTS
- IRK
- DEMEANED
- PUCK'S
- ASS'S
- HORRIDLY
- CURACY
- APPORTIONED
- ARTISAN'S
- ARTISAN
- TROUSSEAU
- POLEMICAL
- COVERLESS
- BOB'S
- SHAMEFACED
- FONDLED
- PETTING
- BOOKSHELF
- TENDERED
- ENCLOSURES
- LUMPS
- DEAN'S
- INSURRECTIONS
- LEONINE
- PALADINS
- PROSINESS
- ORIGINATORS
- DETRACTED
- MARSEILLAISE
- LYRICALLY
- LACEDAEMONIAN
- ANARCHY
- GOVERNMENTALISM
- HENRI
- FONFREDE
- AGGREGATION
- CONSTITUTING
- SUCCORED
- EXTERMINATION
- IMPROVISATION
- AIME
- GARNIER
- ROYALE
- DUC
- CONDOTTIERE
- GOVERNMENTALIST
- MOWN
- FLINGS
- ARCHANGEL'S
- SEETHE
- REDOUBTS
- QUID
- DIVINUM
- MAW
- REANIMATED
- POIRIER
- GRAVILLIERS
- SLATS
- COSSONERIE
- BERTIN
- POIREE
- CUIRRASSIERS
- CAVAIGNAC
- BARAGUE
- PLANCHE
- MIBRAY
- SUCHET'S
- SARAGOSSA
- MAUBUEE
- LITTERS
- UNACCOMPANIED
- MESDAMES
- ANGELIQUE
- MANAGES
- PATRIA
- POUNDER
- CANNONADE
- GUNNERS
- INSTANTER
- FANNICOT'S
- ESCARPMENTS
- CARTOUCHE
- BANLIEUE
- FICHTRE
- PILLAGING
- INVULNERABLE
- FILLIP
- O'
- ANTAEUS
- JEWELER
- VERITABLY
- L'HOTELIER
- CIRCUMSCRIPTION
- TENBY
- BRITTANY
- MAGNIFICENTLY
- CONDUCTS
- GIRDED
- SIFT
- QUESTIONINGLY
- DISCOMFITTING
- DISRESPECTFULLY
- DISRESPECTFUL
- SNEAKS
- CONFAB
- CROSSBONES
- CHORUSED
- TRUANTS
- TRANSGRESSORS
- SCATHING
- DECOROUSLY
- EXAMS
- EXONERATE
- MOTHERED
- STUNTS
- SAVELL'S
- FOULS
- OAKDALE
- YELPS
- PETTIFOGGING
- PICPUS
- PATERNITY
- FAUCHELEVENTS
- ACTE
- NOTORIETE
- RENUNCIATIONS
- UNASSAILABLE
- DAUPHINES
- PARURES
- GEWGAWS
- SERAPHIM
- MECHLIN
- BRIC
- BRAC
- KNICKKNACKS
- PHYLLIS
- COLORLESS
- ODORLESS
- SIEUR
- SUMPTUOUSNESS
- HARPING
- SYMPHONY
- DIOMED
- YORE
- GAMACHO
- RIGADOONS
- DOCTRINARIAN
- CHIMERICAL
- RHEIMS
- CHANTELOUP
- ARGIRASPIDES
- STUPIDS
- EMPYREAN
- BOURGEOISIE
- PRUNE
- SCRIMP
- HOUSEKEEPING
- EARTHQUAKES
- SPINSTER'S
- INDECISION
- TEMPTS
- FRUSTRATE
- DIAGNOSTICAL
- NEUROLOGY
- PARANOIAC
- NEUROLOGICAL
- DIFFERENTIAL
- PSYCHIATRIC
- HALLUCINATIONS
- ORIENTATION
- VOLITIONS
- PSYCHOTHERAPEUTICS
- REACTS
- SCHEMATICALLY
- PSYCHOPHYSICAL
- ASTIGMATIC
- ABNORMITIES
- REINSTATE
- FRICTIONS
- OUTING
- DETERIORATES
- PERMANENTLY
- RECUPERATION
- NEUROLOGIST
- COMMERCIALISM
- PSYCHASTHENIC
- FUNCTIONING
- EMOTIONALISM
- PROPORTIONAL
- COUNTRYMAN
- STIMULATIONS
- STRENUOSITY
- DISBURDENING
- ABSORBS
- SANITARIUMS
- COUNTERACTED
- PRELUDED
- SPEAKERS
- GUILDS
- CONVERGED
- MARTIN'S
- HAMPDEN
- MONTFORT
- FRIEZES
- PEMBERTON
- LONDONERS
- PRECURSOR
- CRANING
- INDRAWN
- REPEATER
- DELIBERATENESS
- ROSARY
- THROBBED
- CALDECOTT
- SNOWFORD
- COMMUNISTS
- INDIVIDUALISTS
- CONFESSORS
- ESPERANTO
- MOONS
- TORIES
- TOBOLSK
- BENARES
- YAKUTSK
- SUFIS
- EXPLOSIVES
- EXTRAVAGANTLY
- CONTRADICTING
- CORDITE
- PREMISSES
- TENSENESS
- REMINISCENT
- PEEVISHNESS
- SADDER
- SYNCOPE
- INJECTOR
- HOOTS
- PHILLIPS'S
- FELSENBURGH
- WATCHMAKER
- OVERWORKED
- LASSIE
- TULLEGORAM
- TWIRL
- SAWPIT
- SHAKEDOWN
- GOLDFIELDS
- ALLSORT'S
- PENNYWEIGHT
- FRIAR'S
- CAMPBELL'S
- TARRANGOWER
- TUBS
- RELOADED
- TIM
- NOBBLERS
- COMMISSIONER'S
- TOWNSHIP
- CASTLEMAINE
- SURINAM
- EROTIC
- PLOSS
- BARTELS
- REYS
- DISCRIMINATES
- IMPURE
- LUCIAN'S
- WELLHAUSEN
- ISLAMIC
- DIFFERENTIATED
- INSULATOR
- INFLICTING
- NEUTRALIZE
- HAMMOCK
- INSULATE
- EXOGAMY
- TOTEMISM
- INTERMITTENT
- SACRIFICIAL
- COITUS
- DISINTEGRATION
- ANIMISTIC
- TEMPLUM
- AEDIFICATUM
- CLOACAM
- REFINERIES
- BLACKEN
- SAIGON
- HINDUS
- MAHOMETANS
- ABORIGINAL
- GONDS
- CATAMENIAL
- POLLUTE
- AFFLUX
- CURETTING
- OVARIAN
- CYSTS
- EXTIRPATION
- OVARIES
- INTESTINAL
- ADHESIONS
- AUTOSUGGESTION
- NEUROPATHIC
- PRICKLING
- HARPIST
- LAURENT'S
- COCHIN
- ANNAMITE
- VIOLINISTS
- TEUTONS
- ALIQUID
- PROVIDUM
- SCHOPENHAUER
- NIETZSCHE
- TACITUS'S
- CONSULATE
- CLARENCY
- GERTIGNES
- CONSUL
- OSTEND
- SCOTCHMAN
- NAE
- BONNIEST
- BROILING
- MINISTERED
- CEILINGED
- GAYEST
- FLATTENED
- PLIED
- OILY
- INVERASHIEL'S
- ASHIEL'S
- CRATES
- HOMESPUNS
- TARTAN
- COBBLE
- UNBLINKINGLY
- HEATHER
- SHOPWOMAN
- DOORPOST
- VACILLATION
- BROOCH
- FOURPENCE
- LOAFERS
- WATERSIDE
- AXTRA
- FEESHIN
- SHENTLEMAN
- TAK
- HAE
- BIDIN
- MAISTLY
- DOON
- INNKEEPER
- PRETOVSKY
- COULDNA
- CAE
- ROMANINOV'S
- DRIPPED
- ASHIEL
- KITTLE
- INDULGES
- MISBEHAVIOR
- ESSENTIALS
- DISPROVING
- UNTUTORED
- UNCIVILIZED
- WHOOP
- SCALPING
- BOOKER
- QUACKS
- APACHE
- PROTEGE
- GRADUATION
- UNCOMPROMISINGLY
- SUSETTE
- DENTISTS
- TOPEKA
- KAW
- OWEN
- POCAHONTAS
- VERIFIED
- AGGRESSIVENESS
- MATHEWS
- RIGGS
- EVANGELISTIC
- ARAPAHOE
- WHIPPLE'S
- SEABURY
- FARIBAULT
- CHRISTIANIZATION
- OBERLIN
- COMPETITIVE
- ETHNOLOGY
- ARCHAEOLOGY
- PUTNAM
- LABRADOR
- PHILIPPINE
- COLLABORATED
- FETCHER
- ETHNOLOGICAL
- ZITKALASA
- AUTOBIOGRAPHICAL
- OSKINSON
- COLLIER'S
- GANSWORTH
- ATHLETICS
- SAVAGERY
- DAKOTAS
- UNIVERSITIES
- DEERFOOT
- LONGBOAT
- SOCKALEXIS
- BEMUS
- TEWANIMA
- METOXEN
- MYERS
- BENDER
- OLYMPIC
- ANTAGONISMS
- EASTMAN
- SLOAN
- DAGENETT
- STANDINGBEAR
- CORNELIUS
- INTENSIVE
- COOPERATE
- RECTIFICATIONS
- CURETH
- APOTHECARIES
- ORCADES
- DAMIANUS
- SAXO
- GRAMMATICUS
- LAPLAND
- FINMARK
- BIARMIA
- CORELIA
- SCANDIA
- DITHMARUS
- BLESKENIUS
- WHEY
- LERIUS
- PAULUS
- JOVIUS
- LEVINUS
- LEMNIUS
- SURFEITING
- LUBBERS
- JURIDICIS
- MEDICIS
- FISCO
- FAS
- VIVERE
- RAPTO
- APOLLO'S
- VARRO
- PLINY
- COLUMELLA
- LACTANTIUS
- HIPPOCRATES
- DISCIPLE
- SCALIGER
- FIMBRIAM
- HIPPOCRATIS
- PARACELSUS
- LATINS
- EMPIRICS
- COVETOUSNESS
- MULTITUDO
- PRINCIPEM
- INTERFECIT
- MEDICO
- QUAM
- MORBO
- PERICULI
- MISCENTES
- CALIDIS
- FRIGIDA
- FRIGIDIS
- HUMIDA
- PURGANTIBUS
- ASTRINGENTIA
- BINDERS
- PURGATIVES
- OMNIA
- PERTURBABANT
- CURTIUM
- DAMNABANT
- DISAGREED
- STUMBLES
- MERETRIX
- FORESTUS
- HERODOTUS
- STRABO
- SARDUS
- NECESSITY'S
- LIFTETH
- PANEGYRICS
- ADVISEDLY
- PURGES
- UNSEASONABLY
- IMMODERATELY
- ALTERATIVES
- COMPOUNDS
- FREMONT'S
- BUENAVENTURA
- WATERCOURSES
- TULARES
- MAULS
- RECONNOITRE
- PARFLECHE
- SCATTERINGLY
- MISTLETOE
- TUFTED
- SUTTER
- PROVEAU
- DEROSIER'S
- PREUSS'S
- RIVULET
- HULLS
- MARBLES
- REYNOLDS
- PLAYTHINGS
- SCRIBBLE
- BLACKBOARD
- HOGARTH'S
- ENGRAVINGS
- MARGINS
- PERT
- ETCHING
- PUBLIC'S
- WHISTLER'S
- LUXEMBOURG
- ACADEMIES
- PASTELS
- ETCHINGS
- LITHOGRAPHS
- TWENTYMAN'S
- INTRUDED
- PLENTEOUSNESS
- CHOWTON
- DOWNHEARTED
- USHANTING
- DISRUPTION
- HONBLE
- DISPENSATION
- USHANT'S
- GLOMAX
- VIXEN
- DOLLY'S
- FORSWORN
- MANDERSON'S
- SOLEMNITIES
- MARLOWE
- MISTRUSTFUL
- COLORLESSLY
- CRASS
- PREYING
- UNNERVED
- HOUSEMASTER
- CALIPHATE
- INSHALLAH
- DINARS
- AFORESAID
- HEREWITH
- BEGOT
- STANDETH
- MAAMUN
- CALIPH
- BESTOWER
- CUTTETH
- ADJURED
- THEREFOR
- UNCHASTE
- UNSALEABLE
- BROIL
- BELONGETH
- UNPIERCED
- DUNNED
- IMPORTUNED
- KHORASAN
- PURPOSED
- MONIES
- WOTTED
- BAGHDAD
- FAMILY'S
- UNBELIEF
- JOE'S
- FLABBY
- SEVERANCE
- HANGER
- LOAFER
- DEMUR
- TAUNTINGLY
- HICCOUGHING
- SKAGGS'S
- REGRETFULLY
- FLIPPANCY
- HYDROPHOBIA
- SKAGGS
- THA
- THROWED
- GULPED
- HATTIE'S
- COILING
- TEMPTINGLY
- MOODILY
- SCUFF
- MASH
- UNATTENDED
- PERSEVERE
- CROSBIE'S
- TOMBSTONES
- RAFFERTY
- OBTRUDED
- UNMANNERLY
- WAGGISH
- LOOKERS
- ABBOT'S
- HOBBLEDEHOYHOOD
- BACKWARDNESS
- CLUTCHES
- GASHES
- PURVEYOR
- DISSEMINATE
- IMPS
- GRINDS
- SHAKESPEARIAN
- BARRYMORE
- ROSALIND'S
- ORLANDO'S
- SULK
- VENDETTA
- CAMORRA
- PHEBE
- AUDREY
- TYPEWRITTEN
- REHEARSE
- SATELLITES
- DOORKNOB
- BIRTHNIGHT
- WHOLENESS
- THREEFOLD
- SCAMPER
- RHAPSODIZE
- DUMMY
- LEVICES
- WHIST
- HESITANCY
- ACCRUE
- ROCKER
- BEATER
- NONCE
- BORROWER
- COCKLE
- ROWER'S
- VOLUBILITY
- DISCURSIVE
- OUTDO
- ROWERS
- ABSENTLY
- JUANITA
- ABSALOMED
- STANCH
- LEVICE'S
- INAUDIBLY
- DILATING
- ASSET
- NIL
- ADMIRARI
- FUSARO
- STERLET
- VOLGA
- CARBUNCLES
- JACKAL'S
- TUSCANY
- PIEDMONT
- FACCHINO
- STOCKBROKER
- CORSICA
- TOULON
- INOFFENSIVELY
- GARD
- CAVALCANTI'S
- SHAWLS
- PUNCTUATING
- CROCKERY
- CREAMY
- CONCILIATION
- NEILSON
- INTRODUCTIONS
- ENCOMPASS
- GRANDMOTHER'S
- REASONABLENESS
- DARNS
- CURIO
- FUMED
- VIXENISH
- UNTOLLABLE
- PLENNYPENNYTINCHERY
- BLUR
- AGERS
- CLAPT
- JAMINEE
- GIVIN
- FLYIN
- BARRING
- DISCOURSING
- DACENT
- THIEVING
- CONVULSED
- DEPOPULATE
- UNCURSED
- UNPOLLUTED
- BAFFLES
- PRECONCERTED
- OVERMASTERING
- ROCKINGS
- TRIPLY
- RADIATES
- TRANSFIGURE
- WYNDHAM'S
- DIPLOMATES
- PASSPORT
- PHARSALIA
- ANTINOUS
- PYGMALION
- CONDITIONALLY
- TRANSCENDENT
- DEMEANOR
- AWESTRUCK
- UNEXERCISED
- TINCTURE
- INVOKING
- CONFESSOR
- PRUSSIANS
- HARRELSTEINS
- DESISTED
- COSTLINESS
- ELATION
- OUTSHONE
- CEDE
- SUFFUSION
- TIDES
- LIEBENHEIM
- KINSWOMAN
- TUMULTUOUSLY
- BALLROOM
- FEROCIOUSLY
- UNAFFECTEDLY
- REPEATERS
- MASSY
- BIJOUTERIE
- UNDERANGED
- PARQUET
- TESSELLAE
- EXTERMINATING
- WEISHAUPTS
- BIGOTED
- MUNIFICENTLY
- ASCETICISM
- SUBPOENA
- OF'EM
- INCOGNITO
- KELLERMAN
- MAKEUP
- HAULS
- MURPHY'S
- WATCHFULNESS
- CONNOR
- HARMON'S
- LEES
- BUNGALOW
- TRIANGULAR
- GLORIA
- GREENE'S
- WATCHWORD
- CHEMICALS
- ENGLISHEST
- DOORBELL
- RODNEY
- TOURED
- BRITISHLY
- FAFNIR
- WRENCHING
- FORCEFULLY
- MONOCLE
- BURROWED
- FORETHOUGHTFUL
- ELUDER
- JABBERING
- TUSSLE
- YORKER
- BARLOW
- HUMOREDLY
- HARDON
- HARDON'S
- FLUNK
- BRANDING
- EDITH'S
- SPONGING
- SCHOOLGIRLS
- CHAPERONAGE
- FREER
- IMPOLITE
- MERCURIAL
- CHAMPIONSHIP
- POTTERING
- PHST
- PEACOCKS
- DINNED
- BEEHIVE
- NECROMANCER
- ALCHEMIST
- RHETORICIAN
- ASTROLOGER
- BIDPAI
- PENNIES
- JOGGING
- MILLSTONE
- HACKING
- LIEF
- BOARS
- UNICORNS
- CANICAN
- FORTUNATUS
- COCOTTE
- CHOISY
- FRANC
- PREEMINENCE
- BRUNETTE
- MUFFLER
- CONCIERGE'S
- LODGERS
- VARENNES
- MORTAL'S
- CHUPIN'S
- GRIPES
- CRAMMED
- SIGHTSEERS
- DELECTATION
- IMPREGNATED
- STENCH
- CHLORIDE
- DISINFECTANT
- SPIGOT
- GEVROL
- INTENTIONALLY
- COMRADE'S
- GUSTAVE
- ABSINTHE
- JUSTIFICATIONS
- FUNCTIONARY'S
- DOORKEEPER'S
- CHUPIN
- SUBLIMELY
- PREVARICATE
- HEARTBROKEN
- RETRACTED
- MISCONSTRUED
- BRUTE'S
- INJURES
- PROLONGATION
- RAKISH
- FOPS
- NINON
- ANTECHAMBER
- PALPITATE
- NEREID
- DEFILEMENT
- ASTARTE
- ANGLICAN
- EPISCOPALIAN
- COUNTERBALANCED
- EXCOMMUNICATION
- QUATRAINS
- ACROSTICS
- COLLARED
- BASSOMPIERRE
- WESLEY
- ARAMINTA
- THUNDERCLAPS
- GRIMACE
- CASUISTRY
- BOLINGBROKE
- EQUALIZED
- JELYOTTE
- MARQUISE
- SMETON'S
- WOMANLIKE
- UNBECOMING
- ESPOUSAL
- SYNTAX
- DISPERSES
- PROSAICALLY
- PAPIST
- CATHOLICISM
- EMBROIDERIES
- DEVERIA
- DEVEREUX
- ARBITRATOR
- ROULEAU
- ROULEAUX
- TRIBOULET
- DUNS
- HUDIBRAS
- SCARRON
- ESOP
- COCLES
- CAMOENS
- VISART
- MIRABEAU
- HONORARY
- CANT
- UNTRANSLATABLE
- SLASH
- HOLBEIN
- CISTERNS
- SAWING
- GUERNSEY
- CHASTISED
- GOUGED
- PASTIMES
- COCK'S
- SPITTLE
- REANIMATES
- WHOSOEVER
- BUTTING
- PIMPLE
- CYCLOPS
- ATHLETE'S
- PUPIL'S
- GOUGES
- MERRYMEN
- FARCES
- TAVERNS
- CINQUE
- JOWL
- TOPMAN
- CALKER
- FAVOURITES
- BIGOT
- BRAWLER
- EXUDED
- FORETROCHE
- CRIMPS
- OVENS
- EMANATING
- ASTROLOGERS
- ASTROLOGY
- PRIMATE
- VIRGINITY
- VIRGINITAS
- EMPTA
- MOT
- SCROLL
- EXCLUDES
- EMBODIES
- CATALONIA
- BARCELONA
- CHEFS
- D'OEUVRE
- LULLI
- ENSEMBLE
- PASSABLE
- MANSARD
- LAMOIGNON
- RACINE
- DRYDEN
- LOUVOIS
- PEMBROKE
- EFFEMINATE
- PERE
- TELLIER
- TARTUFFE
- IMITATES
- HYDE
- INCORRECTNESS
- MESALLIANCE
- PRINCIPIUM
- DOMINI
- MADEST
- GENESIS
- TRANSFERABLE
- POSSESSOIRE
- PETITOIRE
- CULTIVATORS
- REM
- DOMINIUM
- POTEST
- NISI
- CAUSA
- OCCUPANCY
- OFFSETS
- THEMIS
- RENNES
- TOULLIER
- ESCHEAT
- CULTIVATES
- INDORSED
- IMPIOUS
- REINSTATED
- REJUVENATED
- PALLADIUM
- DEIFY
- SPOLIATION
- DISPOSSESSED
- FORMULATES
- LITERALITIES
- RETAINABLE
- NUDO
- ANIMO
- LEVY
- EXCLUSIONS
- CONTROLLER
- SUBSTANTIATE
- NILE
- GANGES
- PARTITIONING
- APPRAISING
- CONTRADICTS
- DITHYRAMB
- NEGATION
- IMMORALITY
- AXIOMS
- ROUGHER
- ASSUAGED
- DELUGES
- DECLIVITIES
- STIFFENING
- ICICLES
- GLUED
- UNSLEEPING
- BINCLEAVES
- SEPARATES
- SILHOUETTE
- SLATED
- CARVE
- FOUNDLING
- INTERSECTIONS
- SCRAMBRIDGE
- BACKWATER
- SUCTION
- LETHARGIES
- SLUMBERS
- DECOMPOSED
- TRANSPARENCIES
- IMPALPABILITY
- EXISTENCES
- AMALGAMATE
- LARVAE
- IMPEDES
- WENDING
- DIFFUSION
- ALBAN'S
- JERKY
- TOLLED
- VAGABONDS
- UNRELENTING
- JOHNSTONE
- RADIPOLE
- REFUSALS
- MISANTHROPY
- GRAINING
- GRANULATORS
- QUICKS
- IMPRECATIONS
- PROPRIA
- PERSONA
- BANQUETING
- RUBENS
- DEITIES
- CRUNCH
- SPONGERS
- PARASITE
- RABID
- VIRUS
- GATESBY
- ENGLANDER
- PROSTITUTES
- BLACKS
- HARBORED
- EKE
- CORNFIELDS
- HICKORIES
- PALMETTOS
- WHOLESALERS
- REDDEN
- RENTERS
- MISTREATMENT
- IMMIGRANT
- SHUFFLES
- DEBTORS
- RUTHLESSLY
- UNSHADED
- UNFENCED
- SEARS
- FLIPPED
- STRAPPING
- DEVASTATING
- UNHINGED
- SOBERLY
- SANFORD
- LADSON
- CORLISS
- WILLIS
- WEAZENED
- FEATURED
- DELSON
- SENNET
- OVERSEERS
- GILLONSVILLE
- FARMHOUSES
- STOREKEEPER
- PATCHING
- MATRONLY
- SWINBURNE
- FATHERHOOD
- BERKSHIRE
- WHIMPERING
- FORMLESS
- SNEEZING
- TWITTER
- NEGRO'S
- UNBOWED
- UNHOPEFUL
- UNVOICED
- FRILL
- PHANTASM
- ATLANTA
- PATTERED
- GLINTED
- WRITHED
- QUAIL
- COIGN
- UNCOLORED
- DARKLY
- DEFORMED
- CRINGE
- PRISONED
- FOREGONE
- WED
- UNMOTHERED
- PATTER
- TRANSFERRING
- VUM
- PALMIRY
- EXPIATION
- DELF
- BABYSHIP
- TASSELED
- SALINE
- BRANCHED
- SNIPPED
- CANDELABRA
- COUNTERPANE
- PLAINER
- WASHSTAND
- REARING
- UNREMITTING
- APPOINTMENTS
- AVERRING
- NELL
- SNELL
- CONDOLED
- INSPECTED
- ANNOYANCES
- RELAYS
- PANDEMONIUM
- DRESSMAKER
- VERONICA'S
- PALPITATED
- PALINGS
- LICHENED
- DOMINANTLY
- PITTING
- SACRIFICES
- DAWNS
- PURPOSEFUL
- EVENTIDE
- SAGA
- UNBLURRED
- BEGGARED
- BOVINELY
- RESPONDING
- WIFELESS
- CONFIDANTE
- EAVED
- OFFENSIVELY
- WINNOWED
- HOMELIGHT
- SANDSTONE
- UNFEIGNEDLY
- SCARFED
- STAUNCH
- UNBELIEVINGLY
- UNTARNISHED
- ASPIRATION
- UNHINDERED
- BLUNDERINGLY
- YEARNINGLY
- FOOTER
- MUSHY
- FOOZLE
- CAD
- TODDLED
- PURPLY
- PROM
- FIENDISHLY
- SNUBBING
- FLATTISH
- RUGGLES'S
- STUNNING
- NUDGED
- ASSERTS
- SILLIEST
- SPOONEY
- FLATFISH
- IDIOTICALLY
- SIMILE
- QUR'AN
- FECUNDATION
- PAIRING
- UNACHIEVABLE
- PHONOGRAPHY
- GILDING
- WAINSCOTING
- AZULEJOS
- LEBEL
- ABDUCTIONS
- DISAPPEARANCES
- CAVERNS
- CHAROLAIS
- COURCHAMP
- PENNYWELL
- VI
- CLAM
- PRECARIO
- OUBLIETTES
- SAVOURING
- STAIRCASES
- EXITS
- CONTRIVANCES
- NICHES
- ELTZ
- RIZZIO
- CROSSWAY
- QUAINTER
- FITTINGS
- ENAMELS
- MINIATURES
- HYPOCHONDRIASIS
- BOUDOIRS
- LACEWORK
- QUEENS
- TRITONS
- TERMINATIONS
- PRISMATIC
- SPARKLES
- PIGEON'S
- BERYLS
- MAB
- GEO
- RETARD
- ENCOMPASSED
- MINOTAUR
- CLEMENCY
- OMNISCIENT
- SURMOUNT
- BALLASTED
- PORTHOLES
- STOWING
- FLAPS
- SLOOPS
- TILLER
- POINTI'S
- CARNERO
- AXLETREES
- HOMO'S
- GWYNPLAINE'S
- HESITATES
- REOPEN
- SEMESTER
- UNPREMEDITATED
- ABSTRACTEDLY
- GRAMMATICAL
- REINSCRIBED
- REENROLLED
- OVERDUE
- HOSTESSES
- REENTERTAINING
- HALLUCINATION
- INCLOSE
- OUTGROWS
- PALTRINESS
- ADDLED
- SLUTTISH
- TINGLED
- ENGENDERS
- LINEAL
- DESCENDENT
- FRANCOIS
- RABELAIS
- EDITORIAL
- PARAGRAPHERS
- LASTLINE
- SPOOFING
- ISAIAH
- IMPRECATING
- CHRONICLING
- MUDDLED
- JESTER
- TICKS
- PESSIMIST
- BIDDLE
- CAPTION
- ODDITIES
- PLUMPLY
- ABAFT
- PENCILLED
- PINKISH
- RACKING
- HURLY
- INTELLIGIBLY
- FRONTAGE
- SCRIBBLERS
- TRAVAIL
- TETCHY
- SCHOOLED
- STEWING
- LAUREATE
- NICOTINE
- SAINTED
- BLASI'S
- REHBOCK
- EXPLAINS
- ARRANGES
- BOUNCED
- DEALER'S
- FATHOMED
- ONEROUS
- INITIATION
- LANGDON'S
- PRONGED
- PITCHFORK
- COMPUTE
- DIRGE
- ASPIRANT
- CENTIPEDAL
- TRICKED
- DIABOLUS
- BLAKE'S
- MYSTIFY
- RIVERMOUTHIANS
- TOWNSFOLK
- SIGNBOARDS
- TRUSTFULLY
- PEANUT
- CLAPBAM
- DISCLAIMED
- HAYMOW
- COOPS
- EXHUMED
- SELECTMAN
- MUDGE
- CRONIES
- STAGECOACH
- PETTINGIL'S
- ICECREAMS
- MARDEN'S
- ABLY
- SYRUPS
- WENDED
- PESTLE
- PLACARD
- LIVELIER
- ANDREWS'S
- LUSTILY
- WINDMILL
- TEMPLARS
- STAGGERER
- MULATTER
- GALLOWSES
- BRACES
- PHILISTINE
- UNJUSTIFIABLE
- CASTE
- TRADITIONARY
- POMMELLED
- DUMPLING
- COCOANUT
- NUTTER
- CONSISTENTLY
- DRO
- PUGILISM
- BLINDERS
- UNPREPARED
- VERSUS
- GRIMSHAW'S
- MALTREATS
- TAMEST
- MUMBLED
- MOLESTATION
- ABIGAIL'S
- SANITARY
- OPODELDOC
- REQUISITION
- FLIRTATIONS
- SWEENEYS
- CASSIDYS
- PRESBYTERY
- INERNEYS
- SCRATCHINGS
- BACKSTAIRS
- FRIEN'S
- INCRIMINATING
- SHAMED
- SECRET'S
- TURGID
- MISSHAPEN
- CLANE
- CHILDER
- STINKIN
- BUSTED
- PINSIONS
- HAN'S
- SWEEPERS
- WESKIT
- CONNIVANCE
- FOULLY
- SOMEONE'S
- MISLAID
- BRICKS
- SOOTY
- LACERATED
- WHILES
- SCREECHING
- EGGED
- DOMNED
- SCREECHIN
- DEASEY'S
- HANKERING
- MORBIDITY
- CAROLLINGS
- MERCIANS
- PLOD
- THAW
- SPREE
- BOLGIE
- LYCIDAS
- NATIVITY
- CAPO
- FAIRFIELD
- HALIBURTON'S
- CLEMENT'S
- PINCKNEY
- WALTER'S
- EPHREM
- STITCHES
- HANOVER
- GERRY'S
- UNCTION
- CONFECTIONERS
- MILLIONNAIRES
- FROTHINGHAM'S
- MANGER
- CAMP'S
- SISERA
- RETORTING
- APPROPRIATELY
- BLUFFED
- GRABBED
- BRIDLES
- LOPE
- AGENT'S
- BATCH
- REVOLVERS
- LIABILITY
- OUTWITTING
- STATE'S
- SIDING
- CULLEN
- RALLES
- KANDA
- ADZUMA
- OJI
- PROFLIGATE
- AMAZE
- GULLED
- SMARTENED
- CONSENTING
- RONIN
- CHOKICHI'S
- HATAMOTO
- ASAKUSA
- YEDO
- PEDIGREE
- MENDICANTS
- SORCERERS
- DIVINERS
- HERMITS
- DISOBEDIENT
- COBBLERS
- TORIOI
- SHAMISEN
- BURYING
- CONCUBINE
- MUNEMORI
- LEDA
- CENTAUR
- PELION
- POSEIDON
- WINGING
- AGAMEMNON
- PELEUS
- TELAMON
- OREITHYIA
- ERECHTHEUS
- BOREAS
- THESEUS'S
- ANAURUS
- METALWORKERS
- SMITHS
- ZEUS'S
- PELIAS
- ENCIRCLES
- PECCARY
- DUGONG
- NEB'S
- HAYMAKING
- HOUSED
- PULLEYS
- TRANSIT
- GREASING
- POLISHING
- JUP
- CYLINDROCONIC
- RICOCHETED
- MANDIBLE
- JAGUARS
- THOUGHTLESSLY
- MAYN'T
- HUMANITY'S
- HARDING'S
- PROPS
- AYRTON'S
- STANCHED
- HERBERTS
- INFLAMMATORY
- STYPTICS
- ANTIPHLOGISTICS
- TEPID
- COMPRESSING
- SUPPURATE
- LINT
- CICATRIZATION
- COAPTATION
- TOP'S
- SHAMASH
- SUBBILULIUMA
- CARCHEMISH
- TABAL
- CILICIA
- KHILAKKU
- TARSUS
- TIANA
- COMANA
- KAMMANU
- THRACO
- PHRYGIAN
- MOBILITY
- RAIDED
- TUKULTI
- NINIP
- REVOLTING
- BUBU
- NISHTUN
- AKHIABABA
- KHABAR
- BALIKH
- ADINI
- KIRKHI
- REPRISALS
- METED
- KINABU
- UNFAITHFUL
- DAMDAMUSA
- TELA
- TELLO
- DEVASTATED
- STRATEGICAL
- SUPPLANTED
- ZABDANU
- KASHSHI
- KALDU
- REDECORATED
- JAIF
- KESHAF
- INDISTINCTLY
- KURDISH
- SUMERIAN
- AKKADIAN
- ASSYRIA'S
- DELINEATING
- CANAANITES
- AHIJAH
- ABIJAH
- JESHANAH
- RAMAH
- MESOPOTAMIAN
- TABRIMON
- HEZION
- ARZA
- ZIMRI'S
- SHORTLIVED
- PHILISTINES
- MICAH
- BACKSLIDERS
- PUTTETH
- AKHUNI
- HITTITE
- ORONTES
- AKHABBU
- REPULSING
- OVERLORD
- BORSIPPA
- CUTHAH
- UNCONQUERED
- CONSPIRE
- DEPOSE
- JEZREEL
- NIMSHI
- DRIVETH
- HERMON
- TYRIANS
- SIDONIANS
- YAUA
- UNPRONOUNCED
- ALEPH
- JEHUA
- BAAL
- BACKSLIDER
- CULT
- ISRAELITISH
- MUSICK
- FOLLOWETH
- JEHOAHAZ
- DANIN
- APLI
- IMGURBEL
- BALAT
- DEPENDENCIES
- RECONQUEST
- BABYLONIAN
- DABAN
- BLASTING
- MORTGAGING
- PROPELLERS
- STRUCTURAL
- DIRIGIBLE
- DEFLATION
- GAMELY
- REMEDIED
- KAISER
- LIQUIDATE
- CRUCIAL
- COLLIDING
- CLIMATIC
- CONFLAGRATION
- UNPREJUDICED
- SYNONYM
- LEVELHEADED
- ZEPPELINS
- FRIEDRICHSHAFEN
- DOCKS
- AIRMEN
- CONVINCINGLY
- RATED
- TANTAMOUNT
- PARSEVAL
- AERONAUTICAL
- RUTHEMBERG
- SIEMENS
- SCHUKERT
- SERVICEABILITY
- CONCLUSIVELY
- COLONELS
- ROUGHEST
- RESPLENDENTLY
- SUMPTUOSITY
- ELIOT'S
- MATHER
- ATLASES
- PRINTER'S
- UNPRIZED
- MOULDERED
- SURPLUS
- UNWRITTEN
- COMPILER
- LIBRARIAN
- SPOFFORD
- SUPERINTENDING
- TOILETTES
- BARONET'S
- SYMMETRICAL
- PITT'S
- BUTTERED
- RAWDING
- CRAWLEYS
- DAMPED
- SMUGLY
- STARCHED
- RAGLAND
- FONDER
- VILLAIN'S
- SURMISED
- BAILIFFS
- ENSUE
- RAWDON'S
- AUGURING
- LIVERIES
- SILENUS
- GREENGROCER
- LOLLING
- JOKED
- KNIGHTSBRIDGE
- CONSORTED
- FANCIERS
- BRUISERS
- BRISTLY
- MARKER
- CAMPAIGNER
- QUOD
- INCOHERENTLY
- KEP
- MACMURDO'S
- LORDSHIP'S
- SHINTY
- LANDLORD'S
- DRINKIN
- DRAWINGROOM
- MUSTACHIOS
- INEXPRESSIBLY
- TAPEWORM
- GLUM
- GEORGY
- AMELIA'S
- CONCLAVES
- DISTRAITE
- WEBER'S
- RUMMAGING
- GEORGY'S
- MONOGRAM
- EDIBLES
- RUSSE
- JELLIES
- MOUSSES
- CAVIARS
- DAUBED
- SOAPS
- TOBACCOS
- PROPOSES
- CIRCUMVENT
- BETH'S
- ERASTUS
- CHALLENGES
- FAIRVIEW
- VOTER
- NOMINATIONS
- DISTRIBUTORS
- TIRADE
- FRIZZLE
- MISSES
- EDUCATING
- SOUTHERNER
- GALLING
- UNTENANTED
- VERGING
- CRISPLY
- COLLINGWOODS
- INTRUDING
- UNEXPLAINABLE
- CYNTHIA'S
- UNUTTERABLE
- CANOPIED
- PORTENT
- SYMBOLISES
- CYNOSURE
- CRIMINALITY
- CANTONMENT
- POINDEXTER'S
- PAYED
- SYMPATHISER
- BULLYISM
- UNBURDEN
- CICADAS
- FORMALISED
- INTERROGATORY
- CONJECTURING
- CULMINATES
- GLINTS
- RICOCHET
- CORPOREAL
- VULTURES
- PLANTER
- MUSTANGER
- CASA
- CORVO
- WOODLEY
- HACIENDA
- COVARUBIO
- LLANOS
- SADDLING
- JOYED
- LAZO
- REGULATORS
- SUMMARY
- SWARTH
- TEXANS
- ISIDORA
- PATHLESS
- WITHAL
- CORRALES
- PHELIM
- O'NEAL
- YCLEPT
- FLORINDA
- GOBBLER
- EXCITES
- BENIGHTED
- CRAM'S
- LINGERS
- CRUM'S
- MISHAWAKA
- GOSHENITES
- HOOSIER
- JUMBO
- DELIGHTFULLY
- JAUNDICE
- PHLEGMATIC
- PUMPKIN
- YOUNGSTER'S
- RELINQUISHING
- UNBIDDEN
- SAGEBRUSH
- REBEAUTIFIED
- OHIOANS
- RESOUNDS
- PERRYSBURG
- BELLEVUE
- GARFLELD'S
- MENTOR
- GARFIELD'S
- FRUCTIFEROUS
- GLISTEN
- INTERSPACES
- GIRARD
- TAHOE
- NEVADA
- WHEELMAN
- TORTUOUS
- ANGOLA
- SWASHING
- MISFIT
- AUDACIOUSLY
- SWIPES
- JOGS
- ISHAM'S
- TRAYNOR
- NELLY
- CONGESTION
- MEMBRANES
- ACCENTUATION
- PROVINCIALISM
- AFEAR'D
- COWLD
- SLUICE
- INTERCOSTIALS
- SANGUINEOUS
- WEATHERIN
- CHAPS
- HEXAMETERS
- GRIDDLE
- RAYSON
- MEDICATRIX
- CONTIGIT
- HYDRAULIC
- THEM'S
- DISAZE
- YE'RE
- TEMPORIAL
- SALTPETRE
- DIAPHORESIS
- PHENOMENONS
- INORGANIC
- SHINDY
- PUER
- INGENUUS
- FRONTAL
- RELISHED
- TANKARD
- BAYCON
- TROTH
- SORRA
- ILLIGANT
- PRETENTIOUSLY
- OBSTRUCKT
- IMPADE
- CORRESPONDIENCE
- SARVE
- SINTRY
- YAWL
- GRENADIER
- CARBINEER
- FRINCH
- BRIGADED
- ARENTSCHILD'S
- HANOVERIANS
- SCREWING
- SCRAPING
- SURROUNDERS
- CHANSONS
- RHINE
- RHONE
- SATIS
- HEARTIEST
- FLATTERIES
- IRRITABILE
- TUPPENCE
- DANCIN
- HANDIN
- EMPRESSES
- SACRET
- PENKNIVES
- BODKINS
- BEGINNIN
- FORNINT
- FASTIN
- THRIVES
- DROPPIN
- FILTERIN
- GLITTERIN
- REFULGENT
- DISTURBS
- O'ERCASTS
- GILD
- YELLOWER
- MOUNTAIN'S
- VALES
- SWAINS
- ACCUSES
- DOABLE
- THRAVELS
- HOLLANDIA
- UNDEFENDED
- MEGANTIC
- INEXPRESSIVENESS
- INEXPRESSIVE
- DELUSIONS
- LAKE'S
- SUCHLIKE
- CASTON
- SHELVED
- PROTECTIVE
- SWEETHEARTS
- FALMOUTH
- LINER
- SOMERSET
- STONEHENGE
- ROMANIZED
- BRITONS
- DYKES
- GLOUCESTERS
- TEWKESBURY
- NORTHMEN
- CORINTHIAN
- TRIENNIAL
- SUCCINCT
- AMELIORATING
- CANDIDACY
- PHILOLOGY
- LINGUISTICS
- ENUMERATION
- INSEPARABLY
- INCLUSIVE
- REDOUND
- PARDONS
- ANACHRONISM
- BEHOOVES
- CONFOUNDING
- INDISCRIMINATELY
- PROPAGATORS
- DISAVOW
- COMPATRIOTS
- FULMINATING
- BROCHURE
- PUBLICIST
- DISGRACING
- FORWARDING
- VIOLATING
- RECONSTRUCT
- EQUALIZE
- IMPROVISATIONS
- UTOPIST
- ROUSSEAU
- INCENDIARY
- GRAVEST
- PERORATION
- PRUNING
- CHARACTERIZE
- MANIFESTOES
- IMPARTIALITY
- BLANQUI
- TERMINATES
- WEARIES
- RECAPITULATING
- ADMINISTRATIVE
- PELLET
- GALAIL
- EDDY
- EVILDOER
- OGRESSES
- CLAMBER
- NIMBLY
- PARROT'S
- GEW
- GAWS
- MAIDEN'S
- CRIMEAN
- SWARDESTON
- CONFORMANCE
- HOXTON
- PANCRAS
- COMMENTING
- FOOTSORE
- OFFENSES
- FABRICATIONS
- EXPOSTULATE
- LEVAL
- HEROICALLY
- ENLISTMENT
- SILVERED
- PEARLED
- BUTTRESS
- TRANSGRESS
- REFECTORY
- MOULDING
- WRING
- DEVOTIONAL
- TEENS
- PROVISIONAL
- PUP
- WRONGER
- ILLUMINATIVE
- STETHOSCOPE
- PALPITATING
- CONVENTIONALLY
- UNDOING
- WILMINGTON'S
- FLAGMAN
- DEAFENINGLY
- JOUNCING
- GRASSLESS
- WINDOWY
- CINDERS
- ASPHALT
- WATTEAU
- RUSSETS
- VURRY
- CORNERWISE
- AUNTY
- MORRELL
- JULIET'S
- CARVEN
- NORAH
- AFFLUENT
- NORTHWICK'S
- HAUTEUR
- NORTHWICKS
- IMPROBABILITY
- OPPOSITES
- FILIAL
- IMPOSINGLY
- FEINTS
- SASHES
- SPLICED
- FROWSED
- LYRA'S
- SUPERFLUITY
- ROMEO
- REHEARSALS
- INCARNATIONS
- MEDIOCRE
- DOMINATES
- IRONIC
- CUSSEDNESS
- ROMANTICISTS
- SALEM
- LONGFELLOW
- AMOS
- BRONSON
- ALCOTT
- EUCALYPTUS
- POMEGRANATE
- SORDIDNESS
- CATHAY
- BROBDINGNAGIAN
- DAMPENS
- ALKALI
- PHOTOPLAYS
- THRILLINGLY
- THINNESS
- ENUMERATE
- ROCKIES
- EXISTENT
- SHOUTER
- HURRAHING
- CONFETTI
- RELEGATES
- EPIC
- OVERWHELMS
- PHOTO
- PLAYWRIGHT
- MOBS
- ROMANESQUE
- REDWOODS
- REPROVED
- MELODRAMATICS
- ASHTON
- TREADER
- SCENARIOS
- INCONSIDERATE
- ULTIMATUM
- UNCONGENIAL
- COUPLING
- FLIRTATIOUS
- KENELM
- BOURNEMOUTH
- HEARTHRUG
- ADORES
- BUTCHERED
- WOMENKIND
- HARMFUL
- CONDESCENSIONS
- UNEXACTING
- FIXITY
- HA'PORTH
- INTERPENETRATING
- UPBRINGING
- PLEASINGLY
- BEHINDS
- RESHAPE
- PIGGY
- WIGGYS
- SEIZES
- REALIZATIONS
- MEDITATIVELY
- LASSO
- REVIEWED
- WHO'D
- SPENDERS
- WASTERS
- SKYLINE
- JUDGMENTS
- GRAMMONT'S
- REGRETTABLE
- COMPLETEST
- PRACTICABILITY
- REFT
- HARVESTER
- MENSERVANTS
- PIECED
- ASCRIBING
- COMPLETER
- SCIENTIFICALLY
- COMMONSENSE
- REPLACEABLE
- UNWONTED
- OUTRUNS
- STIFLES
- DAIS
- PERSEIS
- STRAYS
- CUPIDON
- CRATE
- APROPOS
- ILLUSTRATING
- DRIVELLED
- GRANDILOQUENT
- INGENUE
- BOHEMIANS
- DEBUT
- ARSENIC
- FENDER
- PALETTE
- BOHEMIENNE
- REVILING
- POUTING
- CHIMED
- UNLOCKING
- SEALSKIN
- SOURED
- TYPHOID
- FATALLY
- CRAYON
- INFATUATED
- SOPRANO
- TROTERE'S
- GARLANDS
- WISHERS
- REVIEWERS
- MINGLES
- ENDORSED
- GUSHING
- INANE
- JEUNESSE
- DOREE
- RECOGNISING
- ZELIE
- SPRAWLY
- DISFIGURING
- VALUELESS
- OBLITERATION
- APERTURES
- DIABLERIE
- ENHANCE
- LAUDATORY
- VOSGES
- METHODICAL
- ENMESHED
- CHANTREY
- ADIEUX
- RESUSCITATE
- GLEAN
- OVERDOSE
- MORPHIA
- HOUSEHOLDER
- EVILDOERS
- NOTS
- RINGED
- SPRITE
- NIGHTINGALES
- DISPEL
- PLAYFELLOW
- BRUNHILDA
- CRONES
- TRELLIS
- CARROT
- RADISH
- SPROUTING
- FORSOOK
- GNOME'S
- BRAIDED
- BRIDLED
- SUBMISSIVE
- MOSSY
- ENCUMBERED
- THUNDERCLOUDS
- CORNISHMAN
- ARNOLD'S
- CELTIC
- FINSBURY
- CENSORIOUSNESS
- GUY'S
- COWDEN
- CLARKE
- INTENSIFIER
- QUALIFIES
- VULGARLY
- YARMOUTH
- MERCI
- INDICATOR
- UNWARRANTABLE
- DEPRECIATING
- COLVIN'S
- MORBIDNESS
- MAWKISHNESS
- VIRILITY
- RECASTING
- PAGANS
- REFORMER
- FELICITATE
- UNSUSCEPTIBLE
- PLEASURABLE
- WOFUL
- COLERIDGE'S
- IRREMEDIABLE
- INEXPUGNABLE
- UNREMITTINGLY
- AVERSIONS
- UNDIMINISHED
- ANALYSING
- COMPLEMENTS
- CORRECTIVES
- SEQUENCES
- COHERE
- HAL
- BOWNTANCE
- ARRAGON
- BLUFFLY
- ADDLE
- DOFF
- BESHREW
- MINION
- MARK'S
- GRAPPLED
- HERNE
- FORTIFICATION
- EIGHTH'S
- OCTANGULAR
- LOOPHOLES
- ARQUEBUSIER
- CONVEYS
- NINETIETH
- ANN
- BRADSHAW
- TOLERATIONS
- BUCCA
- FISSA
- COMPRACHICOS
- CHEYLAS
- HARDQUANONNE'S
- INDICTMENTS
- FULMINATIONS
- HANDWRITINGS
- HEBRIDES
- QUARTOURZE
- NARBONNAIS
- LUC
- GALDEAZUN
- INTERSPERSING
- MATUTINA
- BISCAY
- TILE
- GERNARDUS
- OAKUM
- MALEFACTOR
- PLAGIARY
- WRETCH'S
- CRUCIFIED
- DENZILL
- VERIFICATION
- ATTESTATIONS
- PURSY
- ERMINE
- WAIFS
- UNCORKED
- ANATOMIST
- BIDLOO
- YELVERTON
- LONGUEVILLE
- ROTS
- LINNAEUS'S
- CHANCELLORSHIP
- PATHOLOGICALLY
- LACENAIRE
- COMMITS
- DISSIMULATE
- SIMANCAS
- PAYMASTER
- ENDORSEMENT
- JUSSU
- UNFEMININE
- MONARCHIES
- AULIC
- CARLOVINGIAN
- AURICULARIUS
- PALATINE
- LAPWING
- HUDBUD
- SENIORATU
- ERIPIMUS
- ROTURAGIO
- CADAT
- READJUSTING
- TARNISH
- BLAZON
- ABDOLMUMEN
- REINSTATEMENT
- MISCALLED
- ZENA'S
- UNRAVELED
- ACUMEN
- THREADBARE
- DODDERING
- KNOLL
- TRESPASSERS
- HILLY
- CLIMBABLE
- BRACKEN
- NOTCHED
- UNERRING
- SCRUB
- TAINT
- MANIA
- BORNEO
- DETECTION
- LUNATIC
- ELUCIDATED
- BURGLARIES
- AMATEURISHNESS
- BURGLE
- BURGLED
- FACEACHE
- REPETITIONS
- FLOATERS
- MUDBANKS
- BLACKBIRDS
- HERONS
- PLOVERS
- FORAGING
- HAWK'S
- PROPHET'S
- CLOUD'S
- SNAGS
- SPRING'S
- TRAUT
- BOGGY
- EDDYING
- BREAKNECK
- AWNING
- MUDBANK
- HALLOO
- ODOROUS
- FERRYING
- AVOWEDLY
- FERRYMEN
- GUARANTIED
- FERRYMAN
- ERIE'S
- MILE'S
- PARLEY
- DEVIOUS
- HINGELESS
- SHIFTLESSNESS
- UNTASTEFUL
- PRETENTIOUSNESS
- WHEEZY
- DIAGONALLY
- SIDELONG
- JOCKEY
- OSTRICH
- PINEAPPLES
- SANKEY
- WHEEZES
- MORNIN'S
- MILKIN
- HUL
- BLOWIN
- FASHI'N
- BUSTLES
- HOOPSKIRTS
- UNDERWEAR
- CREAKY
- CLEAT
- HI
- DUMPY
- SHOVELLED
- COB
- CUSPADORE
- VINEGARY
- SLANGY
- GLIB
- COTERIE
- USIN
- SOME'N
- JINED
- KNOWIN
- KIDS
- NOUGH
- KEM
- LOWED
- AGINT
- WORKIN
- DUCK'S
- HAMPTIN
- KEER
- DRUV
- GOV'MENT
- AMMERNITION
- WAGIN
- VETERAN'S
- QUONDAM
- CANOEISTS
- TENS
- ASTONISHINGLY
- INEDIBLE
- DOUGHY
- PASTEY
- GRAVY
- LEATHERY
- ADULTERATED
- SALERATUS
- ATKINSON
- HYGIENISTS
- WIDEN
- MORASSES
- DIREFUL
- CAPSIZED
- INTERVENE
- WORT
- LOBELIA
- SUNBAKED
- ASHY
- BURLINGTON
- QUINCY
- SPURT
- VOYAGED
- PORTAGE
- CHURLISH
- CANOEIST
- CHAINING
- PAYMASTERS
- SWINDLING
- JEOPARDIZING
- EXCHANGERS
- DYNAMICS
- ADVERSELY
- SHAREHOLDER
- PILLS
- GAMBLED
- INTERMIXED
- ECONOMICALLY
- SUBSIDY
- NOMINATING
- USURPING
- DEPOSED
- CONTROLLERS
- HARMSWORTH
- ADJECTIVE
- MISDEMEANOUR
- CLAMOURS
- ELIMINATION
- AUDITING
- GOVERNS
- CRITICIZE
- DUPED
- WARPS
- DEPLETES
- SNIPES
- PARVENU
- CONNOTE
- VAGUEST
- PLATITUDES
- INVERTEBRATE
- FORMATIVE
- AGNOSTIC
- NATIONALIZING
- OPPRESSOR
- COGNATE
- BREEDERS
- DEBATS
- ECCENTRICITY
- CONNOTES
- PENALTIES
- CENTRICS
- PLAINEST
- SUPPRESSIONS
- SPECIALITY
- CENTRIC
- TEETOTAL
- DIABOLISTS
- RATIONALIST
- ATHEIST
- PARTICULARISM
- CANCEL
- PARTICULARIST
- REVOCABLE
- DUAL
- EDITOR'S
- VULNERABILITY
- VULGARIAN
- DISTORT
- CONTRASTS
- NOBODIES
- LIMELIGHT
- NOMINATE
- NUMEROUSLY
- MULTIPLYING
- HUMANITARIAN
- OVERAWE
- DISPIRIT
- VINDICTIVENESS
- STRATUM
- SYMPATHIZING
- MISSILES
- GLASCOCK
- JARVIS
- BANTERED
- INNOVATORS
- IRREDEEMABLE
- HIRELING
- EXTREMISTS
- FEASIBILITY
- DISQUALIFY
- ASSORTING
- FILING
- ABOLITIONIST
- INDEPENDENCE'
- INDWELLING
- BLESSEDNESS
- OLASTON
- WITHSTANDING
- STILLING
- GLASTON
- HEAVINGS
- REPASSED
- APOSTOLIC
- MISREPRESENTATIONS
- TASTEFUL
- UNHOMELIKE
- ALMSHOUSE
- PAUPER'S
- CLIME
- CARVING
- WARWICKSHIRE
- RITTENHOUSE
- PLANETARIUM
- POTTS
- IRONSIDES
- ALLIBONE
- ALMANAC
- SUBSCRIBER
- BOOKSHELVES
- CHAILLU
- LOWELL'S
- IDYL
- FENIMORE
- EDGAR
- ALLAN
- POE'S
- POE
- ERASURES
- DANTE'S
- DIVINA
- COMMEDIA
- WORDSWORTH
- PRESIDENTS
- PEABODY
- WOOTTON
- FLINTSTONE
- MILKROOM
- DUFFERIN
- NORTHCOTE
- WALLER
- HUGHES
- EVARTS
- CENTENNIAL
- GRANT'S
- NEWSBOYS
- INGOLSTADT
- SLAKED
- ALLURED
- INCLEMENCY
- PURLOINED
- THRUSH
- COTTAGER
- INTERCHANGING
- POIGNANTLY
- ENDEARED
- SADDEST
- PERIODICALLY
- RECOMMENCING
- DISPELLING
- ENTRANCINGLY
- ENRAPTURED
- PEACEABLY
- VOLNEY'S
- DECLAMATORY
- DEGENERATING
- NARRATIONS
- DOTED
- DEIGNS
- SHARPENING
- PLUMING
- NIGGER'S
- QUELLING
- CADEROUSSE
- HEELED
- ALLIANCES
- PINCETTE
- BAILED
- RESIDES
- MENACES
- COACHMEN
- DISSIPATING
- TARQUIN
- LINDEN
- HELIOTROPES
- FLICKERINGS
- NOIRTIER'S
- DENTED
- IMPASSIBILITY
- POISONERS
- BEVERAGES
- UNBLEMISHED
- WHIRLS
- SUCCINCTLY
- RALLYING
- FELONS
- PERVADED
- PIAZZI
- BARBERINI
- MAZZINI
- BELGIOJOSO
- QUIRINAL
- ENDEARMENT
- PINCIAN
- PORTER'S
- RIETI
- CORSO
- DORIA
- SEQUINS
- GONDOLAS
- STILTS
- PULCHINELLOS
- AFFETTATORE
- CONFESSES
- CONVEYANCE
- LAUDATION
- ANTONINUS
- FAUSTINA
- PIQUED
- BLOCKHEADS
- HELDER
- GAND
- COLOSSEO
- GASPARONES
- FOREWARN
- CREDENCE
- PORTA
- POPOLO
- ALBERT'S
- PRESERVERS
- PASTRINI'S
- TERRACINA
- CORNEILLE
- LACRYMA
- CHRISTI
- BUGABOO
- LARA
- FERENTINO
- ALATRI
- PESTE
- PRECOCITY
- STYLUS
- FELICE
- IMITATIVE
- GIOTTO
- PINELLI
- VALMONTONE
- FELICE'S
- LIVERIED
- BRESCHIA
- OLIVETREE
- SABINE
- CONTADINO
- SABINES
- EXTIRPATED
- GARIGLIANO
- AMASINE
- SONNINO
- JUPERNO
- GASPERONE
- DISQUIETUDE
- SURVEYOR
- BANDIT'S
- LASCIVIOUSNESS
- ENTREATIES'
- CLINCHED
- DIOVOLACCIO
- DIOVALACCIO
- PICKAXES
- PICKPOCKET
- UNFEELINGLY
- UNPOPULARITY
- VACUOUS
- BILLINGSGATE
- EXASPERATION
- POSTER
- PEARS'S
- ARGUABLE
- TRAMPS
- IRISHMEN
- BROADBENT
- REFRACTING
- DOCTRINAIRE
- JUSTIFIES
- FIZZING
- ANGELO
- VELASQUEZ
- ADMIXTURE
- FALSIFICATION
- DUBEDAT'S
- FARCICAL
- COMEDIES
- FALSIFICATIONS
- DOCTRINAL
- COMICALLY
- PARAMORE
- PHILANDERER
- MISANTHROPE
- EPIGRAMS
- OUTRAGING
- BELLIED
- CONTROVERSIALISTS
- DISILLUSIONIST
- SCEPTIC
- CHALLENGING
- SCOLDINGS
- BEATINGS
- CRAW'S
- BLUEBELLS
- OVERSLEPT
- MELON
- ZIZZ
- WARDER
- SMOOTHER
- PEEK
- SIMPLETON'S
- ZENZA'S
- CHARGER
- TILDA'S
- HARNESSES
- CARTWHEEL
- WHIRRED
- SHABBILY
- TANGENTS
- OCTAVES
- VOLATILE
- WORKBOX
- BURNEY
- EXECUTANT
- CLAVICEMBALO
- CLAVECIN
- SPINET
- PROGENITOR
- TRANSCRIBED
- BAPTISTE
- LULLY
- EXTENSIVELY
- ORCHESTRATION
- ADVISES
- IMPLICIT
- BEGINNERS
- PRACTICING
- RAMEAU
- COUPERIN'S
- VIRTUOSITY
- DOMENICO
- JOHANN
- MIRRORING
- HARMONIES
- EXPRESSIVENESS
- SPONTANEITY
- DEFACE
- PIANIST'S
- EXEMPLIFY
- TUNING
- TUNER
- FORKEL
- SUITES
- FANTASIA
- RHAPSODIST
- RUBINSTEIN
- SOULFUL
- POLAND'S
- COMPLEMENTING
- LANGUOROUS
- SCINTILLATING
- HOMESICKNESS
- DIVINEST
- UNFETTERED
- MELODIC
- SPIRITUALIZED
- TONAL
- UNBARED
- JARRING
- POLAND
- TONALITY
- UHLAND
- PREDOMINATES
- SOFTENS
- UPLIFTS
- STRENGTHENS
- TRAVAILING
- ORGANIST
- THEORIST
- DUNCE
- REVERING
- LISZT
- RAPHAEL
- INDIVIDUALIZED
- PIANISSIMO
- GRADED
- FLUCTUATIONS
- BALZAC
- CHOPIN'S
- KLECZYNSKI
- REPRODUCTION
- ANDANTE
- PRESTISSIMO
- ARPEGGIO
- INDIVIDUALIZATION
- HARMONICS
- PEDAL
- ADVOCATED
- MISAPPREHENSIONS
- MISINTERPRETED
- ASPIRANTS
- SENTIMENTALISM
- DISFIGURES
- RUBARE
- PLIABLE
- INTONING
- GREGORIAN
- BEETHOVEN
- UNSYMPATHETIC
- SPIRITUALIZING
- TIMBRE
- LIBERATING
- CHORAL
- EXTENSIONS
- ARABESQUES
- CANTILENA
- NOCTURNES
- FAERY
- POLONAISES
- VALSES
- MAZURKAS
- IMPROMPTUS
- RECAPTURING
- SCHUMANN
- PROUDEST
- BIE
- VESTURE
- ALGEBRAIC
- UTILITIES
- DEDUCTING
- EXPENDITURES
- STORING
- INDIRECT
- TRANSPORTING
- CONTRIBUTING
- UNFUNDED
- REDUCIBLE
- PARADOXES
- INCONSISTENTLY
- CAMPUS
- MULTIPLIES
- PRODUCER
- DREDGING
- COLLECTIVELY
- DITCHED
- DIKED
- HILLSIDES
- IMPROVES
- GROUPINGS
- UTILIZATION
- FELDSPAR
- HECKLING
- MINERAL
- COLLAPSES
- LATHE
- DYNAMOS
- CONVERTING
- GUSHED
- IOSKEHA
- SARAMA
- VEDA
- SANSCRIT
- RESCUES
- PACHACAMA
- MERITING
- PACHAYACHACHIC
- VIRACOCHA
- REBUS
- YOLCUAT
- RATTLESNAKE
- TOHIL
- RUMBLER
- HUEMAC
- DUALISM
- ANALOGUES
- TULA
- TLAPALLAN
- FIGURATIVELY
- PRECURSORS
- INDELIBLE
- NATTY
- HOLT
- FORESTRY
- DARLINT
- INSPECT
- WOODSMEN
- ILK
- TABLEAU
- GAWKING
- CHUB
- ADMONISHING
- GIRUL
- YOWLING
- SCURRY
- STOVES
- PINON
- STRAWBERRIES
- GROUSE
- GLENHOLDT
- BRONTOSAURUS
- TESCHALL
- HARKRUDDER
- BEL
- TINWARE
- GUY
- LULLS
- CUPLIKE
- GRUMBLINGLY
- HUNTER'S
- HAYNES'S
- WEAKLY
- CHIRK
- WINDFALL
- BREWSTER
- SECRETED
- BRISTLE
- GOLIAR
- BIFF
- BLACKTHORN
- SHILALEY
- CLYDE
- FRAID
- CAYUSE
- TEARIN
- RIBALD
- ZOO
- DESIGNATION
- UNPRETENTIOUS
- PEDANTIC
- LAMELY
- ALDENHAM
- CASTERBRIDGE
- HABAKKUK
- UNNATURALLY
- LYRICAL
- QUOTES
- BRAWN
- APRONED
- SHORTBREADS
- PILCHARDS
- GETHER
- GOLFING
- ATHENAEUM
- HOMER'S
- OOZE
- SIXPENNY
- ROBINSON'S
- WHIPS
- RACONTEUR
- INDIARUBBER
- HASTENS
- BUMPS
- ICH
- DIEN
- ARMAND'S
- SULLENLY
- SCHOONER'S
- CREEKS
- DIPLOMATIST'S
- APERTURE
- STOOLS
- FISHERMAN'S
- CITOYEN'S
- ENDANGERING
- PROBLEMATICAL
- PIMPERNEL
- PARALYZES
- VENTING
- MALICIOUSLY
- BELABOUR
- TRUMPS
- ENROLLED
- ANATHEMA
- OVERBURDENED
- STOLID
- ROSENBAUM
- FINANCES
- PERVERT
- INCULCATED
- DISORDERLY
- CONVENTICLE
- OBLATIONS
- LIBERALITIES
- INDULGENCES
- JUSTIFICATION
- UNEXHAUSTED
- RETAIL
- SCHISMATICS
- FALSITY
- IMPOSTURES
- LUTHERAN
- SCRUPLED
- VALID
- CHARTER
- DAMNABLE
- ANTICHRIST
- WHORE
- PROPAGATION
- SOVEREIGNS
- INVEIGHED
- ENCROACHING
- MONASTIC
- CONVENTS
- LIBERTINISM
- INVADER
- CONCUR
- WOLSEY
- BRUGES
- SEMBLED
- FLOW'RS
- LUCE
- SPARKLE
- EV'NING
- WORDLY
- FORESHADOWING
- TURGOT
- PRIESTLEY
- CONDORCET
- EXEMPLIFYING
- PERFECTIONISTS
- UNWROUGHT
- FORTUITOUS
- PRECONCERT
- VORTEX
- PREEMINENT
- ATTAINABLE
- TILLERS
- PROVERBIALLY
- INSTANCED
- INTUITION
- RADICALLY
- ELLISON'S
- AMASSED
- FACTO
- ABORTIVE
- SEABRIGHT
- EXTRAVAGANCES
- BUSYING
- MINISTERIAL
- VIRTU
- ENDOWING
- SUPERABUNDANCE
- ENAMORED
- EXPATIATING
- MULTIFORM
- MULTICOLOR
- SOLVING
- ARTISTICAL
- SCULPTURAL
- GLADDENS
- GENERALIZATION
- COMPEERS
- COLLOCATIONS
- ADAPTING
- RETIREMENTS
- CAPABILITIES
- INCONGRUITIES
- APPERTAINS
- ODYSSEY
- INFERNO
- PROMETHEUS
- SOPHISTS
- INCONTROVERTIBLE
- TECHNICALITY
- HARMONIZED
- DEFINITIVENESS
- INTELLIGENCES
- EMANATION
- EXEMPTION
- SCRUFF
- SAGES
- SHINTO
- YOSHIDA
- FUSHIMI
- KANJUJI
- PERQUISITE
- DAIMIOS
- PHARMACOPOEIA
- PEKING
- DECOCTION
- DYSENTERY
- ACUPUNCTURE
- VESICAL
- CALCULI
- CORROSIVE
- DECOCTIONS
- MUMMIES
- COMMENDED
- PULVERIZED
- CALCULUS
- PETRI
- ANDREAE
- MATTHIOLI
- PROFUSE
- BADGER'S
- POSTHUMOUS
- REPENTING
- FUSED
- THROTTLES
- PEST
- ERELONG
- FIEND'S
- JUSTINE
- MADDENING
- WREAK
- DEVIATING
- INVECTIVE
- HOVERS
- CHAMOIS
- INTIMIDATED
- PROPORTIONATE
- FLIT
- PRESIDE
- IMPASSIVE
- HOVELS
- WILDNESS
- RUGGEDNESS
- ADVERSARY'S
- INSTIGATED
- PROTRACTION
- UNFULFILLED
- SASSY
- EXTRICATING
- MISHAPS
- SCAPEGOAT
- DECLINES
- MEASLY
- IMPETUOUSLY
- OBJECTING
- STUBBORNLY
- ROWDY
- REENTER
- SCAPEGOATS
- DODGED
- TRAPPING
- MILTON'S
- AFTERTHOUGHT
- BULL'S
- WILL'S
- LUMBERMAN
- LUMBERJACKS
- PORTENDED
- HOOT
- LIMPED
- PALS
- DRUDGERY
- DASTARDLY
- TOTE
- TRE
- MENDOUS
- ANTLERS
- SHRILLED
- CAMERA
- WHIFFING
- FRISK
- FORERUNNER
- BUCKSHOT
- WOMANISH
- REDEEMED
- SWEATERS
- OUTDOORS
- EYETEETH
- MARCHLAND
- HARRIED
- AVARS
- THRACE
- PORE
- SHEAR
- DISUNITED
- AUSTERITY
- POLYTHEISTS
- SCOFF
- GIBBON
- CHEAPEST
- PROVINCIALS
- MONOPHYSITES
- JACOBITES
- GOVERNMENTAL
- RECONQUERED
- RECORDING
- GABATHA
- ITURAEA
- LEGIONS
- BOSTRA
- HIEROMAX
- KHALED
- EMESA
- HELIOPOLIS
- SOPHRONIUS
- SEBASTOPOLIS
- ACCOUNTANT
- EUNUCH
- FLAMBARD
- RUFUS
- FLAGRANT
- PAYERS
- UNPOPULAR
- IMPRISON
- DECIMATE
- DICTATORSHIP
- HELLAS
- SOPHIA
- ABDALMALIK
- BEFEL
- CRIMEA
- SEBASTOPOL
- SUZERAINTY
- AZOF
- INGRATIATED
- KHAN'S
- EUXINE
- WEATHERED
- TERBEL
- BULGARIAN
- UNREADY
- RELENTLESS
- CATHISMA
- HIPPODROME
- TRAMPLE
- APSIMARUS
- WREAKING
- MEANER
- ANARCHICAL
- DEMORALIZATION
- ADRAMMYTIUM
- PHRYGIA
- ANATOLIC
- CAPPADOCIA
- LYCAONIA
- AMORIUM
- BUREAUCRACY
- MONOTHELITE
- MONOPHYSITE
- SLAVS
- PERSECUTING
- PARDONED
- ACQUIESCING
- IMMURED
- ONUS
- KARL
- HONORIUS
- ROMULUS
- AUGUSTULUS
- EQUIPOISE
- ANTIQUARIES
- LONGMANS
- GIBBS
- PERVADING
- CIVILISING
- HEPTARCHY
- SPENSER
- GOODNESSE
- ABRIDGED
- CONDENSATION
- READABLE
- GRENA
- CORMAC'S
- GLOSSARY
- ALDER
- SPRIGS
- FENA
- ERIN
- CONALL
- KERNACH'S
- DERGA
- PLAITS
- SWORDLETS
- KELLS
- HAIRDRESSERS
- BARBERS
- MUSEUMS
- MANUFACTURING
- BADGERS
- EDGINGS
- EXPORTED
- DYEING
- CHEQUERED
- DOMNALL
- CONGAL
- SLEEVED
- LOOPS
- UNTANNED
- STITCHED
- THONGS
- TORQUES
- CRESCENTS
- GORGETS
- LARS
- TUSCANS
- ETRURIAN
- ECONOMIZE
- BEAUFORT
- FREEMAN
- FRISLEY
- SALTER
- MASSEY'S
- DORSEY
- COMMITTEE'S
- SAIRSVILLE
- CHUNKY
- GRUM
- OATEN
- CEREAL
- UNBREADLIKE
- DOUGH
- RUBBERY
- STIFFENS
- POROUS
- WHEATLESS
- CRUMBLY
- VITALLY
- LICENSED
- CRACKER
- RETAILER
- FIRMS
- SUBSTITUTES
- FLOURS
- DIETARY
- HOUSEHOLDS
- MACARONI
- NOODLES
- CONFORMING
- DELECTABLE
- PASTRIES
- BAKESHOPS
- SELFISHLY
- CONSERVE
- TRUSTS
- RUMBLES
- TRUMPET
- BRAYS
- LIVELONG
- MADRIGALS
- DOFFS
- ENERVATE
- CASQUE
- GRASPS
- EQUIPPING
- BUCKLERS
- HELMS
- ARRAYS
- STUYVESANTS
- REGIMENTAL
- CHIVALRIC
- PUMMEL
- SPIRITEDLY
- BELIE
- SWEDE
- PIETERZEN
- VRIE
- BOUSER
- POTATIONS
- POTTLE
- GUZZLING
- SWASHBUCKLERS
- CAROUSALS
- ROBUSTIOUS
- DISCOVERERS
- MANHATTOES
- MEASURER
- BROECK
- TESTY
- OUTSTRUT
- OUTSWELL
- COCKS
- BROADEST
- TRENCHERMAN
- SKIPPERS
- JACOBUS
- LARDERS
- JOLLIFICATION
- GORMANDISERS
- JAN
- NIEUW
- CRAVINGS
- PILON
- D'OR
- OBTUSE
- HARICOTS
- OLIVES
- ACUTELY
- GUSTO
- EMBALM
- BORING
- MINDFUL
- TENDEREST
- FIGS
- FILBERTS
- TOURS
- MUSCATEL
- HEATING
- OGRE
- ANISEED
- RIND
- WARRENS
- DELICIOUSLY
- SNORTED
- MOUSQUETON
- PAON
- PIERREFONDS
- BROWSED
- FELINE
- VALLON
- ANTWERP
- ARTESIAN
- ARTOIS
- FLORINS
- MENDS
- KNITS
- TUNNY
- SHOPFUL
- COATING
- BEDSTEADS
- CASTORS
- UPROARIOUS
- BIBBERS
- TRUCHEN'S
- MORDIOUX
- PROMINENTLY
- SLAUGHTERING
- STICKLER
- BRICKED
- OFFICIATING
- BEADLE
- KNEELS
- CHEVREUSE
- FRANCISCAN
- EXHAUST
- SLAPPING
- BOOTERS
- UNGRACIOUSLY
- RAGGING
- GRAVELLED
- ACCELERATOR
- WINDSHIELD
- ROADSTER
- SHACK
- STUCCOED
- CORRODED
- MUNITIONS
- FLAMBOYANT
- HOLLOWLY
- GRILLED
- BATHROBE
- TENSELY
- JADE
- INDESCRIBABLY
- GULL
- STARTLINGLY
- DEATHLY
- LACERATIONS
- EXPERIMENTALLY
- INSULATORS
- POTENTIOMETERS
- RHEOSTATS
- SWITCHES
- ELECTRODE
- SWITCHBOARD
- WIZARDRY
- PLUGGED
- PROPEL
- DEVILISHLY
- GADGETS
- GRIDLEY
- GESTURING
- VISUALIZED
- FILTERED
- IGLOOS
- ESKIMOS
- COLORFUL
- DALLIED
- TORPEDOES
- CINCTURED
- VISUALIZING
- KELP
- ARROWED
- WHETTED
- HAZILY
- FUSING
- TUG
- TERRA
- FIRMA
- HESITANTLY
- TIMER
- SHAKENLY
- SITTEN
- PROROGATION
- WRITS
- UNEXPERIENCED
- IMPOLITIC
- POTENTATES
- PRINCIPALITY
- ENDUED
- SUBSIDIES
- DEPENDANTS
- DISPROPORTIONED
- TENETS
- CABALS
- PROSECUTION
- SANDYS
- DIGGES
- ELLIOT
- SELDEN
- PYM
- METHODISED
- PROSECUTED
- GENIUSES
- EXTORTING
- ALLOWABLE
- COMPLIANT
- PURITANISM
- PETITIONED
- APPELLATIONS
- CADIZ
- UNDISCIPLINED
- REEMBARKED
- GALLEONS
- INTRUSTING
- IMPRUDENCE
- SHERIFFS
- INCAPACITATED
- ENOW
- PATRIOTS
- UNDISGUISED
- REDRESSING
- REINSTATING
- ABSENTING
- COVENTRY
- CONTUMACY
- RECRIMINATION
- GRANTS
- ACQUAINTING
- RIVETTED
- ENRAGE
- EXTOLLING
- CARLETON
- ANCIENTLY
- OVERTHREW
- BRINGETH
- TURBULENCY
- EXASPERATE
- PRECIPITANCY
- ARUNDEL
- DISQUALIFYING
- IMPOSITIONS
- CANVASSING
- NINTHS
- WREST
- PERFIDY
- UNDISPOSED
- COUNTERWORK
- RELIGIONISTS
- MEDIATION
- SOLICITATIONS
- BARKS
- REYNARD
- ROOSTED
- CENSUS
- TWOMBLY
- NEFARIOUS
- WADS
- RAISERS
- ROOSTS
- ALGEBRA
- CRISES
- WAD
- QUART
- MOUSEY
- BRIARS
- SQUEAKS
- HOLLER
- YELP
- WHACKS
- PHOTOS
- SKITTLES
- LANDLORDS
- ROYALLY
- KINLEY
- MASSILLON
- PHOTOGRAPHS
- KINLEY'S
- INAUGURATION
- HURRAHED
- DALTON
- HEZEKIAH
- BRIMLEY
- LOUNGERS
- PATRONIZED
- MACKEREL
- KITS
- OVERSHOES
- CHEWED
- HEZ'S
- CORNED
- CASTOR
- DIPPERSFUL
- CREMATED
- HORUM
- FYE
- MULGARRY
- BAPTISMAL
- SPONSORS
- TABLESPOON
- MUMPS
- PORTERHOUSE
- SLIVER
- POD'S
- TOUGHER
- SCARCITY
- SAWS
- IMBEDDED
- ORNITHOLOGISTS
- BOTANISTS
- ENTOMOLOGISTS
- WOODMAN
- SMELT
- ERUDITE
- WOODSMAN
- HUSTLE
- PASTURAGE
- HAIRBREADTH
- UNCALCULATED
- DUPLICATED
- CANDLELIGHT
- MEOWED
- DRAPING
- SEDUCTIVELY
- LAMPING
- VISAGES
- DISCARNATE
- DIABOLICALLY
- UNOPPOSED
- VICARIOUSLY
- RADIATIONS
- MAGICALLY
- PENDER
- FILTER
- UNSELFISHNESS
- MODULATED
- SIGILS
- COLLIE'S
- PURRINGS
- PAWED
- KNEADING
- LICKS
- HYMEN
- TAENARUM
- LACONIA
- CERBERUS
- PROSERPINA
- FLOCKING
- BEGUILE
- ORPHEUS'S
- MOCKS
- UNWEPT
- NAIADS
- REVELERS
- SINGER'S
- HEBRUS
- LESBOS
- SAPPHO
- STILLEST
- THOR'S
- THUNDERER
- BALDUR
- ASA
- GUNBOATS
- COMMODORE
- GOLDSBOROUGH
- RACERS
- BOATSWAIN
- SQUEALS
- INTONATIONS
- EMISSION
- STROKER
- HOSPITALITIES
- TOOTHSOME
- PIGGISH
- DIGESTION
- CUTICLE
- FRISKINESS
- DISGRUNTLED
- GATTY
- CATERPILLAR'S
- DISBELIEVE
- NETTLED
- LARK'S
- FEN
- SQUANDER
- CLOUT
- OVERTOPPED
- DEMOCRATICAL
- COMMONER
- MALIGNANTS
- ROUNDHEADS
- SIGNALIZE
- DISTINGUISHABLE
- ENSUING
- INDECENT
- INSTIGATING
- INVIDIOUS
- FONDEST
- SEDITION
- CALUMNIES
- MACHINATIONS
- TRAITOROUSLY
- ASPERSIONS
- IMPEACHED
- SCOTS
- INDEPENDENCY
- BREACHES
- HALBERTS
- GUILDHALL
- RESOUNDING
- PROJECTORS
- PANICS
- KINGSTON
- INFRINGE
- UNKNOWINGLY
- EXCEPTIONABLE
- PRELACY
- VIRULENT
- MOBBISH
- JUDICATURE
- RAVISH
- CONSULTATIONS
- LOANS
- TREASURERS
- MOIETY
- ABSENTED
- MACES
- GROTESQUERIE
- TIECK
- ACCOUNTING
- UNIQUITY
- UNMITIGATED
- CUE
- HERACLITUS
- EMERITUS
- UNPARDONABLE
- WHIMSICALITIES
- BUFFOONERIES
- SUFFUSE
- LINEAMENT
- SKEPTICAL
- ABSURDITIES
- MYSTIFIC
- INCUBUS
- ENGROSSING
- IMPRESSIVENESS
- AFFECTIONATENESS
- ELICITED
- RIDICULER
- FANFARONADE
- DUELLING
- BARON'S
- SERMONIC
- DUELLIST
- MOMENTLY
- DISCREDITABLE
- QUIZZICAL
- UNBENT
- MISCONCEIVED
- CADAVEROUSLY
- SPECIFICATION
- OBVIATED
- FAVYN
- BRANTOME'S
- DEROME
- DROLLEST
- CONSTRUCTIONEM
- CARTEL
- REFERRING
- EPISTLE
- AMICABLE
- QUIZZES
- REMISSNESS
- WRAITHED
- CHIVALROUSLY
- SENILE
- TIMELESS
- INDIVIDUALISM
- LIBERALISM
- LATCHKEYS
- COMPENSATIONS
- CONSOLATIONS
- MALARIAL
- VAPOROUS
- COWERED
- CLO'ES
- INSIDIOUSLY
- WELCOMES
- SATIRIZES
- MONKISH
- ENERY
- STUART'S
- DOTES
- ACY
- VERSATILE
- PHANTASMAL
- BLACKWOOD'S
- BRANDER
- MATTHEWS'S
- CANTERVILLE
- TRAVESTY
- ZANGWILL
- ENGAGINGLY
- KENDRICK
- LLOYD
- O'NEILL'S
- SOPHISTICATED
- MERCIFULLY
- SCURVILEGS
- HIGHNESS'S
- ASTRAKHAN
- UNLOVABLE
- WHISKER
- STROKABLE
- UNTAMABLE
- SCOLLOPS
- PERIWINKLE
- TWINES
- FARINA
- CUMBERSOME
- PEGASUS
- ROSINANTE
- HAWES
- EXECRATIONS
- URSULE
- FIACRE
- ABDUCTING
- SHINERS
- REVERY
- BRANDISHED
- BENVENUTO
- CELLINIS
- ASPIRES
- COINAGE
- UNSCREWED
- ENJOINING
- UNINTERRUPTEDLY
- COMMODE
- PARDIE
- PONINE
- BOOBIES
- PRESUMING
- HUCKABACK
- DIMITY
- GREYISH
- INDETERMINATE
- LAPPEL
- REFOLDED
- TINFOIL
- FIXTURE
- SAYN
- INDECOROUS
- DISPASSIONATE
- EXPLICITNESS
- REALIST
- DIAGRAM
- UNEMOTIONAL
- ABRASION
- GRADUATING
- BLOTCHY
- TUMID
- REDNESS
- STIPPLED
- SHOPMAN
- CONTUSIONS
- IMPACT
- BEGINNER
- BICYCLING
- CONCUSSIONS
- TREADLE
- BLISTERS
- ROEHAMPTON
- APPRENTICED
- DRAPER'S
- TAILLESS
- REFOLDING
- KNIGHTLY
- WASHABILITY
- UNFADING
- SPASMS
- STRAIGHTENS
- PRITCHARD
- ISAACS
- CLAMOURED
- BETTWS
- TRAMLINE
- VOCIFERATING
- GOV'NOR
- AWESTRICKEN
- COMATOSE
- DININGROOM
- SSSH
- CREAK
- CONVERSATIONAL
- BECHAMEL'S
- MONOMANIAC
- KNUCKLE
- FAIRISH
- EIGH
- MARVELLING
- KNICKERBOCKERS
- BLASPHEMIES
- SULLIED
- MIDHURST
- HASLEMERE
- GUILDFORD
- RIPLEY
- RAPIERS
- AGINCOURT
- ELOPEMENT
- VILLAS
- LAMPLIT
- SPIRE
- FOOTFALL
- SUBTILE
- TREADLES
- SHIMMER
- STARLIKE
- SPIRITUALISED
- TRANSFIGURING
- TURNINGS
- PROMPTITUDE
- SOTTO
- VOCE
- PATRONYMIC
- HUTCH
- CHRIS
- CONVERGE
- MOONLIGHT'S
- DEE
- THENKS
- SIS
- EYELASHES
- INCONSOLABLE
- TROUT'S
- MILLINERS
- DRESSMAKERS
- OSTRICH'S
- FINGER'S
- GODMOTHER'S
- SULKINESS
- BEADY
- RENOUNCE
- THWARTS
- GAOLER
- ADORNING
- BEMOANING
- AMETHYSTS
- LYNX
- WARBLED
- MINETTA
- GLOWERED
- LINNET
- POSTILION
- VALETS
- SOUSSIO'S
- JIGGLING
- SLEEKER
- TWINE
- RUMPLING
- HINDFOOT
- BUSHIEST
- SEDGES
- PARSNIPS
- FROLICKING
- MUSKRAT'S
- IRONWORK
- UNDERSIDES
- ESTUARY
- OBITUARY
- INMAN
- ISIS
- PARAMATTA
- LYONNAIS
- SEDIMENTATION
- STREAM'S
- COUNTERCURRENT
- BREAKUP
- BONEYARD
- BILLIONS
- LUMPFISH
- MONOGAMY
- EELPOUT
- MORAY
- WOLFFISH
- VIVIPAROUS
- GOBIO
- PRICKLES
- SCORPION
- BULLHEAD
- NODULES
- BANDED
- SNOUTED
- MULLET
- NORWEGIANS
- DEPOPULATED
- CODFISH
- GRAPPLING
- SPIDERWEB
- ALERTED
- GEARING
- TRANSMITTING
- METRIC
- RETRIEVED
- RESUBMERGED
- PADDING
- TEXTILE
- SHEATH
- SADOVA
- SEASHELL
- SEA'S
- BLANC
- FASTNET
- GRAVITATING
- LOWERMOST
- SCILLY
- GLOOMIER
- RELIVING
- HUNCH
- ZENITH
- FACILITATED
- VERTICAL
- BEACON'S
- BULGE
- ENSHROUDED
- SEASHELLS
- POYPE
- VERTRIEUX
- PRESTON
- D'ESTAING
- GRENADA
- GRASSE
- BREST
- STABEL
- PREENED
- GRUMPY
- BELOSTOMA'S
- FORELEG
- DUCKLING
- KATYDIDS
- WIGGLY
- LITTLER
- KATYDID
- UNHOOKED
- FROLICKED
- TEDDER
- CHANGEFUL
- INDISCRIBABLE
- FITTEST
- UMBRAGE
- HUSK
- AVAILETH
- UNRECEPTIVE
- INHARMONIOUS
- GLOOMED
- CURATE'S
- UNLOVED
- THEREFROM
- DUCO
- STAAL
- BABUINO
- MERCATO
- FIORI
- WORKROOM
- RUMMAGE
- HEADGEAR
- SUNLIT
- BAEDEKERS
- SATURN
- FLUTED
- DORIC
- IONIC
- BASILICAS
- PREEXISTENCE
- TOGA'D
- ARCHITRAVE
- UNWITTING
- FORTUITOUSNESS
- TITUS
- PREDESTINATION
- MISTILY
- GOLDS
- BAEDEKERED
- BUSTS
- TORSOS
- FRAILNESS
- ACTUALITY
- PRERAPHAELITE
- LIRE
- PETRARCH'S
- CORNELIE
- RETZ
- SAYINGS
- PIECEMEAL
- WOUTER
- TWILLER
- ACKNOWLEDGMENTS
- SALLUST
- WORTHIES
- LIVY
- WAYFARING
- COMPILED
- WINNOWING
- DISCARDING
- PITHY
- ENTHRALLED
- PIOUSLY
- REUNITE
- SHRILLING
- FANFARE
- LILTING
- DINNING
- FLARE'S
- RAUCOUS
- LIGHT'S
- BAND'S
- DIMMING
- GARDEN'S
- OVERLAID
- DEARTH
- BUCKWHEAT
- SHEEN
- MARGINAL
- JANGLED
- STARRING
- FIREFLIES
- ASPEN
- RAM'S
- STEAMS
- VOTARIES
- NUMBS
- ETCHES
- PATTERNED
- WHAN
- APRILLE
- MARSHES
- DOORYARDS
- RHYTHMED
- COUPLINGS
- SUNRISES
- LAPPING
- INQUISITORIAL
- GENERALITIES
- POLWARTH
- REFRACTION
- EXECRATION
- ACE
- DIVISIONS
- SCATTERINGS
- COUNSELLING
- SEVEIRITY
- HEARTLESSNESS
- BASCOMBE
- NOWISE
- TRUISMS
- RESTORATIVES
- DISMAYFULLY
- JILT
- EXECUTIONERS
- LINGARD
- SPECTRES
- LOATHES
- WINGFOLD'S
- GAWKY
- UNPOLISHED
- UNORNAMENTAL
- EMMELINE
- RAINDROPS
- SUNNING
- MEA
- LAGS
- WADING
- CUDDLED
- FIREBRANDS
- DIFFUSING
- BRIMFUL
- EMANCIPATOR
- INDOCTRINATE
- AMISTAD
- FREEDMEN
- BEFRIENDING
- OSTRACISED
- TAPPANS
- INVOICE
- FREEDMAN
- ODIUM
- OBLOQUY
- APPRIZED
- DICTATING
- AMANUENSIS
- NARRATE
- GA
- DOUBTFULNESS
- BIDDER
- JOURNEYMEN
- PURSUER
- EVERYONE'S
- GRATIS
- ORIOLE'S
- ORIOLES
- KITE
- KITE'S
- TITMOUSE
- CHECKERBERRY
- SASSAFRAS
- VARLETS
- JEERING
- SCANDALOUSLY
- OPERATIC
- TYROLEAN
- APING
- ADMONISH
- UNBEFITTING
- INTERLOCKING
- GIZZARD
- VOL
- COMPARTMENTS
- CAPERING
- FEEDERS
- SPEEDED
- RUDDERS
- STATEROOMS
- WHIZZER
- TRANSMITTERS
- WORRIMENT
- SWIFT'S
- BOWLED
- HATBAND
- YORKE'S
- FROCK'S
- FANNY'S
- CHAPERON
- GALLOWAY'S
- REV
- AFFLICTS
- CHANNINGS
- CONTRIBUTOR
- PRESCRIPTIVE
- UNCHECKED
- LEGALITIES
- CHURCHWARDEN'S
- ERUDITION
- EPISCOPACY
- HEARSAY
- ARROGATE
- SHELLINESS
- BROWNRIGG'S
- ADDICEHEAD
- DISTRAINT
- TEMPLETON'S
- RUBICUND
- QUENCH
- UNMINGLED
- PERPLEXES
- AUTHORIZES
- ORDINANCES
- DISSENTERS
- REPOSITORY
- TABERNACLES
- COMPULSORY
- PROCEEDETH
- PROSELYTIZE
- INTERLOPER
- COMMEMORATE
- CHURCHMAN
- MARSHMALLOWS
- COURTNEY
- KNAPP
- REMOTEST
- PHILEMON
- SUTHERLAND'S
- CALUMNY
- GAVEL
- CREDIBLE
- JURY'S
- PUSILLANIMITY
- BATSY'S
- PAVE
- REALISING
- RIGHTFULLY
- UNCALLED
- IMPROPRIETY
- PROVISO
- ZABEL
- PARTOOK
- COMMUNICABLE
- INGRATITUDES
- EMPHASISED
- INFERENCES
- MISDOINGS
- LOATHED
- TRICKING
- ESCAPADES
- CATECHUMEN
- RIVALLED
- GLADIATORIAL
- SMASHERS
- UPSETTERS
- OUTSTRIPPED
- GERMS
- TRUSTFULNESS
- STAINS
- SOLACING
- FRIENDLESS
- MOTHERLESS
- TOMBS
- GEOMETRY
- STUDENT'S
- EMPTINESS
- DISDAINED
- VAINGLORY
- PASSWORDS
- HONORATUS
- CONTENTIOUS
- THIRSTING
- MANICHEISM
- GYRE
- TRANSCEND
- CLARIONS
- HANDMAID
- VAGRANT
- THITHERWARD
- CALAHORRA
- PARAMOUR
- ESPOUSALS
- DOWERED
- POSSESSIVE
- DOMINIC
- JOANNA
- OSTIENSE
- TADDEO
- MANNA
- FADETH
- DRESSER
- DECIMAS
- QUAE
- SUNT
- PAUPERUM
- APOSTOLICAL
- RUNNELS
- ORBIT
- TWILL
- CASAL
- ACQUASPARTA
- AVOIDS
- ILLUMINATO
- AGOSTINO
- MANGIADOR
- RABANUS
- CALABRIAN
- PALADIN
- DISCOURSES
- BEGINNETH
- CHIANA
- MOVETH
- OUTSPEEDS
- CONCORDANT
- OPE
- FOUNT
- DISUNITES
- INTRINED
- POTENCIES
- IMMUTABLE
- SUPREMEST
- NOTEST
- TURNEST
- AFFIRMS
- DENIES
- RETURNETH
- SABELLIUS
- ARIUS
- HARBOUR'S
- ORISON
- LAMENTETH
- LIVETH
- CIRCUMSCRIBING
- BESTOWS
- SUBSISTENCES
- CIRCUMFERENCES
- ENKINDLED
- HOLOCAUST
- BESEEMED
- HELIOS
- GLIMMERS
- GALAXY
- CONSTELLATED
- QUADRANTS
- ENSAMPLE
- UPGATHERED
- RAPT
- LAUD
- POSTPONING
- BETHINKS
- RIGHTEOUSLY
- TIGHTEN
- DESPOILS
- CHANGETH
- ENDURETH
- CROSS'S
- ANCHISES
- ELYSIUM
- BENEDIGHT
- TRINE
- THINKEST
- GLADSOME
- SHOWEST
- SIMILITUDES
- DIVERSELY
- PINIONS
- E'EN
- GRANDSIRE
- BEHOVES
- SHOULDST
- TAKETH
- TIERCE
- NONES
- CORONAL
- SANDAL
- SHOON
- O'ERRUN
- SARDANAPALUS
- NERLI
- VECCHIO
- TROJANS
- LAPO
- SALTERELLO
- CINCINNATUS
- BAPTISTERY
- CACCIAGUIDA
- MORONTO
- ELISEO
- VAL
- PADO
- BEGIRT
- LAW'S
- PASTOR'S
- EXECRABLE
- FALLACIOUS
- LANGUISHES
- SHORTENS
- SHEARS
- PERSEVERES
- HARDIHOOD
- SHEEPFOLD
- QUICKENS
- REINFLAME
- RUNNETH
- SIGNA
- SIMIFONTE
- GRANDSIRES
- MONTEMURLO
- CERCHI
- ACONE
- VALDIGRIEVE
- BUONDELMONTI
- INTERMINGLING
- SURFEITS
- LUNI
- URBISAGLIA
- CHIUSI
- SINIGAGLIA
- BARES
- UGHI
- SANNELLA
- ARCA
- SOLDANIER
- ARDINGHI
- BOSTICHI
- RAVIGNANI
- GUIDO
- WHOE'ER
- BELLINCIONE
- PRESSA
- GALIGAJO
- POMMEL
- VAIR
- SACCHETTI
- GIUOCHI
- FIFANT
- BARUCCI
- GALLI
- CALFUCCI
- CURULE
- SIZII
- ARRIGUCCI
- ENFLOWERED
- CONSISTORY
- UBERTIN
- DONATO
- CAPONSACCO
- GIUDA
- INFANGATO
- DELLA
- PERA
- KEEPETH
- GUALTEROTTI
- IMPORTUNI
- UNFED
- EMA
- CAMEST
- BEHOVED
- MASQUERADED
- MEEKER
- HURSETON
- PLANTAGENET'S
- MALLESON'S
- PUPPET
- TABERLEY
- SNEERINGLY
- PITTANCE
- BLACKMAILING
- SPITFIRE
- UNDERSIZED
- CHERUB
- NIP
- CHRYSANTHEMUMS
- TOBACCONIST'S
- OSTENTATIOUSLY
- SNUFFLED
- TROWEL
- STEPSISTER
- DRUNKARD'S
- WASHINGTONIAN
- EULOGY
- SONNY
- IMBECILE
- GORILLA
- TURNCOAT
- DELEGATIONS
- COMET
- ANTIETAM
- FLATBOAT
- BUFFOONERY
- GENTRYVILLE
- CONSTABLE'S
- SEWARDS
- FREDERICKSBURG
- MEADE
- GUNBOAT
- BISECT
- DEDICATORY
- NOAH
- GARDNER'S
- EVERETT'S
- LISPING
- FLOWERTH
- PRETHIDENT
- COBBLER
- TANTRUM
- PIGEONHOLE
- ASSASSINATION
- INBRED
- NOTHER
- EVACUATION
- APPOMATTOX
- PROMULGATED
- OUTLINING
- PENDEL
- GURGLE
- ECSTACY
- ROMPING
- TADDIE
- SURGICAL
- INEQUITIES
- ARBITER
- WHILENESS
- FASTNESSES
- DEERLIKE
- GARMENTED
- KOREANS
- CEASELESSLY
- SPRINGY
- WEEDING
- RIVERBED
- PIGTAILS
- KUELIAN
- CHING
- SOWERS
- PEKIN
- PEDLARS
- CONDENSED
- MUGS
- HORSESHOERS
- SHAMPOOED
- TINS
- COGNAC
- INEFFICIENCY
- WORTHLESSNESS
- EPITOMIZES
- INTERMINABLY
- PREPARES
- ENDURES
- SQUADS
- SLIVERS
- BULLIED
- SEN
- ETHICS
- ADAPTABLE
- CHANG
- MANCHURIAN
- INSCRIBES
- DIVERGED
- MONGOL
- DIFFERENTIATIONS
- INFUSIONS
- SAMENESS
- MALAY
- AMBITIOUSLY
- NAPOLEONIC
- NIPPON
- BANZAI
- UNANIMITY
- DESPOIL
- SUBSTANTIALITY
- PLANET'S
- ORGANIZER
- BALKED
- INCOMPREHENSION
- TWISTS
- PRESTO
- DOZING
- AFRIKANDER
- UNAFRAID
- DUPLICATES
- RIVALLING
- INTERCHANGEABLE
- HIEROGLYPHICS
- THUMBED
- LUSTS
- VIOLENCES
- LOGARITHMS
- BETOKENS
- SEERS
- HARKED
- CONSTABLE
- REICHSSTAAT
- KULTURSTAAT
- INDIVIDUALIST
- PROMPTING
- CONQUEST'S
- POSTULATE
- ADVENTURING
- CHENG
- CYCLOPEAN
- SPOUTED
- BANKED
- POLLUTING
- DESPONDENT
- RASPING
- STOCKY
- OVERBEARING
- UNPROVOKED
- WHIZZING
- UPPERCUT
- DOMN
- MOPPING
- FLAPPIN
- LOOKY
- SORLEY
- NONPAREIL
- COALPIT
- MAGDALENE
- BOXCLOTH
- NOWT
- STANDIN
- RESPEC
- ASSISTANT'S
- BICEPS
- WILLIN
- SLAUGHTERER
- SHILLIN
- NIEF
- QUEERED
- QUEENSBERRY
- WRITIN
- BRADFORD
- SPARRING
- OGILVY
- MEDAL
- PROMOTERS
- EFFEMINACY
- LOR
- RUMMAGED
- THYSEL
- PUGILIST'S
- OWD
- LOITERERS
- MIDLANDER
- DUNN
- FERNIE
- WILLOX
- UNDISMAYED
- NORTON
- LEVI
- CONCEDING
- QUIRE
- BETTING
- UNTRIED
- UNDERRATE
- GUTTA
- PERCHA
- PUNCHED
- SLOGGER
- THOU'LT
- DEPRECATED
- FACER
- RUFFIANLY
- BLACKGUARDS
- PUND
- SCRATTED
- POTMAN
- CHEQUERS
- SPARRIN
- SPYIN
- DOAN'T
- BRAY
- BRAYED
- MAISTER'S
- FINISHER
- TURBLE
- DENOTES
- GRAEUBEN
- ANEROID
- THERMOMETERS
- SPADES
- HEADLAND
- BREAKFASTING
- GEYSER
- COMPUTATIONS
- OSCILLATIONS
- PERTINACIOUSLY
- DEFYING
- UNMEASURED
- SURTURBRAND
- REFITTED
- FEUDALISED
- CARAPACES
- RIDGED
- RESERVOIR
- SEDIMENTARY
- CONTORTED
- PREHISTORIC
- FOSSILS
- CUVIERS
- LEPTOTHERIA
- MERICOTHERIA
- LOPHIODIA
- ANOPLOTHERIA
- MEGATHERIA
- PROTOPITHECAE
- PTERODACTYLES
- BIBLIOMANIAC
- ALEXANDRIAN
- APOSTROPHE
- SAVANTS
- ABBEVILLE
- DEFENDANTS
- CUVIER
- MAXILLARIES
- GROTTOES
- GOLGOTHA
- BORDEAUX
- DESICCATED
- THOMASES
- PALAEONTOLOGY
- BARNUM
- KNEEPAN
- AJAX
- ORESTES
- SPARTANS
- ASTERIUS
- CUBITS
- TRAPANI
- POLYPHEMUS
- LUCERNE
- PLATER
- SCHEUCHZER'S
- ADAMITE
- CAMPET
- GIGANTOSTEOLOGIE
- MAMMOTH
- INCRUSTED
- SOLVENT
- OVOID
- CHEEKBONES
- PROGNATHISM
- JAPHETIC
- ECCENTRICITIES
- CLEFTS
- CATACOMB
- SCEPTICS
- COMMINGLED
- SLIDED
- PAPERED
- UNCHEERFUL
- UNPIN
- TARNISHED
- HASP
- UNFOUNDED
- PREPONDERATED
- FAGGOT
- AWFULNESS
- INSTANTS
- CELERITY
- UNSEARCHED
- LININGS
- EXEMPLIFICATION
- REKINDLING
- MAID'S
- UNDISCOVERED
- DOOR'S
- TEACHABLENESS
- LUCIDLY
- STAFFORDSHIRE
- DISENGAGED
- CONVERTS
- KNOLLS
- UNFIXED
- ENDEARS
- ENDEAR
- OVERDRAWN
- LASSITUDE
- REVERT
- TRANSFORMS
- REGRESSIVE
- FIRSTLY
- RETOLD
- FORMATIVELY
- SURROUNDS
- SELECTIVELY
- DISCARDS
- LICENTIOUS
- SUBSTANTIATED
- STIMULATOR
- DERIVATION
- PREDISPOSED
- SUPERSEDED
- GLOSSED
- ACCOMMODATES
- PROPINQUITY
- NURSERIES
- AROUSES
- HEMS
- APPROXIMATING
- RECEDES
- ISOLATES
- SACHS
- OVERESTIMATE
- OUTWORN
- PSYCHOANALYTIC
- PROSCRIBED
- INTERPRETATIONS
- DERIVATIONS
- CASTRATION
- INTIMIDATION
- SEXUALITY
- COMMENCES
- GENITALS
- DIFFERS
- HOMOSEXUALITY
- UNBRIDGABLE
- EXCREMENT
- ACCREDITS
- GENITAL
- POLYMORPHUS
- MISREPRESENTING
- ANSWERABLE
- FURTHERANCES
- REDISCOVER
- EVOLUTIONARY
- PSYCHICALLY
- INBREEDING
- DETERIORATE
- INCESTUOUS
- SLIGHTNESS
- REGRESSES
- DECEPTIVE
- UNDISTORTED
- UNINTERPRETED
- TRANSLATES
- REAWAKENS
- PREDOMINANCE
- ORIGINATES
- COMPLETES
- PROPOUND
- OUTGROWN
- CONVULSE
- FERMENTS
- CABAL
- PROVIDENT
- PREEXISTING
- CORRUPTING
- UNSAFE
- PRECARIOUSNESS
- CONSPIRING
- INTRUST
- UNQUALIFIED
- ADULATOR
- ARTIFICES
- COMPORTS
- DEPARTMENTS
- INDUED
- COEQUAL
- ASSIGNS
- DISBURSEMENT
- APPROPRIATIONS
- DEPUTIES
- NOMINATION
- ATTACHMENTS
- PERMANENCY
- EXCLUDING
- PECULATION
- EMOLUMENTS
- PROPENSITY
- ADAGE
- BAN
- FELLOWCITIZENS
- BANISHING
- ESSENTIALITY
- INTERDICTION
- NECESSITATE
- OPTION
- OBVIATE
- READMISSION
- COUNTERBALANCE
- RESENTMENTS
- DISABLING
- MATERIA
- PRIMA
- ANGERS
- COMICALITY
- PRECIOUSNESS
- INVALIDATING
- ANALYZED
- SUFFUSES
- INTERPENETRATES
- INTERRELATION
- IMPENETRABILITY
- RETICULATIONS
- TIGRESS
- INACTIVE
- ILLUSTRATES
- SUBJECTIVITY
- OBJECTIVITY
- CLASSIFICATIONS
- AMBIGUOUSLY
- CLASSING
- OBJECTORS
- CITING
- ADJECTIVES
- SANTAYANA
- OBJECTIFIED
- MASTERLY
- ESTHETIC
- RHETORICAL
- CONNOTING
- VERTIGO
- SIDIS
- GOODHART
- EQUIVOCALITY
- CONVENIENCES
- COEFFICIENTS
- DISPLACES
- ENGENDERING
- TRANSLOCATING
- GALILEO
- DESCARTES
- ATOMIC
- KANTIANS
- ILLUSORY
- TRANSLOCATION
- RATIONALISM
- MIND'
- ANTIPATHETIC
- PLOTTED
- AFFINITIES
- TENSIONS
- ANTHROPOMORPHIC
- DANGEROUSNESS
- VASCULAR
- DISCRETE
- EXTRACORPOREALLY
- SUBSERVE
- CONSECUTION
- FIXES
- INFALLIBLY
- SORTED
- WOOES
- PALMARY
- DETERMINATIONS
- SENSORIAL
- PERTURBATIONS
- INTROSPECTION
- HUTIBUDI
- CRUNCHING
- HEADMAN
- SAL
- MERAL
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_word_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
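For quick reference outside the recipe scripts, inference with this checkpoint is typically done through the `espnet_model_zoo` wrapper. The snippet below is a minimal sketch rather than part of the original card: it assumes the `espnet_model_zoo` and `soundfile` packages are installed, and `sample_16k.wav` is a hypothetical 16 kHz mono recording matching `fs: 16k` in the frontend config above.
```python
# Minimal inference sketch (assumes espnet_model_zoo and soundfile are installed).
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text
import soundfile

# Download and unpack the packed model, then build the inference wrapper
downloader = ModelDownloader()
speech2text = Speech2Text(**downloader.download_and_unpack("jkang/espnet2_librispeech_100_conformer_word"))

speech, rate = soundfile.read("sample_16k.wav")  # hypothetical 16 kHz mono file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```
Since the model uses `token_type: word`, the decoded hypotheses are sequences of the word tokens listed above.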
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech_100"]}
|
jkang/espnet2_librispeech_100_conformer_word
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:librispeech_100",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-librispeech_100 #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
ESPnet2 ASR model
-----------------
### 'jkang/espnet2\_librispeech\_100\_conformer\_word'
This model was trained by jaekookang using librispeech\_100 recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Feb 22 17:38:22 KST 2022'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.10.1'
* Git hash: 'e79e7185780b90e56618859855a038b4369b002c'
+ Commit date: 'Tue Feb 22 15:34:12 2022 +0900'
asr\_conformer\_lr2e-3\_warmup15k\_amp\_nondeterministic
--------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'jkang/espnet2\\_librispeech\\_100\\_conformer\\_word'\n\n\nThis model was trained by jaekookang using librispeech\\_100 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 22 17:38:22 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: 'e79e7185780b90e56618859855a038b4369b002c'\n\t+ Commit date: 'Tue Feb 22 15:34:12 2022 +0900'\n\n\nasr\\_conformer\\_lr2e-3\\_warmup15k\\_amp\\_nondeterministic\n--------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-librispeech_100 #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"### 'jkang/espnet2\\_librispeech\\_100\\_conformer\\_word'\n\n\nThis model was trained by jaekookang using librispeech\\_100 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 22 17:38:22 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: 'e79e7185780b90e56618859855a038b4369b002c'\n\t+ Commit date: 'Tue Feb 22 15:34:12 2022 +0900'\n\n\nasr\\_conformer\\_lr2e-3\\_warmup15k\\_amp\\_nondeterministic\n--------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
null |
espnet
|
## ESPnet2 DIAR model
### `jkang/espnet2_mini_librispeech_diar`
This model was trained by jaekookang using the mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e08a89e0a43db7fc12bec835c62a000ad10bd417
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_mini_librispeech_diar
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 16:41:16 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `e08a89e0a43db7fc12bec835c62a000ad10bd417`
- Commit date: `Sun Feb 6 18:54:20 2022 -0500`
## diar_train_diar_raw
### DER
dev_clean_2_ns2_beta2_500
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.0|31.39|
|result_th0.3_med1_collar0.0|31.78|
|result_th0.4_med11_collar0.0|29.99|
|result_th0.4_med1_collar0.0|30.61|
|result_th0.5_med11_collar0.0|29.28|
|result_th0.5_med1_collar0.0|30.19|
|result_th0.6_med11_collar0.0|29.50|
|result_th0.6_med1_collar0.0|30.66|
|result_th0.7_med11_collar0.0|30.90|
|result_th0.7_med1_collar0.0|32.38|
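The row names encode the decoding post-processing: `th*` is the threshold applied to the per-frame speaker-activity posteriors, `med*` is the length (in frames) of a median filter applied to the thresholded activities, and `collar0.0` means no scoring collar. The sketch below illustrates only that threshold-plus-median step on random placeholder posteriors; it is not the recipe's scoring script.
```python
# Rough illustration of the th*/med* post-processing; not the recipe's scoring code.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
posteriors = rng.random((1000, 2))  # placeholder (frames x 2 speakers) activity posteriors

threshold = 0.5   # as in result_th0.5_*
kernel = 11       # as in *_med11: median filter length in frames (use 1 for no smoothing)

activity = (posteriors > threshold).astype(float)
smoothed = np.stack([medfilt(activity[:, spk], kernel) for spk in range(activity.shape[1])], axis=1)
print(smoothed.shape)  # (1000, 2) binary speaker-activity decisions per frame
```
In the table above, the best DER (29.28) comes from threshold 0.5 with an 11-frame median filter.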
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
attractor: null
attractor_conf: {}
required:
- output_dir
version: 0.10.6a1
distributed: false
```
</details>
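The `*_data_path_and_name_and_type` entries in the config above pair each `wav.scp` with speaker labels of type `rttm`. As a point of reference, standard RTTM `SPEAKER` lines can be read with a few lines of Python; this is a generic sketch, not ESPnet's own loader for its `espnet_rttm` files.
```python
# Generic reader for standard RTTM SPEAKER lines; not ESPnet's own espnet_rttm loader.
def read_rttm(path):
    segments = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] != "SPEAKER":
                continue
            # SPEAKER <rec-id> <chan> <onset> <duration> <NA> <NA> <speaker> ...
            rec_id, onset, dur, speaker = fields[1], float(fields[3]), float(fields[4]), fields[7]
            segments.append((rec_id, speaker, onset, onset + dur))
    return segments
```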
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "diarization"], "datasets": ["mini_librispeech"]}
|
jkang/espnet2_mini_librispeech_diar
| null |
[
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #diarization #dataset-mini_librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 DIAR model
------------------
### 'jkang/espnet2\_mini\_librispeech\_diar'
This model was trained by jaekookang using mini\_librispeech recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Feb 8 16:41:16 KST 2022'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.6a1'
* pytorch version: 'pytorch 1.10.1'
* Git hash: 'e08a89e0a43db7fc12bec835c62a000ad10bd417'
+ Commit date: 'Sun Feb 6 18:54:20 2022 -0500'
diar\_train\_diar\_raw
----------------------
### DER
dev\_clean\_2\_ns2\_beta2\_500
DIAR config
-----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'jkang/espnet2\\_mini\\_librispeech\\_diar'\n\n\nThis model was trained by jaekookang using mini\\_librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 8 16:41:16 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: 'e08a89e0a43db7fc12bec835c62a000ad10bd417'\n\t+ Commit date: 'Sun Feb 6 18:54:20 2022 -0500'\n\n\ndiar\\_train\\_diar\\_raw\n----------------------",
"### DER\n\n\ndev\\_clean\\_2\\_ns2\\_beta2\\_500\n\n\n\nDIAR config\n-----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #diarization #dataset-mini_librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'jkang/espnet2\\_mini\\_librispeech\\_diar'\n\n\nThis model was trained by jaekookang using mini\\_librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 8 16:41:16 KST 2022'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: 'e08a89e0a43db7fc12bec835c62a000ad10bd417'\n\t+ Commit date: 'Sun Feb 6 18:54:20 2022 -0500'\n\n\ndiar\\_train\\_diar\\_raw\n----------------------",
"### DER\n\n\ndev\\_clean\\_2\\_ns2\\_beta2\\_500\n\n\n\nDIAR config\n-----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
fill-mask
|
transformers
|
# LitBERTa uncased model
Not the best model because of limited resources (trained on ~4.7 GB of data on an RTX 2070 8GB for ~10 days), but it covers the special Lithuanian symbols `ąčęėįšųūž`. A 128K vocabulary was chosen because the language has many word forms.
## How to use
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='jkeruotis/LitBERTa-uncased')
unmasker('lietuvių kalba yra viena iš <mask> kalbų pasaulyje.')
[{'sequence': 'lietuvių kalba yra viena iš populiariausių kalbų pasaulyje.',
'score': 0.13887910544872284,
'token': 9404,
'token_str': ' populiariausių'},
{'sequence': 'lietuvių kalba yra viena iš pirmaujančių kalbų pasaulyje.',
'score': 0.13532795011997223,
'token': 27431,
'token_str': ' pirmaujančių'},
{'sequence': 'lietuvių kalba yra viena iš seniausių kalbų pasaulyje.',
'score': 0.1184583529829979,
'token': 14775,
'token_str': ' seniausių'},
{'sequence': 'lietuvių kalba yra viena iš geriausių kalbų pasaulyje.',
'score': 0.09306756407022476,
'token': 5617,
'token_str': ' geriausių'},
{'sequence': 'lietuvių kalba yra viena iš nedaugelio kalbų pasaulyje.',
'score': 0.08187634497880936,
'token': 28150,
 'token_str': ' nedaugelio'}]
```
|
{"language": "lt", "license": "mit", "tags": ["exbert"]}
|
jkeruotis/LitBERTa-uncased
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"lt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"lt"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #fill-mask #exbert #lt #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# LitBERTa uncased model
Not the best model because of limited resources (Trained on ~4.7 GB of data on RTX2070 8GB for ~10 days) but it covers special lithuanian symbols 'ąčęėįšųūž'. 128K vocabulary chosen because language has a lot of word forms.
## How to use
|
[
"# LitBERTa uncased model\n\nNot the best model because of limited resources (Trained on ~4.7 GB of data on RTX2070 8GB for ~10 days) but it covers special lithuanian symbols 'ąčęėįšųūž'. 128K vocabulary chosen because language has a lot of word forms.",
"## How to use"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #fill-mask #exbert #lt #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# LitBERTa uncased model\n\nNot the best model because of limited resources (Trained on ~4.7 GB of data on RTX2070 8GB for ~10 days) but it covers special lithuanian symbols 'ąčęėįšųūž'. 128K vocabulary chosen because language has a lot of word forms.",
"## How to use"
] |
question-answering
|
transformers
|
# XLNet Fine-tuned on SQuAD / Quoref Dataset
[XLNet](https://arxiv.org/abs/1906.08237), jointly developed by Google and CMU, was fine-tuned on [SQuAD / SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) and [Quoref](https://leaderboard.allenai.org/quoref) for the question answering downstream task.
## Evaluation Result on Quoref
```
{
"exact_match": 73.65591397848462,
"f1": 77.9981532789881
}
```
## Results Comparison on Quoref
| Metric | XLNet Base Line | Model FT on SQuAD |
| ------ | --------- | --------- |
| **EM** | **61.88** | **73.66** (+11.78) |
| **F1** | **70.51** | **78.00** (+7.49)|
## How to Use
```python
from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast
model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref')
tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-cased-squad-quoref')
```
|
{}
|
jkgrad/xlnet-base-cased-squad-quoref
| null |
[
"transformers",
"pytorch",
"xlnet",
"question-answering",
"arxiv:1906.08237",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1906.08237"
] |
[] |
TAGS
#transformers #pytorch #xlnet #question-answering #arxiv-1906.08237 #endpoints_compatible #region-us
|
XLNet Fine-tuned on SQuAD / Quoref Dataset
==========================================
XLNet jointly developed by Google and CMU and fine-tuned on SQuAD / SQuAD 2.0 and Quoref for question answering down-stream task.
Evaluation Result on Quoref
---------------------------
Results Comparison on Quoref
----------------------------
Metric: EM, XLNet Base Line: 61.88, Model FT on SQuAD: 73.66 (+11.78)
Metric: F1, XLNet Base Line: 70.51, Model FT on SQuAD: 78.00 (+7.49)
How to Use
----------
|
[] |
[
"TAGS\n#transformers #pytorch #xlnet #question-answering #arxiv-1906.08237 #endpoints_compatible #region-us \n"
] |
question-answering
|
transformers
|
# XLNet Fine-tuned on SQuAD 2.0 Dataset
[XLNet](https://arxiv.org/abs/1906.08237), jointly developed by Google and CMU, was fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the question answering downstream task.
## Training Results (Metrics)
```
{
"HasAns_exact": 74.7132253711201
"HasAns_f1": 82.11971607032643
"HasAns_total": 5928
"NoAns_exact": 73.38940285954584
"NoAns_f1": 73.38940285954584
"NoAns_total": 5945
"best_exact": 75.67590331003116
"best_exact_thresh": -19.554906845092773
"best_f1": 79.16215426779269
"best_f1_thresh": -19.554906845092773
"epoch": 4.0
"exact": 74.05036637749515
"f1": 77.74830934598614
"total": 11873
}
```
## Results Comparison
| Metric | Paper | Model |
| ------ | --------- | --------- |
| **EM** | **78.46** | **75.68** (-2.78) |
| **F1** | **81.33** | **79.16** (-2.17)|
Better fine-tuned models coming soon.
## How to Use
```python
from transformers import XLNetForQuestionAnswering, XLNetTokenizerFast
model = XLNetForQuestionAnswering.from_pretrained('jkgrad/xlnet-base-squadv2')
tokenizer = XLNetTokenizerFast.from_pretrained('jkgrad/xlnet-base-squadv2')
```
|
{}
|
jkgrad/xlnet-base-squadv2
| null |
[
"transformers",
"pytorch",
"xlnet",
"question-answering",
"arxiv:1906.08237",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1906.08237"
] |
[] |
TAGS
#transformers #pytorch #xlnet #question-answering #arxiv-1906.08237 #endpoints_compatible #region-us
|
XLNet Fine-tuned on SQuAD 2.0 Dataset
=====================================
XLNet jointly developed by Google and CMU and fine-tuned on SQuAD 2.0 for question answering down-stream task.
Training Results (Metrics)
--------------------------
Results Comparison
------------------
Metric: EM, Paper: 78.46, Model: 75.68 (-2.78)
Metric: F1, Paper: 81.33, Model: 79.16 (-2.17)
Better fine-tuned models coming soon.
How to Use
----------
|
[] |
[
"TAGS\n#transformers #pytorch #xlnet #question-answering #arxiv-1906.08237 #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5280
- Accuracy: 0.9395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
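For orientation, the settings above map roughly onto a `transformers` `TrainingArguments` object as sketched below; this is a hypothetical reconstruction, not the author's actual training script, and the `output_dir` value is assumed.
```python
# Hypothetical reconstruction of the hyperparameters listed above; not the author's script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment-model-sample",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```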
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "metrics": ["accuracy"], "model-index": [{"name": "sentiment-model-sample", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93948, "name": "Accuracy"}]}]}]}
|
jkhan447/sentiment-model-sample
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# sentiment-model-sample
This model is a fine-tuned version of bert-base-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5280
- Accuracy: 0.9395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# sentiment-model-sample\n\nThis model is a fine-tuned version of bert-base-uncased on the imdb dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5280\n- Accuracy: 0.9395",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# sentiment-model-sample\n\nThis model is a fine-tuned version of bert-base-uncased on the imdb dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5280\n- Accuracy: 0.9395",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
null |
transformers
|
### electra-ka is the first of its kind: a Transformer-based, open-source Georgian language model.
The model was trained on 33 GB of Georgian text collected from 4,854,621 pages of the Common Crawl archive.
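As a hedged loading sketch (assuming the repository ships tokenizer files alongside the weights, which the card does not confirm), the checkpoint can be loaded with the generic Auto classes:
```python
# Hedged sketch: load the checkpoint with the generic Auto classes.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jnz/electra-ka")
model = AutoModel.from_pretrained("jnz/electra-ka")

inputs = tokenizer("ქართული ენა", return_tensors="pt")  # "Georgian language"
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```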
|
{}
|
jnz/electra-ka
| null |
[
"transformers",
"pytorch",
"electra",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #endpoints_compatible #region-us
|
### electra-ka is first of its kind, Transformer based, open source Georgian language model.
The model is trained on 33GB of Georgian text collected from 4854621 pages in commoncrowl archive.
|
[
"### electra-ka is first of its kind, Transformer based, open source Georgian language model.\n\n\nThe model is trained on 33GB of Georgian text collected from 4854621 pages in commoncrowl archive."
] |
[
"TAGS\n#transformers #pytorch #electra #endpoints_compatible #region-us \n",
"### electra-ka is first of its kind, Transformer based, open source Georgian language model.\n\n\nThe model is trained on 33GB of Georgian text collected from 4854621 pages in commoncrowl archive."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100_2epochs
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6279
- Train Accuracy: 0.6824
- Validation Loss: 0.7791
- Validation Accuracy: 0.2667
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
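The optimizer dictionary above (shared by the related cards that follow) corresponds to a standard Keras Adam configuration, roughly as reconstructed below; this is an illustration, not the author's training code.
```python
# Illustrative reconstruction of the optimizer dict above; not the author's training code.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    clipnorm=1.0,   # clip each gradient to a maximum norm of 1.0
)
```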
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7045 | 0.4882 | 0.7236 | 0.2667 | 0 |
| 0.6279 | 0.6824 | 0.7791 | 0.2667 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "BERT_Tweet_Sentiment_100_2epochs", "results": []}]}
|
joe5campbell/BERT_Tweet_Sentiment_100_2epochs
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT\_Tweet\_Sentiment\_100\_2epochs
====================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.6279
* Train Accuracy: 0.6824
* Validation Loss: 0.7791
* Validation Accuracy: 0.2667
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100k_2eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1259
- Train Accuracy: 0.9542
- Validation Loss: 0.6133
- Validation Accuracy: 0.8315
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3330 | 0.8562 | 0.3847 | 0.8415 | 0 |
| 0.1259 | 0.9542 | 0.6133 | 0.8315 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "BERT_Tweet_Sentiment_100k_2eps", "results": []}]}
|
joe5campbell/BERT_Tweet_Sentiment_100k_2eps
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT\_Tweet\_Sentiment\_100k\_2eps
==================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.1259
* Train Accuracy: 0.9542
* Validation Loss: 0.6133
* Validation Accuracy: 0.8315
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_10k
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3891
- Train Accuracy: 0.8273
- Validation Loss: 0.4749
- Validation Accuracy: 0.8073
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3891 | 0.8273 | 0.4749 | 0.8073 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "BERT_Tweet_Sentiment_10k", "results": []}]}
|
joe5campbell/BERT_Tweet_Sentiment_10k
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT\_Tweet\_Sentiment\_10k
===========================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.3891
* Train Accuracy: 0.8273
* Validation Loss: 0.4749
* Validation Accuracy: 0.8073
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_50k_2eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1131
- Train Accuracy: 0.9596
- Validation Loss: 0.6972
- Validation Accuracy: 0.8229
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3420 | 0.8511 | 0.4293 | 0.8299 | 0 |
| 0.1131 | 0.9596 | 0.6972 | 0.8229 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "BERT_Tweet_Sentiment_50k_2eps", "results": []}]}
|
joe5campbell/BERT_Tweet_Sentiment_50k_2eps
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT\_Tweet\_Sentiment\_50k\_2eps
=================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.1131
* Train Accuracy: 0.9596
* Validation Loss: 0.6972
* Validation Accuracy: 0.8229
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_50k_5eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0256
- Train Accuracy: 0.9913
- Validation Loss: 0.8905
- Validation Accuracy: 0.8291
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3395 | 0.8513 | 0.4071 | 0.8372 | 0 |
| 0.1095 | 0.9606 | 0.6561 | 0.8291 | 1 |
| 0.0487 | 0.9835 | 0.7597 | 0.8304 | 2 |
| 0.0329 | 0.9890 | 0.7814 | 0.8273 | 3 |
| 0.0256 | 0.9913 | 0.8905 | 0.8291 | 4 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "BERT_Tweet_Sentiment_50k_5eps", "results": []}]}
|
joe5campbell/BERT_Tweet_Sentiment_50k_5eps
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT\_Tweet\_Sentiment\_50k\_5eps
=================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.0256
* Train Accuracy: 0.9913
* Validation Loss: 0.8905
* Validation Accuracy: 0.8291
* Epoch: 4
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_TEST
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5541
- Train Accuracy: 0.9375
- Validation Loss: 0.6546
- Validation Accuracy: 1.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6902 | 0.625 | 0.6677 | 1.0 | 0 |
| 0.5541 | 0.9375 | 0.6546 | 1.0 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "BERT_Tweet_Sentiment_TEST", "results": []}]}
|
joe5campbell/BERT_Tweet_Sentiment_TEST
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT\_Tweet\_Sentiment\_TEST
============================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.5541
* Train Accuracy: 0.9375
* Validation Loss: 0.6546
* Validation Accuracy: 1.0
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ROBERTA_Tweet_Sentiment_50_2eps
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6625
- Train Accuracy: 0.6310
- Validation Loss: 0.8607
- Validation Accuracy: 0.25
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7325 | 0.4762 | 0.7489 | 0.25 | 0 |
| 0.6625 | 0.6310 | 0.8607 | 0.25 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "ROBERTA_Tweet_Sentiment_50_2eps", "results": []}]}
|
joe5campbell/ROBERTA_Tweet_Sentiment_50_2eps
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
ROBERTA\_Tweet\_Sentiment\_50\_2eps
===================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.6625
* Train Accuracy: 0.6310
* Validation Loss: 0.8607
* Validation Accuracy: 0.25
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ROBERTA_Tweet_Sentiment_50k_2eps
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3553
- Train Accuracy: 0.8504
- Validation Loss: 0.5272
- Validation Accuracy: 0.7652
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5213 | 0.7298 | 0.4817 | 0.7715 | 0 |
| 0.3553 | 0.8504 | 0.5272 | 0.7652 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "ROBERTA_Tweet_Sentiment_50k_2eps", "results": []}]}
|
joe5campbell/ROBERTA_Tweet_Sentiment_50k_2eps
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
ROBERTA\_Tweet\_Sentiment\_50k\_2eps
====================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.3553
* Train Accuracy: 0.8504
* Validation Loss: 0.5272
* Validation Accuracy: 0.7652
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEST
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4904
- Train Accuracy: 0.9375
- Validation Loss: 0.7016
- Validation Accuracy: 0.5
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6954 | 0.5 | 0.7286 | 0.5 | 0 |
| 0.4904 | 0.9375 | 0.7016 | 0.5 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "TEST", "results": []}]}
|
joe5campbell/TEST
| null |
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
TEST
====
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 0.4904
* Train Accuracy: 0.9375
* Validation Loss: 0.7016
* Validation Accuracy: 0.5
* Epoch: 1
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\_rate': 3e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.8.0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning\\_rate': 3e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.8.0\n* Tokenizers 0.11.0"
] |
zero-shot-classification
|
transformers
|
# bart-large-mnli-yahoo-answers
## Model Description
This model takes [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before.
You can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli), [here](https://huggingface.co/zero-shot/).
## Intended Usage
This model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use `hypothesis_template="This text is about {}."` as this is the template used during fine-tuning.
For settings other than topic classification, you can use any model pre-trained on MNLI such as [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or [roberta-large-mnli](https://huggingface.co/roberta-large-mnli) with the same code as written below.
#### With the zero-shot classification pipeline
The model can be used with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
nlp = pipeline("zero-shot-classification", model="joeddav/bart-large-mnli-yahoo-answers")
sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics", "elections"]
hypothesis_template = "This text is about {}."
nlp(sequence_to_classify, candidate_labels, multi_class=True, hypothesis_template=hypothesis_template)
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
import torch
from transformers import BartForSequenceClassification, BartTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
nli_model = BartForSequenceClassification.from_pretrained('joeddav/bart-large-mnli-yahoo-answers').to(device)
tokenizer = BartTokenizer.from_pretrained('joeddav/bart-large-mnli-yahoo-answers')

# the sequence to classify and a candidate topic label
sequence = "Who are you voting for in 2020?"
label = "politics"

premise = sequence
hypothesis = f'This text is about {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     max_length=tokenizer.model_max_length,
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
## Training
The model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html). That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: `This text is about {class name}.` For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model which must predict which labels are valid and which are false.
Since this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are "Society & Culture", "Health", "Computers & Internet", "Business & Finance", and "Family & Relationships".
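To make the training setup concrete, here is a rough sketch of how a single labeled example could be turned into one entailed and one non-entailed premise/hypothesis pair (the helper name, example text, and return format are illustrative assumptions, not the authors' code):

```python
import random

# The five "seen" Yahoo Answers labels used for fine-tuning.
seen_labels = ["Society & Culture", "Health", "Computers & Internet",
               "Business & Finance", "Family & Relationships"]

def make_nli_pairs(sequence, true_label):
    false_label = random.choice([l for l in seen_labels if l != true_label])
    return [
        (sequence, f"This text is about {true_label}.", 1),   # entailed hypothesis
        (sequence, f"This text is about {false_label}.", 0),  # non-entailed hypothesis
    ]

pairs = make_nli_pairs("How do I speed up a slow laptop?", "Computers & Internet")
```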
## Evaluation Results
This model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of `.68` and `.72` for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in [Yin et al. 2019](https://arxiv.org/abs/1909.00161) and [our blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
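As an illustration of that adjustment (the label ordering, the example probabilities, and the plain argmax afterwards are assumptions, not taken from the paper):

```python
import numpy as np

# 10 Yahoo Answers classes; assume the first 5 are the "seen" training labels.
probs = np.array([0.30, 0.20, 0.15, 0.10, 0.05, 0.06, 0.05, 0.04, 0.03, 0.02])
seen = np.array([True] * 5 + [False] * 5)

adjusted = probs.copy()
adjusted[seen] -= 0.30  # subtract a fixed 30% from the seen-label probabilities
predicted_label_idx = int(np.argmax(adjusted))
```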
|
{"language": "en", "tags": ["text-classification", "pytorch"], "datasets": ["yahoo-answers"], "pipeline_tag": "zero-shot-classification"}
|
joeddav/bart-large-mnli-yahoo-answers
| null |
[
"transformers",
"pytorch",
"jax",
"bart",
"text-classification",
"zero-shot-classification",
"en",
"dataset:yahoo-answers",
"arxiv:1909.00161",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.00161"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bart #text-classification #zero-shot-classification #en #dataset-yahoo-answers #arxiv-1909.00161 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# bart-large-mnli-yahoo-answers
## Model Description
This model takes facebook/bart-large-mnli and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before.
You can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned facebook/bart-large-mnli, here.
## Intended Usage
This model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use 'hypothesis_template="This text is about {}."' as this is the template used during fine-tuning.
For settings other than topic classification, you can use any model pre-trained on MNLI such as facebook/bart-large-mnli or roberta-large-mnli with the same code as written below.
#### With the zero-shot classification pipeline
The model can be used with the 'zero-shot-classification' pipeline like so:
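```python
from transformers import pipeline
nlp = pipeline("zero-shot-classification", model="joeddav/bart-large-mnli-yahoo-answers")
sequence_to_classify = "Who are you voting for in 2020?"
candidate_labels = ["Europe", "public health", "politics", "elections"]
hypothesis_template = "This text is about {}."
nlp(sequence_to_classify, candidate_labels, multi_class=True, hypothesis_template=hypothesis_template)
```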
#### With manual PyTorch
## Training
The model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in Yin et al. 2019 and this blog post. That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: 'This text is about {class name}.' For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model which must predict which labels are valid and which are false.
Since this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are "Society & Culture", "Health", "Computers & Internet", "Business & Finance", and "Family & Relationships".
## Evaluation Results
This model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of '.68' and '.72' for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in Yin et al. 2019 and our blog post.
|
[
"# bart-lage-mnli-yahoo-answers",
"## Model Description\n\nThis model takes facebook/bart-large-mnli and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before.\n\nYou can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned facebook/bart-large-mnli, here.",
"## Intended Usage\n\nThis model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use 'hypothesis_template=\"This text is about {}.\"' as this is the template used during fine-tuning.\n\nFor settings other than topic classification, you can use any model pre-trained on MNLI such as facebook/bart-large-mnli or roberta-large-mnli with the same code as written below.",
"#### With the zero-shot classification pipeline\n\nThe model can be used with the 'zero-shot-classification' pipeline like so:",
"#### With manual PyTorch",
"## Training\n\nThe model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in Yin et al. 2019 and this blog post. That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: 'This text is about {class name}.' For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model which must predict which labels are valid and which are false.\n\nSince this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are \"Society & Culture\", \"Health\", \"Computers & Internet\", \"Business & Finance\", and \"Family & Relationships\".",
"## Evaluation Results\n\nThis model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of '.68' and '.72' for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in Yin et al. 2019 and our blog post."
] |
[
"TAGS\n#transformers #pytorch #jax #bart #text-classification #zero-shot-classification #en #dataset-yahoo-answers #arxiv-1909.00161 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# bart-lage-mnli-yahoo-answers",
"## Model Description\n\nThis model takes facebook/bart-large-mnli and fine-tunes it on Yahoo Answers topic classification. It can be used to predict whether a topic label can be assigned to a given sequence, whether or not the label has been seen before.\n\nYou can play with an interactive demo of this zero-shot technique with this model, as well as the non-finetuned facebook/bart-large-mnli, here.",
"## Intended Usage\n\nThis model was fine-tuned on topic classification and will perform best at zero-shot topic classification. Use 'hypothesis_template=\"This text is about {}.\"' as this is the template used during fine-tuning.\n\nFor settings other than topic classification, you can use any model pre-trained on MNLI such as facebook/bart-large-mnli or roberta-large-mnli with the same code as written below.",
"#### With the zero-shot classification pipeline\n\nThe model can be used with the 'zero-shot-classification' pipeline like so:",
"#### With manual PyTorch",
"## Training\n\nThe model is a pre-trained MNLI classifier further fine-tuned on Yahoo Answers topic classification in the manner originally described in Yin et al. 2019 and this blog post. That is, each sequence is fed to the pre-trained NLI model in place of the premise and each candidate label as the hypothesis, formatted like so: 'This text is about {class name}.' For each example in the training set, a true and a randomly-selected false label hypothesis are fed to the model which must predict which labels are valid and which are false.\n\nSince this method studies the ability to classify unseen labels after being trained on a different set of labels, the model is only trained on 5 out of the 10 labels in Yahoo Answers. These are \"Society & Culture\", \"Health\", \"Computers & Internet\", \"Business & Finance\", and \"Family & Relationships\".",
"## Evaluation Results\n\nThis model was evaluated with the label-weighted F1 of the _seen_ and _unseen_ labels. That is, for each example the model must predict from one of the 10 corpus labels. The F1 is reported for the labels seen during training as well as the labels unseen during training. We found an F1 score of '.68' and '.72' for the unseen and seen labels, respectively. In order to adjust for the in-vs-out of distribution labels, we subtract a fixed amount of 30% from the normalized probabilities of the _seen_ labels, as described in Yin et al. 2019 and our blog post."
] |
text-classification
|
transformers
|
# distilbert-base-uncased-agnews-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It is the result of the demo notebook
[here](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing), where more details
about the model can be found.
- Teacher model: [roberta-large-mnli](https://huggingface.co/roberta-large-mnli)
- Teacher hypothesis template: `"This text is about {}."`
## Intended Usage
The model can be used like any other model trained on AG's News, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student.
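For example, it can be loaded with the standard `text-classification` pipeline (the example text is one of the card's widget inputs; the exact label names returned depend on the model's config):

```python
from transformers import pipeline

# Load the distilled student as a plain supervised classifier.
classifier = pipeline("text-classification",
                      model="joeddav/distilbert-base-uncased-agnews-student")

print(classifier("Dow falls more than 100 points after disappointing jobs data"))
```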
|
{"language": "en", "license": "mit", "tags": ["text-classification", "pytorch", "tensorflow"], "datasets": ["ag_news"], "widget": [{"text": "Armed conflict has been a near-constant policial and economic burden."}, {"text": "Tom Brady won his seventh Super Bowl last night."}, {"text": "Dow falls more than 100 points after disappointing jobs data"}, {"text": "A new moon has been discovered in Jupter's orbit."}]}
|
joeddav/distilbert-base-uncased-agnews-student
| null |
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"tensorflow",
"en",
"dataset:ag_news",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #distilbert #text-classification #tensorflow #en #dataset-ag_news #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-uncased-agnews-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using this
script.
It is the result of the demo notebook
here, where more details
about the model can be found.
- Teacher model: roberta-large-mnli
- Teacher hypothesis template: '"This text is about {}."'
## Intended Usage
The model can be used like any other model trained on AG's News, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student.
|
[
"# distilbert-base-uncased-agnews-student",
"## Model Description\n\nThis model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using this\nscript.\nIt is the result of the demo notebook\nhere, where more details\nabout the model can be found.\n\n- Teacher model: roberta-large-mnli\n- Teacher hypothesis template: '\"This text is about {}.\"'",
"## Intended Usage\n\nThe model can be used like any other model trained on AG's News, but will likely not perform as well as a model\ntrained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model\ncan be distilled to a more efficient student."
] |
[
"TAGS\n#transformers #pytorch #tf #distilbert #text-classification #tensorflow #en #dataset-ag_news #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-agnews-student",
"## Model Description\n\nThis model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using this\nscript.\nIt is the result of the demo notebook\nhere, where more details\nabout the model can be found.\n\n- Teacher model: roberta-large-mnli\n- Teacher hypothesis template: '\"This text is about {}.\"'",
"## Intended Usage\n\nThe model can be used like any other model trained on AG's News, but will likely not perform as well as a model\ntrained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model\ncan be distilled to a more efficient student."
] |
text-classification
|
transformers
|
# distilbert-base-uncased-go-emotions-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using [this
script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation).
It was trained with mixed precision for 10 epochs and otherwise used the default script arguments.
## Intended Usage
The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note
that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label
classification to create pseudo-labels.
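For example, it can be loaded with the standard `text-classification` pipeline (the example text is the card's widget input; the returned label names depend on the model's config):

```python
from transformers import pipeline

# Load the distilled student as a plain single-label emotion classifier.
classifier = pipeline("text-classification",
                      model="joeddav/distilbert-base-uncased-go-emotions-student")

print(classifier("I feel lucky to be here."))
```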
|
{"language": "en", "license": "mit", "tags": ["text-classification", "pytorch", "tensorflow"], "datasets": ["go_emotions"], "widget": [{"text": "I feel lucky to be here."}]}
|
joeddav/distilbert-base-uncased-go-emotions-student
| null |
[
"transformers",
"pytorch",
"tf",
"distilbert",
"text-classification",
"tensorflow",
"en",
"dataset:go_emotions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #distilbert #text-classification #tensorflow #en #dataset-go_emotions #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# distilbert-base-uncased-go-emotions-student
## Model Description
This model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using this
script.
It was trained with mixed precision for 10 epochs and otherwise used the default script arguments.
## Intended Usage
The model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model
trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model
can be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note
that although the GoEmotions dataset allows multiple labels per instance, the teacher used single-label
classification to create pseudo-labels.
|
[
"# distilbert-base-uncased-go-emotions-student",
"## Model Description\n\nThis model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using this\nscript.\nIt was trained with mixed precision for 10 epochs and otherwise used the default script arguments.",
"## Intended Usage\n\nThe model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model\ntrained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model\ncan be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note\nthat although the GoEmotions dataset allow multiple labels per instance, the teacher used single-label \nclassification to create psuedo-labels."
] |
[
"TAGS\n#transformers #pytorch #tf #distilbert #text-classification #tensorflow #en #dataset-go_emotions #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# distilbert-base-uncased-go-emotions-student",
"## Model Description\n\nThis model is distilled from the zero-shot classification pipeline on the unlabeled GoEmotions dataset using this\nscript.\nIt was trained with mixed precision for 10 epochs and otherwise used the default script arguments.",
"## Intended Usage\n\nThe model can be used like any other model trained on GoEmotions, but will likely not perform as well as a model\ntrained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model\ncan be distilled to a more efficient student, allowing a classifier to be trained with only unlabeled data. Note\nthat although the GoEmotions dataset allow multiple labels per instance, the teacher used single-label \nclassification to create psuedo-labels."
] |
zero-shot-classification
|
transformers
|
# xlm-roberta-large-xnli
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu
Since the base model was pre-trained on 100 different languages, the
model has shown some effectiveness in languages beyond those listed above as
well. See the full list of pre-trained languages in appendix A of the
[XLM-RoBERTa paper](https://arxiv.org/abs/1911.02116).
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="joeddav/xlm-roberta-large-xnli")
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the Russian translation of, "Who are you voting for in 2020?"
sequence_to_classify = "За кого вы голосуете в 2020 году?"
# we can specify candidate labels in Russian or any other language above:
candidate_labels = ["Europe", "public health", "politics"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['politics', 'Europe', 'public health'],
# 'scores': [0.9048484563827515, 0.05722189322113991, 0.03792969882488251],
# 'sequence': 'За кого вы голосуете в 2020 году?'}
```
The default hypothesis template is the English `This text is {}`. If you are working strictly within one language, it
may be worthwhile to translate it into the language you are working with:
```python
sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# {'labels': ['política', 'Europa', 'salud pública'],
# 'scores': [0.9109585881233215, 0.05954807624220848, 0.029493311420083046],
# 'sequence': '¿A quién vas a votar en 2020?'}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
nli_model = AutoModelForSequenceClassification.from_pretrained('joeddav/xlm-roberta-large-xnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('joeddav/xlm-roberta-large-xnli')

# the sequence to classify and a candidate label (in any supported language)
sequence = "За кого вы голосуете в 2020 году?"
label = "politics"

premise = sequence
hypothesis = f'This example is {label}.'

# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]

# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
```
## Training
This model was pre-trained on a set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on the concatenated
MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI
data where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for
each example come from the same original English example but the premise and hypothesis are of different languages.
|
{"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "mit", "tags": ["text-classification", "pytorch", "tensorflow"], "datasets": ["multi_nli", "xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "\u0417\u0430 \u043a\u043e\u0433\u043e \u0432\u044b \u0433\u043e\u043b\u043e\u0441\u0443\u0435\u0442\u0435 \u0432 2020 \u0433\u043e\u0434\u0443?", "candidate_labels": "politique \u00e9trang\u00e8re, Europe, \u00e9lections, affaires, politique", "multi_class": true}, {"text": "\u0644\u0645\u0646 \u062a\u0635\u0648\u062a \u0641\u064a 2020\u061f", "candidate_labels": "\u0627\u0644\u0633\u064a\u0627\u0633\u0629 \u0627\u0644\u062e\u0627\u0631\u062c\u064a\u0629, \u0623\u0648\u0631\u0648\u0628\u0627, \u0627\u0644\u0627\u0646\u062a\u062e\u0627\u0628\u0627\u062a, \u0627\u0644\u0623\u0639\u0645\u0627\u0644, \u0627\u0644\u0633\u064a\u0627\u0633\u0629", "multi_class": true}, {"text": "2020'de kime oy vereceksiniz?", "candidate_labels": "d\u0131\u015f politika, Avrupa, se\u00e7imler, ticaret, siyaset", "multi_class": true}]}
|
joeddav/xlm-roberta-large-xnli
| null |
[
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"tensorflow",
"zero-shot-classification",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:1911.02116",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.02116"
] |
[
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur"
] |
TAGS
#transformers #pytorch #tf #xlm-roberta #text-classification #tensorflow #zero-shot-classification #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #dataset-multi_nli #dataset-xnli #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# xlm-roberta-large-xnli
## Model Description
This model takes xlm-roberta-large and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face ZeroShotClassificationPipeline.
## Intended Usage
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- English
- French
- Spanish
- German
- Greek
- Bulgarian
- Russian
- Turkish
- Arabic
- Vietnamese
- Thai
- Chinese
- Hindi
- Swahili
- Urdu
Since the base model was pre-trained on 100 different languages, the
model has shown some effectiveness in languages beyond those listed above as
well. See the full list of pre-trained languages in appendix A of the
XLM-RoBERTa paper.
For English-only classification, it is recommended to use
bart-large-mnli or
a distilled bart MNLI model.
#### With the zero-shot classification pipeline
The model can be loaded with the 'zero-shot-classification' pipeline like so:
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
The default hypothesis template is the English, 'This text is {}'. If you are working strictly within one language, it
may be worthwhile to translate this to the language you are working with:
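```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
```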
#### With manual PyTorch
## Training
This model was pre-trained on a set of 100 languages, as described in
the original paper. It was then fine-tuned on the task of NLI on the concatenated
MNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI
data where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for
each example come from the same original English example but the premise and hypothesis are of different languages.
|
[
"# xlm-roberta-large-xnli",
"## Model Description\n\nThis model takes xlm-roberta-large and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face ZeroShotClassificationPipeline.",
"## Intended Usage\n\nThis model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:\n\n- English\n- French\n- Spanish\n- German\n- Greek\n- Bulgarian\n- Russian\n- Turkish\n- Arabic\n- Vietnamese\n- Thai\n- Chinese\n- Hindi\n- Swahili\n- Urdu\n\nSince the base model was pre-trained trained on 100 different languages, the\nmodel has shown some effectiveness in languages beyond those listed above as\nwell. See the full list of pre-trained languages in appendix A of the\nXLM Roberata paper\n\nFor English-only classification, it is recommended to use\nbart-large-mnli or\na distilled bart MNLI model.",
"#### With the zero-shot classification pipeline\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\n\nYou can then classify in any of the above languages. You can even pass the labels in one language and the sequence to\nclassify in another:\n\n\n\nThe default hypothesis template is the English, 'This text is {}'. If you are working strictly within one language, it\nmay be worthwhile to translate this to the language you are working with:",
"#### With manual PyTorch",
"## Training\n\nThis model was pre-trained on set of 100 languages, as described in\nthe original paper. It was then fine-tuned on the task of NLI on the concatenated\nMNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI\ndata where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for\neach example come from the same original English example but the premise and hypothesis are of different languages."
] |
[
"TAGS\n#transformers #pytorch #tf #xlm-roberta #text-classification #tensorflow #zero-shot-classification #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #dataset-multi_nli #dataset-xnli #arxiv-1911.02116 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# xlm-roberta-large-xnli",
"## Model Description\n\nThis model takes xlm-roberta-large and fine-tunes it on a combination of NLI data in 15 languages. It is intended to be used for zero-shot text classification, such as with the Hugging Face ZeroShotClassificationPipeline.",
"## Intended Usage\n\nThis model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on XNLI, which is a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:\n\n- English\n- French\n- Spanish\n- German\n- Greek\n- Bulgarian\n- Russian\n- Turkish\n- Arabic\n- Vietnamese\n- Thai\n- Chinese\n- Hindi\n- Swahili\n- Urdu\n\nSince the base model was pre-trained trained on 100 different languages, the\nmodel has shown some effectiveness in languages beyond those listed above as\nwell. See the full list of pre-trained languages in appendix A of the\nXLM Roberata paper\n\nFor English-only classification, it is recommended to use\nbart-large-mnli or\na distilled bart MNLI model.",
"#### With the zero-shot classification pipeline\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\n\nYou can then classify in any of the above languages. You can even pass the labels in one language and the sequence to\nclassify in another:\n\n\n\nThe default hypothesis template is the English, 'This text is {}'. If you are working strictly within one language, it\nmay be worthwhile to translate this to the language you are working with:",
"#### With manual PyTorch",
"## Training\n\nThis model was pre-trained on set of 100 languages, as described in\nthe original paper. It was then fine-tuned on the task of NLI on the concatenated\nMNLI train set and the XNLI validation and test sets. Finally, it was trained for one additional epoch on only XNLI\ndata where the translations for the premise and hypothesis are shuffled such that the premise and hypothesis for\neach example come from the same original English example but the premise and hypothesis are of different languages."
] |
text2text-generation
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21895237
- CO2 Emissions (in grams): 1.5688902203257171
## Validation Metrics
- Loss: 1.6614878177642822
- Rouge1: 32.4158
- Rouge2: 24.6194
- RougeL: 29.9278
- RougeLsum: 29.4988
- Gen Len: 58.7778
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/joehdownardkainos/autonlp-intent-modelling-21895237
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["joehdownardkainos/autonlp-data-intent-modelling"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 1.5688902203257171}
|
joehdownardkainos/autonlp-intent-modelling-21895237
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"unk",
"dataset:joehdownardkainos/autonlp-data-intent-modelling",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-joehdownardkainos/autonlp-data-intent-modelling #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21895237
- CO2 Emissions (in grams): 1.5688902203257171
## Validation Metrics
- Loss: 1.6614878177642822
- Rouge1: 32.4158
- Rouge2: 24.6194
- RougeL: 29.9278
- RougeLsum: 29.4988
- Gen Len: 58.7778
## Usage
You can use cURL to access this model:
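```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/joehdownardkainos/autonlp-intent-modelling-21895237
```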
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 21895237\n- CO2 Emissions (in grams): 1.5688902203257171",
"## Validation Metrics\n\n- Loss: 1.6614878177642822\n- Rouge1: 32.4158\n- Rouge2: 24.6194\n- RougeL: 29.9278\n- RougeLsum: 29.4988\n- Gen Len: 58.7778",
"## Usage\n\nYou can use cURL to access this model:"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-joehdownardkainos/autonlp-data-intent-modelling #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 21895237\n- CO2 Emissions (in grams): 1.5688902203257171",
"## Validation Metrics\n\n- Loss: 1.6614878177642822\n- Rouge1: 32.4158\n- Rouge2: 24.6194\n- RougeL: 29.9278\n- RougeLsum: 29.4988\n- Gen Len: 58.7778",
"## Usage\n\nYou can use cURL to access this model:"
] |
text-classification
|
transformers
|
# bert-base-uncased-sem_eval_2010_task_8
Task: sem_eval_2010_task_8
Base Model: bert-base-uncased
Trained for 3 epochs
Batch-size: 6
Seed: 42
Test F1-Score: 0.8
|
{}
|
joelniklaus/bert-base-uncased-sem_eval_2010_task_8
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-uncased-sem_eval_2010_task_8
Task: sem_eval_2010_task_8
Base Model: bert-base-uncased
Trained for 3 epochs
Batch-size: 6
Seed: 42
Test F1-Score: 0.8
|
[
"# bert-base-uncased-sem_eval_2010_task_8\n\nTask: sem_eval_2010_task_8\n\nBase Model: bert-base-uncased\n\nTrained for 3 epochs\n\nBatch-size: 6\n\nSeed: 42\n\nTest F1-Score: 0.8"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-uncased-sem_eval_2010_task_8\n\nTask: sem_eval_2010_task_8\n\nBase Model: bert-base-uncased\n\nTrained for 3 epochs\n\nBatch-size: 6\n\nSeed: 42\n\nTest F1-Score: 0.8"
] |
token-classification
|
transformers
|
# distilbert-base-german-cased-ler
Task: ler
Base Model: distilbert-base-german-cased
Trained for 3 epochs
Batch-size: 12
Seed: 42
Test F1-Score: 0.936
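As a usage sketch (the example sentence and aggregation setting are illustrative assumptions):

```python
from transformers import pipeline

# Repo id as listed on this card.
ner = pipeline("token-classification",
               model="joelniklaus/distilbert-based-german-cased-ler",
               aggregation_strategy="simple")

print(ner("Der Bundesgerichtshof hat am 1. Januar 2020 entschieden."))
```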
|
{}
|
joelniklaus/distilbert-based-german-cased-ler
| null |
[
"transformers",
"pytorch",
"tf",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-german-cased-ler
Task: ler
Base Model: distilbert-base-german-cased
Trained for 3 epochs
Batch-size: 12
Seed: 42
Test F1-Score: 0.936
|
[
"# distilbert-base-german-cased-ler\n\nTask: ler\n\nBase Model: distilbert-base-german-cased\n\nTrained for 3 epochs\n\nBatch-size: 12\n\nSeed: 42\n\nTest F1-Score: 0.936"
] |
[
"TAGS\n#transformers #pytorch #tf #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-german-cased-ler\n\nTask: ler\n\nBase Model: distilbert-base-german-cased\n\nTrained for 3 epochs\n\nBatch-size: 12\n\nSeed: 42\n\nTest F1-Score: 0.936"
] |
token-classification
|
transformers
|
# gbert-base-ler
Task: ler
Base Model: deepset/gbert-base
Trained for 3 epochs
Batch-size: 6
Seed: 42
Test F1-Score: 0.956
|
{}
|
joelniklaus/gbert-base-ler
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
# gbert-base-ler
Task: ler
Base Model: deepset/gbert-base
Trained for 3 epochs
Batch-size: 6
Seed: 42
Test F1-Score: 0.956
|
[
"# gbert-base-ler\n\nTask: ler\n\nBase Model: deepset/gbert-base\n\nTrained for 3 epochs\n\nBatch-size: 6\n\nSeed: 42\n\nTest F1-Score: 0.956"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# gbert-base-ler\n\nTask: ler\n\nBase Model: deepset/gbert-base\n\nTrained for 3 epochs\n\nBatch-size: 6\n\nSeed: 42\n\nTest F1-Score: 0.956"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# POCTS
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0970
- Rouge1: 26.1391
- Rouge2: 7.3101
- Rougel: 19.1217
- Rougelsum: 21.9706
- Gen Len: 46.2245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 3.0
- mixed_precision_training: Native AMP
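As a rough sketch, these settings map approximately onto Hugging Face `Seq2SeqTrainingArguments` (the output directory is a placeholder; this is an approximation, not the authors' training script):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pocts-bart-large",       # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    num_train_epochs=3.0,
    fp16=True,                           # mixed_precision_training: Native AMP
)
```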
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3259 | 1.0 | 33875 | 3.2535 | 17.942 | 4.5143 | 14.2766 | 15.582 | 19.3901 |
| 2.9764 | 2.0 | 67750 | 3.1278 | 18.6558 | 5.1844 | 15.0939 | 16.3367 | 19.9174 |
| 2.5889 | 3.0 | 101625 | 3.0970 | 19.1763 | 5.4517 | 15.5342 | 16.7186 | 19.8855 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1+cu110
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization"], "metrics": ["rouge"]}
|
jogonba2/POCTS
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bart #text2text-generation #summarization #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
POCTS
=====
This model is a fine-tuned version of facebook/bart-large on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.0970
* Rouge1: 26.1391
* Rouge2: 7.3101
* Rougel: 19.1217
* Rougelsum: 21.9706
* Gen Len: 46.2245
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.15
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.7.1+cu110
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.15\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1+cu110\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #summarization #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.15\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1+cu110\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |